WO2022227514A1 - An earphone - Google Patents


Info

Publication number
WO2022227514A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
noise
microphone
earphone
ear
Application number
PCT/CN2021/131927
Other languages
English (en)
French (fr)
Inventor
郑金波
张承乾
肖乐
廖风云
齐心
Original Assignee
深圳市韶音科技有限公司
Priority claimed from PCT/CN2021/089670 (published as WO2022226696A1)
Priority claimed from PCT/CN2021/109154 (published as WO2022022618A1)
Application filed by 深圳市韶音科技有限公司
Priority to JP2022580472A (published as JP2023532489A)
Priority to KR1020227044224A (published as KR20230013070A)
Priority to BR112022023372A (published as BR112022023372A2)
Priority to EP21938133.2A (published as EP4131997A4)
Priority to TW111111172A (published as TW202243486A)
Priority to US18/047,639 (published as US20230063283A1)
Publication of WO2022227514A1

Classifications

    • H04R3/005: Circuits for combining the signals of two or more microphones
    • G10K11/17823: Reference signals, e.g. ambient acoustic environment
    • G10K11/17857: Geometric disposition, e.g. placement of microphones
    • G10K11/17873: General system configurations using a reference signal without an error signal, e.g. pure feedforward
    • H04R1/1008: Earpieces of the supra-aural or circum-aural type
    • H04R1/1066: Constructional aspects of the interconnection between earpiece and earpiece support
    • H04R1/1083: Reduction of ambient noise
    • H04R1/1091: Details not provided for in groups H04R1/1008 - H04R1/1083
    • H04R1/406: Obtaining desired directional characteristics by combining a number of identical microphones
    • H04R3/02: Circuits for preventing acoustic reaction, i.e. acoustic oscillatory feedback
    • H04R3/04: Circuits for correcting frequency response
    • H04R9/06: Loudspeakers of the moving-coil type
    • G10K2210/1081: Earphones, e.g. for telephones, ear protectors or headsets
    • G10K2210/3023: Estimation of noise, e.g. on error signals
    • G10K2210/30231: Sources, e.g. identifying noisy processes or components
    • G10K2210/3025: Determination of spectrum characteristics, e.g. FFT
    • G10K2210/30351: Identification of the environment for applying appropriate model characteristics
    • G10K2210/3038: Neural networks
    • G10K2210/3047: Prediction, e.g. of future values of noise
    • G10K2210/3056: Variable gain
    • H04R1/1041: Mechanical or electronic switches, or control elements
    • H04R1/105: Earpiece supports, e.g. ear hooks
    • H04R1/1075: Mountings of transducers in earphones or headphones
    • H04R2420/07: Applications of wireless loudspeakers or wireless microphones
    • H04R2460/01: Hearing devices using active noise cancellation
    • H04R2460/09: Non-occlusive ear tips, i.e. leaving the ear canal open
    • H04R2460/11: Vents in ear tips of hearing devices to prevent occlusion
    • H04R2460/13: Hearing devices using bone conduction transducers

Definitions

  • the present application relates to the field of acoustics, and in particular, to an earphone.
  • Active noise cancellation is a technique that cancels ambient noise by using the earphone's speaker to output sound waves in anti-phase with the external ambient noise.
  • Earphones can generally be divided into two categories: in-ear earphones and open earphones.
  • In-ear earphones block the user's ear canal during use, and long-term wearing is prone to cause sensations of blockage, a foreign body, or pain.
  • Open earphones leave the user's ears unblocked, which is conducive to long-term wearing, but when external noise is loud the noise reduction effect is limited, which degrades the user's listening experience.
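The anti-phase principle described above can be sketched numerically. This is a toy model under illustrative assumptions (the signal, sample rate, and perfect alignment are invented for the sketch); a real earphone must estimate the noise and compensate for acoustic paths rather than invert it exactly.

```python
import numpy as np

# Toy sketch of active noise cancellation: the speaker emits the
# inverse (anti-phase) of the sensed ambient noise, so the two waves
# sum to zero at the listening point. Real systems only approximate
# this, since the noise must be estimated and the speaker-to-ear
# acoustic path compensated.
fs = 8000                                   # sample rate (Hz), illustrative
t = np.arange(fs) / fs
noise = 0.5 * np.sin(2 * np.pi * 200 * t)   # simulated ambient noise
anti_phase = -noise                         # ideal noise reduction signal
residual = noise + anti_phase               # what the ear would hear

print(float(np.max(np.abs(residual))))      # → 0.0
```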
  • An embodiment of the present application provides an earphone, comprising: a fixing structure configured to fix the earphone at a position near a user's ear without blocking the user's ear canal, the fixing structure comprising a hook portion and a body portion, wherein, when the user wears the earphone, the hook portion is hung between a first side of the user's ear and the head, and the body portion contacts a second side of the ear; a first microphone array, located in the body portion, configured to pick up ambient noise; a processor, located in the hook portion or the body portion, configured to use the first microphone array to estimate the sound field at a target spatial position and to generate a noise reduction signal based on the estimation; and a speaker, located in the body portion, configured to output a target signal according to the noise reduction signal.
  • the body portion includes a connecting portion and a retaining portion, wherein the retaining portion contacts the second side of the ear when the user wears the earphone, and the connecting portion connects the hook portion and the retaining portion.
  • when the user wears the earphone, the connecting portion extends from the first side of the ear to the second side of the ear; the connecting portion cooperates with the hook portion to provide the retaining portion with a pressing force against the second side of the ear, and cooperates with the retaining portion to provide the hook portion with a pressing force against the first side of the ear.
  • in a direction from a first connection point between the hook portion and the connecting portion to a free end of the hook portion, the hook portion bends toward the first side of the ear and forms a first contact point with the first side of the ear, and the retaining portion forms a second contact point with the second side of the ear, wherein the distance between the first contact point and the second contact point along the extension direction of the connecting portion in a natural state is smaller than that distance in the wearing state, whereby the retaining portion provides a pressing force against the second side of the ear and the hook portion provides a pressing force against the first side of the ear.
  • the hook portion bends toward the head in a direction from the first connection point between the hook portion and the connecting portion to the free end of the hook portion, and forms a first contact point and a third contact point with the head, wherein the first contact point is located between the third contact point and the first connection point, so that the hook portion forms a lever structure with the first contact point as a fulcrum; through the lever structure, the force directed away from the head that the head exerts at the third contact point is converted into a force directed toward the head at the first connection point, which, via the connecting portion, provides the retaining portion with a pressing force against the second side of the ear.
  • the speaker is disposed on the retaining portion, and the retaining portion is a multi-segment structure, so that the relative position of the speaker within the overall structure of the earphone can be adjusted.
  • the retaining portion includes a first retaining segment, a second retaining segment, and a third retaining segment connected end to end in sequence; the end of the first retaining segment facing away from the second retaining segment is connected to the connecting portion, the second retaining segment is folded back relative to the first retaining segment with a spacing so that the first retaining segment and the second retaining segment form a U-shaped structure, and the speaker is arranged on the third retaining segment.
  • the retaining portion includes a first retaining segment, a second retaining segment, and a third retaining segment connected end to end in sequence; the end of the first retaining segment facing away from the second retaining segment is connected to the connecting portion, the second retaining segment is bent relative to the first retaining segment, the third retaining segment and the first retaining segment are arranged side by side with a spacing, and the speaker is arranged on the third retaining segment.
  • a sound outlet is provided on a side of the retaining portion facing the ear, so that the target signal output by the speaker is transmitted to the ear through the sound outlet.
  • a side of the retaining portion facing the ear includes a first area and a second area; the first area is provided with the sound outlet, and the second area, which is farther from the connecting portion than the first area, protrudes toward the ear compared with the first area, so that the sound outlet is spaced from the ear in the wearing state.
  • the distance between the sound outlet and the user's ear canal is less than 10 mm.
  • a pressure relief hole is provided on a side of the retaining portion that, along the vertical axis direction, is close to the top of the user's head, and the pressure relief hole is farther from the user's ear canal than the sound outlet.
  • the distance between the pressure relief hole and the user's ear canal is 5 mm to 15 mm.
  • the angle between the line connecting the pressure relief hole and the sound outlet and the thickness direction of the retaining portion is 0° to 50°.
  • the pressure relief hole and the sound outlet form an acoustic dipole, and the first microphone array is disposed in a first target area, the first target area being an acoustic null of the sound field radiated by the dipole.
  • the first microphone array is located at the connecting portion.
  • the line between the first microphone array and the sound outlet and the line between the sound outlet and the pressure relief hole form a first included angle; the line between the first microphone array and the pressure relief hole and the line between the sound outlet and the pressure relief hole form a second included angle; and the difference between the first included angle and the second included angle is not more than 30°.
  • there is a first distance between the first microphone array and the sound outlet and a second distance between the first microphone array and the pressure relief hole, and the difference between the first distance and the second distance is not more than 6 mm.
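The geometric constraints above place the feedforward microphone near the acoustic null of the dipole formed by the sound outlet and the pressure relief hole, where the two anti-phase sources cancel. A minimal free-field sketch of why equal path lengths matter (positions, frequency, and the point-source model are illustrative assumptions):

```python
import numpy as np

# Two anti-phase point sources stand in for the sound outlet (+) and
# the pressure relief hole (-). At any point equidistant from both
# sources their contributions cancel exactly, which is why keeping the
# microphone's two path lengths nearly equal (difference <= 6 mm in the
# text) minimizes how much speaker output the microphone picks up.
k = 2 * np.pi * 1000 / 343.0                 # wavenumber at 1 kHz

def dipole_pressure(point, src_pos, src_neg):
    r1 = np.linalg.norm(point - src_pos)
    r2 = np.linalg.norm(point - src_neg)
    # pair of monopoles driven in anti-phase
    return np.exp(1j * k * r1) / r1 - np.exp(1j * k * r2) / r2

src_out = np.array([0.0, 0.005, 0.0])        # sound outlet
src_relief = np.array([0.0, -0.005, 0.0])    # pressure relief hole

on_axis = dipole_pressure(np.array([0.0, 0.05, 0.0]), src_out, src_relief)
at_null = dipole_pressure(np.array([0.05, 0.0, 0.0]), src_out, src_relief)

print(abs(at_null), abs(on_axis))            # null is ~0, on-axis is not
```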
  • generating a noise reduction signal based on the sound field estimation of the target spatial position comprises: estimating noise at the target spatial position based on the picked-up ambient noise; and generating the noise reduction signal based on the noise at the target spatial position and the sound field estimation of the target spatial position.
  • the earphone further includes one or more sensors located on the hook portion and/or the body portion, configured to obtain motion information of the earphone, and the processor is further configured to: update the noise at the target spatial position and the sound field estimate of the target spatial position based on the motion information; and generate the noise reduction signal based on the updated noise and the updated sound field estimate of the target spatial position.
  • estimating noise at the target spatial position based on the picked-up ambient noise comprises: determining one or more spatial noise sources associated with the picked-up ambient noise; and estimating the noise at the target spatial position based on the spatial noise sources.
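One way to read the two steps above: localize a noise source from the array signals, then extrapolate its contribution to the unblocked ear-canal position with a propagation model. The free-field point-source model, positions, and test signal below are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

# Hedged sketch: once a spatial noise source has been localized, its
# contribution at the ear-canal position can be extrapolated with a
# free-field point-source model, where amplitude falls off as 1/r and
# the phase is delayed by r/c.
c = 343.0                                  # speed of sound, m/s
fs = 16000
t = np.arange(1024) / fs
source_signal = np.sin(2 * np.pi * 500 * t)

def propagate(sig, src_pos, obs_pos):
    r = np.linalg.norm(np.asarray(src_pos) - np.asarray(obs_pos))
    delay = int(round(r / c * fs))         # propagation delay in samples
    out = np.zeros_like(sig)
    out[delay:] = sig[: len(sig) - delay] / r
    return out

# noise source 1 m away; array microphone at origin, ear 2 cm closer
at_mic = propagate(source_signal, [1.0, 0.0, 0.0], [0.0, 0.0, 0.0])
at_ear = propagate(source_signal, [1.0, 0.0, 0.0], [0.02, 0.0, 0.0])
print(at_ear.max() > at_mic.max())         # ear is closer, so louder
```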
  • using the first microphone array to estimate the sound field at the target spatial position includes: constructing a virtual microphone based on the first microphone array, the virtual microphone comprising a mathematical model or a machine learning model for representing the audio data that a microphone would collect if one were located at the target spatial position; and estimating the sound field at the target spatial position based on the virtual microphone.
  • generating a noise reduction signal based on the sound field estimation of the target spatial position comprises: estimating noise at the target spatial position based on the virtual microphone; and generating the noise reduction signal based on the noise at the target spatial position and the sound field estimation of the target spatial position.
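A hedged sketch of the virtual-microphone idea: during calibration a real microphone at the target position provides training data, and a model is fit to predict that signal from the wearable array. A least-squares linear filter stands in here for whichever mathematical or machine learning model an implementation would actually use; the data are synthetic:

```python
import numpy as np

# "Virtual microphone" sketch: fit a linear model mapping the physical
# array's signals to the signal a microphone at the target position
# (e.g. the ear canal) would record; at run time the model substitutes
# for the removed microphone.
rng = np.random.default_rng(0)
n, n_mics = 2000, 4
array_sigs = rng.standard_normal((n, n_mics))   # calibration frames
true_weights = np.array([0.5, -0.2, 0.1, 0.3])  # synthetic ground truth
ear_sig = array_sigs @ true_weights             # measured at target spot

# least-squares fit of the virtual-microphone weights
w, *_ = np.linalg.lstsq(array_sigs, ear_sig, rcond=None)
virtual = array_sigs @ w                        # virtual-microphone output
print(np.allclose(w, true_weights))
```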
  • the earphone includes a second microphone located on the body portion, the second microphone being configured to pick up the ambient noise and the target signal; and the processor is configured to update the noise reduction signal based on the sound signal picked up by the second microphone.
  • the second microphone includes at least one microphone that is closer to the user's ear canal than any microphone in the first microphone array.
  • the second microphone is disposed in a second target area, the second target area being an area on the retaining portion close to the user's ear canal.
  • the distance between the second microphone and the user's ear canal is less than 10 mm.
  • the distance between the second microphone and the sound outlet along the sagittal axis direction is less than 10 mm.
  • the distance between the second microphone and the sound outlet along the vertical axis direction is 2 mm to 5 mm.
  • updating the noise reduction signal based on the sound signal picked up by the second microphone comprises: estimating a sound field at the user's ear canal based on the sound signal picked up by the second microphone; and updating the noise reduction signal according to the sound field at the user's ear canal.
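The update described above, measuring the residual sound field near the ear canal and adjusting the noise reduction signal accordingly, can be sketched as a toy scalar feedback iteration. The step size and single-value model are illustrative assumptions standing in for real adaptive control:

```python
# Toy feedback-update sketch: the second microphone hears the residual
# (noise + anti-noise), and the noise reduction output is nudged by a
# fraction of that residual each iteration, shrinking the error
# geometrically.
noise = 1.0              # steady noise level at the ear (toy scalar)
anti = 0.0               # current noise-reduction output
mu = 0.5                 # step size, illustrative choice
for _ in range(20):
    residual = noise + anti      # what the second microphone hears
    anti -= mu * residual        # update toward cancellation

print(abs(noise + anti) < 1e-5)  # residual driven to ~zero
```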
  • generating the noise reduction signal based on the sound field estimation of the target spatial position comprises: dividing the picked-up ambient noise into a plurality of frequency bands corresponding to different frequency ranges; and generating, based on at least one of the plurality of frequency bands, the noise reduction signal corresponding to each of the at least one frequency band.
  • generating, based on at least one of the plurality of frequency bands, the noise reduction signal corresponding to each of the at least one frequency band comprises: obtaining sound pressure levels of the plurality of frequency bands; and, based on the sound pressure levels and frequency ranges of the plurality of frequency bands, generating the noise reduction signal corresponding to only some of the frequency bands.
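A sketch of the band-selective generation described above: split the picked-up noise into frequency bands, measure each band's level, and synthesize anti-noise only for bands that are loud and low enough in frequency to control well. The band edges, thresholds, and on-bin test tones are illustrative assumptions:

```python
import numpy as np

# Band-selective noise reduction sketch: a loud 100 Hz tone is
# cancelled, while a quiet 3 kHz tone is left alone because its band
# falls outside the (illustrative) controllable frequency range.
fs = 16000
t = np.arange(fs) / fs
noise = np.sin(2 * np.pi * 100 * t) + 0.01 * np.sin(2 * np.pi * 3000 * t)

spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), 1 / fs)
bands = [(20, 500), (500, 2000), (2000, 8000)]   # Hz, illustrative

anti = np.zeros_like(spectrum)
for lo, hi in bands:
    sel = (freqs >= lo) & (freqs < hi)
    level = np.sqrt(np.mean(np.abs(spectrum[sel]) ** 2))  # band level
    if lo < 1000 and level > 1.0:    # only loud, low-frequency bands
        anti[sel] = -spectrum[sel]   # anti-noise for this band

residual = np.fft.irfft(spectrum + anti, n=len(noise))
print(np.max(np.abs(residual)))      # only the faint 3 kHz tone remains
```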
  • the first microphone array or the second microphone includes a bone conduction microphone configured to pick up the user's speech, and the processor estimating the noise at the target spatial position based on the picked-up ambient noise includes: removing a component associated with the signal picked up by the bone conduction microphone from the picked-up ambient noise to update the ambient noise; and estimating the noise at the target spatial position according to the updated ambient noise.
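The removal step above keeps the user's own voice from being treated as noise to cancel. It can be sketched as subtracting the speech-correlated component from the air-microphone signal, using the bone conduction signal as a reference; the one-shot least-squares projection here is an illustrative stand-in for the adaptive filtering a real system would likely use, and all signals are synthetic:

```python
import numpy as np

# Remove the bone-conduction-correlated (speech) component from the
# picked-up ambient noise via a least-squares projection onto the
# bone-conduction reference signal.
rng = np.random.default_rng(1)
n = 4000
speech = rng.standard_normal(n)          # bone-conduction reference
ambient = rng.standard_normal(n)         # true external noise
picked_up = ambient + 0.8 * speech       # air microphone hears both

# projection coefficient of the air signal onto the speech reference
gain = np.dot(picked_up, speech) / np.dot(speech, speech)
updated_noise = picked_up - gain * speech    # speech component removed

err_before = np.linalg.norm(picked_up - ambient)
err_after = np.linalg.norm(updated_noise - ambient)
print(err_after < err_before)            # closer to the true noise
```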
  • the earphone further includes an adjustment module configured to obtain user input, and the processor is further configured to adjust the noise reduction signal according to the user input.
  • FIG. 1 is a block diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 2 is a schematic diagram of an exemplary ear according to some embodiments of the present application.
  • FIG. 3 is a block diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 4 is a wearing diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 5 is a block diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 6 is a wearing diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 7 is a block diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 8 is a wearing diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 9A is a block diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 9B is a block diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 10 is a structural diagram of the ear-facing side of an exemplary earphone according to some embodiments of the present application.
  • FIG. 11 is a structural diagram of the side of an exemplary earphone facing away from the ear according to some embodiments of the present application.
  • FIG. 12 is a top view of an exemplary earphone according to some embodiments of the present application.
  • FIG. 13 is a schematic cross-sectional view of an exemplary earphone according to some embodiments of the present application.
  • FIG. 14 is an exemplary noise reduction flowchart of an earphone according to some embodiments of the present application.
  • FIG. 15 is an exemplary flowchart of estimating noise at a target spatial position according to some embodiments of the present application.
  • FIG. 16 is an exemplary flowchart of estimating the sound field and noise at a target spatial position according to some embodiments of the present application.
  • FIG. 17 is an exemplary flowchart of updating a noise reduction signal according to some embodiments of the present application.
  • FIG. 18 is an exemplary noise reduction flowchart of an earphone according to some embodiments of the present application.
  • FIG. 19 is an exemplary flowchart of estimating noise at a target spatial position according to some embodiments of the present application.
  • The terms "system", "device", and "module" used in this specification are ways of distinguishing different components, elements, parts, sections, or assemblies at different levels.
  • However, these words may be replaced by other expressions if they serve the same purpose.
  • the earphones may be open earphones.
  • the open earphone can, through the fixing structure, hold the speaker near the user's ear without blocking the user's ear canal.
  • the headset may include a fixing structure, a first microphone array, a processor, and a speaker.
  • the securing structure may be configured to secure the earphone near the user's ear without blocking the user's ear canal.
  • the first microphone array, processor and speaker may be located at the fixed structure to implement the active noise reduction function of the earphone.
  • the fixing structure may include a hook portion and a body portion.
  • When the user wears the earphone, the hook portion may be hung between the first side of the user's ear and the head, and the body portion contacts the second side of the ear.
  • the body portion may include a holding portion that contacts the second side of the ear when the earphone is worn, and a connecting portion that connects the hook portion and the holding portion.
  • the connecting portion extends from the first side of the ear to the second side of the ear; it cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear, and cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear, so that the earphone clamps the user's ear and remains stable when worn.
  • the first microphone array may be located on the body portion of the headset for picking up ambient noise.
  • the processor is located on the hook or body of the earphone and is used to estimate the sound field at the target spatial location.
  • the target spatial location may include a spatial location close to the user's ear canal by a certain distance, eg, the target spatial location may be closer to the user's ear canal than any microphone in the first microphone array.
  • the microphones in the first microphone array may be distributed at different positions near the user's ear canal, and the processor may estimate the sound field at a position close to the user's ear canal (for example, the target spatial location) according to the ambient noise collected by the microphones in the first microphone array.
  • the speaker may be located in the body part (holding part), and output the target signal according to the noise reduction signal.
  • the target signal can be transmitted to the outside of the earphone through the sound outlet on the holding part, so as to reduce the environmental noise heard by the user.
  • the body portion may include a second microphone.
  • the second microphone may be closer to the user's ear canal than the first microphone array, so the sound signal it collects better reflects the sound actually heard by the user.
  • the processor can update the noise reduction signal according to the sound signal collected by the second microphone, so as to achieve a more ideal noise reduction effect.
  • the earphones provided in the embodiments of this specification can be fixed near the user's ear through the fixing structure without blocking the user's ear canal, which leaves the user's ears open and improves the wearing stability and comfort of the earphone.
  • at the same time, the ambient noise at the user's ear canal is reduced, thereby realizing the active noise reduction of the earphone and improving the user's hearing experience while using the earphone.
  • FIG. 1 is a block diagram of an exemplary headset shown in accordance with some embodiments of the present application.
  • the headset 100 may include a fixing structure 110 , a first microphone array 120 , a processor 130 and a speaker 140 .
  • the first microphone array 120 , the processor 130 and the speaker 140 may be located at the fixed structure 110 .
  • the earphone 100 can clamp the user's ear through the fixing structure 110 to fix the earphone 100 near the user's ear without blocking the user's ear canal.
  • the first microphone array 120 located at the fixing structure 110 (eg, the body portion) may pick up ambient noise and convert it into an electrical signal.
  • the processor 130 is coupled (eg, electrically connected) to the first microphone array 120 and the speaker 140 .
  • the processor 130 may receive and process the electrical signal transmitted by the first microphone array 120 to generate a noise reduction signal, and transmit the generated noise reduction signal to the speaker 140 .
  • the speaker 140 may output the target signal according to the noise reduction signal.
  • the target signal can be transmitted to the outside of the earphone 100 through the sound outlet on the fixing structure 110 (eg, the holding part) and used to reduce or cancel the ambient noise at the user's ear canal (eg, the target spatial position), thereby realizing the active noise reduction of the earphone 100 and improving the user's listening experience while using the headset 100.
  • the securing structure 110 may include a hook portion 111 and a body portion 112 .
  • the hook portion 111 may be hung between the first side of the user's ear and the head, and the body portion 112 contacts the second side of the ear.
  • the first side of the ear may be the back side of the user's ear
  • the second side of the user's ear may be the front side of the user's ear.
  • the front side of the user's ear refers to the side of the user's ear that includes the concha cavity, the triangular fossa, the antihelix, the scapha, and the helix (see FIG. 2 for the structure of the ear).
  • the back side of the user's ear refers to the side of the user's ear that is away from the front side, that is, the side opposite to the front side.
  • the body portion 112 may include a connecting portion and a holding portion.
  • the holding portion contacts the second side of the ear, and the connecting portion connects the hook portion and the holding portion.
  • the connecting portion extends from the first side of the ear to the second side of the ear; it cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear, and cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear, so that the earphone 100 can be clamped near the user's ear by the fixing structure 110 , thereby ensuring the stability of the earphone 100 in wearing.
  • the part where the hook portion 111 and/or the body portion 112 (the connecting portion and/or the holding portion) contacts the user's ear may be made of a softer material, a harder material, etc., or a combination thereof.
  • a softer material refers to a material having a hardness (eg, Shore hardness) less than a first hardness threshold (eg, 15A, 20A, 30A, 35A, 40A, etc.).
  • by contrast, a harder material refers to a material with a higher hardness, for example, a Shore hardness of 45A-85A or 30D-60D.
  • Softer materials may include, but are not limited to, Polyurethanes (PU) (eg, Thermoplastic Polyurethanes (TPU)), Polycarbonate (PC), Polyamides (PA), Acrylonitrile Butadiene Styrene (ABS), Polystyrene (PS), High Impact Polystyrene (HIPS), Polypropylene (PP), Polyethylene Terephthalate (PET), Polyvinyl Chloride (PVC), Polyethylene (PE), Phenol Formaldehyde (PF), Urea-Formaldehyde (UF), Melamine-Formaldehyde (MF), etc., or a combination thereof.
  • Harder materials may include, but are not limited to, Polyethersulfone (PES), Polyvinylidene Chloride (PVDC), Polymethyl Methacrylate (PMMA), Polyetheretherketone (PEEK), etc., or a combination thereof, or a mixture thereof formed with reinforcing agents such as glass fiber and carbon fiber.
  • the material of the portion where the hook portion 111 of the fixing structure 110 and/or the body portion 112 is in contact with the user's ear can be selected according to specific conditions.
  • the softer material can improve the user's comfort when wearing the earphone 100, and the harder material can improve the strength of the earphone 100.
  • by combining such materials, the components of the earphone 100 can offer improved wearing comfort while increasing the strength of the earphone 100.
  • the first microphone array 120 may be located on the body portion 112 (eg, the connecting portion or the holding portion) of the fixed structure 110 for picking up ambient noise.
  • ambient noise refers to a combination of multiple external sounds in the environment in which the user is located.
  • the first microphone array 120 may be located near the user's ear canal. Based on the ambient noise obtained in this way, the processor 130 can more accurately calculate the noise actually transmitted to the user's ear canal, which is more conducive to subsequent active noise reduction of the ambient noise heard by the user.
  • the ambient noise may include the sound of the user speaking.
  • the first microphone array 120 may pick up ambient noise according to the working state of the earphone 100 .
  • the working state of the earphone 100 may refer to the usage state used when the user wears the earphone 100 .
  • the working state of the headset 100 may include, but is not limited to, a call state, a non-call state (eg, a music playing state), a voice message sending state, and the like.
  • when the headset 100 is not in a call state, the sound produced by the user's own speech may be regarded as environmental noise, and the first microphone array 120 may pick up the user's own speech and other environmental noises.
  • when the headset 100 is in a call state, the sound produced by the user's own speech may not be regarded as ambient noise, and the first microphone array 120 may pick up ambient noise other than the user's own speaking sound.
  • the first microphone array 120 may pick up noise emitted by a noise source located at a distance (eg, 0.5 meters, 1 meter) away from the first microphone array 120 .
  • the first microphone array 120 may include one or more air conduction microphones.
  • the air conduction microphone can simultaneously acquire the noise of the external environment and the voice of the user while speaking, and use the acquired noise of the external environment and the voice of the user as the ambient noise.
  • the first microphone array 120 may also include one or more bone conduction microphones. The bone conduction microphone can be in direct contact with the user's skin, and the vibration signal generated by the bones or muscles when the user speaks can be directly transmitted to the bone conduction microphone, and then the bone conduction microphone converts the vibration signal into an electrical signal, and transmits the electrical signal to the processor 130 to be processed.
  • the bone conduction microphone may not be in direct contact with the human body, and the vibration signal generated by the bones or muscles when the user speaks can be transmitted to the fixed structure 110 of the earphone 100 first, and then transmitted to the bone conduction microphone by the fixed structure 110 .
  • the processor 130 may use the sound signal collected by the air conduction microphone as environmental noise and use the environmental noise for noise reduction, and the sound signal collected by the bone conduction microphone may be transmitted to the terminal device as a voice signal, so as to ensure the call quality of the user during the call.
  • the processor 130 may control the switch states of the bone conduction microphone and the air conduction microphone based on the working state of the headset 100 .
  • the switch state of the bone conduction microphone and the switch state of the air conduction microphone in the first microphone array 120 may be determined according to the working state of the earphone 100 .
  • for example, when the headset 100 is not in a call state, the switch state of the bone conduction microphone may be the standby state and the switch state of the air conduction microphone may be the working state; when the headset 100 is in a call state, both the bone conduction microphone and the air conduction microphone may be in the working state.
  • the processor 130 may control the switch states of the microphones (eg, bone conduction microphones, air conduction microphones) in the first microphone array 120 by sending a control signal.
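The state-dependent switching described above can be sketched as a simple lookup. The state names and the two-way split between microphone types below are illustrative assumptions for this sketch, not the earphone's exact control scheme:

```python
from enum import Enum

class WorkingState(Enum):
    CALL = "call"
    MUSIC = "music"                  # a non-call state
    VOICE_MESSAGE = "voice_message"

def microphone_switch_states(state: WorkingState) -> dict:
    """Return the switch state ("working" or "standby") for each microphone type.

    In a non-call state only the air conduction microphones are needed to pick
    up ambient noise; in a call (or voice-message) state the bone conduction
    microphone is additionally enabled to capture the user's speech.
    """
    if state == WorkingState.MUSIC:
        return {"bone_conduction": "standby", "air_conduction": "working"}
    # call or voice-message state: both microphone types are active
    return {"bone_conduction": "working", "air_conduction": "working"}
```

In a real device the processor would send these states as control signals to the individual microphones.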
  • the first microphone array 120 may include a dynamic microphone, a ribbon microphone, a condenser microphone, an electret microphone, an electromagnetic microphone, a carbon particle microphone, etc., or any combination thereof.
  • the arrangement of the first microphone array 120 may include a linear array (eg, a straight line, a curve), a planar array (eg, a cross, a circle, a ring, a polygon, a mesh, etc., regular and/or irregular shapes), stereoscopic arrays (eg, cylindrical, spherical, hemispherical, polyhedral, etc.), etc., or any combination thereof.
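For illustration, the coordinates of two of the arrangements named above can be generated with small hypothetical helpers (the functions, spacings, and radii are illustrative choices, not part of the patent):

```python
import numpy as np

def linear_array(n, spacing):
    """Microphone coordinates (meters) for a uniform linear array on the x-axis."""
    return [(i * spacing, 0.0, 0.0) for i in range(n)]

def circular_array(n, radius):
    """Microphone coordinates evenly spaced on a circle in the xy-plane."""
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return [(radius * np.cos(a), radius * np.sin(a), 0.0) for a in angles]
```

Such coordinate lists are what a sound-field estimator consumes, together with the sampled signals from each microphone.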
  • the processor 130 may be located on the hook portion 111 or the body portion 112 of the fixed structure 110 , and the processor 130 may use the first microphone array 120 to estimate the sound field of the target spatial position.
  • the sound field of a target spatial location may refer to the distribution and variation of sound waves at or near the target spatial location (eg, as a function of time, as a function of location).
  • the physical quantities describing the sound field may include sound pressure level, sound frequency, sound amplitude, sound phase, sound source vibration velocity, or medium (eg air) density, and the like. In general, these physical quantities can be functions of position and time.
  • the target spatial location may refer to a spatial location close to the user's ear canal by a specific distance.
  • the specific distance here may be a fixed distance, for example, 2mm, 5mm, 10mm, and the like.
  • the target spatial location may be closer to the user's ear canal than any microphone in the first microphone array 120 .
  • the target spatial position may be related to the number of each microphone in the first microphone array 120 and the distribution position relative to the user's ear canal.
  • the target spatial position can be adjusted by adjusting the number and/or the distribution position of each microphone in the first microphone array 120 relative to the user's ear canal. For example, by increasing the number of microphones in the first microphone array 120, the target spatial position can be made closer to the user's ear canal.
  • the target spatial position can also be made closer to the ear canal of the user by reducing the distance between the microphones in the first microphone array 120 .
  • the arrangement of the microphones in the first microphone array 120 can also be changed to make the target spatial position closer to the user's ear canal.
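The idea of estimating noise at a point the array cannot physically occupy can be illustrated with a far-field delay-and-sum sketch. This is a deliberately simplified, hypothetical model (a plane wave from a known direction, `np.roll` for the sample shift, no near-field or reverberation effects), not the estimation method claimed by the patent:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def estimate_at_target(signals, mic_positions, source_dir, target_pos, fs):
    """Estimate the noise waveform at a target point from a microphone array.

    For a plane wave arriving from direction `source_dir` (unit vector), each
    microphone signal is time-shifted by the extra propagation delay between
    that microphone and the target point, then the shifted signals are averaged.
    """
    target_pos = np.asarray(target_pos, float)
    out = np.zeros_like(signals[0], dtype=float)
    for sig, pos in zip(signals, mic_positions):
        # extra distance the plane wave travels from the mic to the target
        extra = np.dot(target_pos - np.asarray(pos, float), source_dir)
        # convert to an integer sample delay (np.roll wraps at the edges,
        # so the first few samples of the output are not meaningful)
        delay = int(round(extra / SPEED_OF_SOUND * fs))
        out += np.roll(sig, delay)
    return out / len(signals)
```

Consistent with the text above, adding microphones or shrinking their spacing reduces the averaging error, moving the usable estimate closer to the true signal at the target point.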
  • the processor 130 may be further configured to generate a noise reduction signal based on the sound field estimate of the target spatial location.
  • the processor 130 may receive the ambient noise acquired by the first microphone array 120 and process it to acquire parameters (eg, amplitude, phase, etc.) of the ambient noise, and determine the sound field estimate of the target spatial position based on these parameters.
  • the processor 130 generates a noise reduction signal based on the sound field estimation of the target spatial location.
  • the parameters of the noise reduction signal (eg, amplitude, phase, etc.) are related to the ambient noise at the target spatial location.
  • the magnitude of the noise reduction signal may be approximately equal to the magnitude of the ambient noise at the target spatial location
  • the phase of the noise reduction signal may be approximately opposite to the phase of the ambient noise at the target spatial location.
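The amplitude/phase relationship described above amounts to inverting the estimated noise: a 180-degree phase shift at every frequency of a sampled waveform is simply sign inversion. The following is a minimal single-tone sketch only; a real controller would also have to compensate the speaker-to-ear transfer function and processing latency:

```python
import numpy as np

def noise_reduction_signal(noise_estimate):
    """Anti-noise signal: same amplitude as the estimated noise, opposite phase."""
    return -np.asarray(noise_estimate, dtype=float)

fs = 16000
t = np.arange(1024) / fs
noise = 0.3 * np.sin(2 * np.pi * 200 * t)  # estimated noise at the target point
anti = noise_reduction_signal(noise)
residual = noise + anti                    # what would remain at the ear
```

In this idealized case the residual is zero; in practice the cancellation is only approximate, which is why the second microphone described below is used to update the noise reduction signal.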
  • the speaker 140 may be located at the holding portion of the fixing structure 110, and when the user wears the earphone 100, the speaker 140 is located near the user's ear.
  • the speaker 140 may output the target signal according to the noise reduction signal.
  • the target signal can be transmitted to the user's ear through the sound outlet hole of the holding part, so as to reduce or eliminate the environmental noise transmitted to the user's ear canal.
  • according to the working principle of the speaker, the speaker 140 may include one or more of an electrodynamic speaker (eg, a moving coil speaker), a magnetic speaker, an ion speaker, an electrostatic speaker (or condenser speaker), a piezoelectric speaker, etc.
  • the speaker 140 may include an air conduction speaker or a bone conduction speaker according to the transmission mode of the sound output by the speaker.
  • the number of speakers 140 may be one or more.
  • the speaker can output the target signal to cancel the ambient noise, and at the same time deliver effective sound information (eg, device media audio, call far-end audio) to the user.
  • the air conduction speaker can be used to output a target signal to cancel ambient noise.
  • the target signal may be a sound wave (ie, the vibration of the air), which may be transmitted through the air to the target spatial location and cancel each other with ambient noise at the target spatial location.
  • the sound wave output by the air conduction speaker also includes effective sound information.
  • the bone conduction speaker can be used to output the target signal to eliminate ambient noise.
  • the target signal may be a vibration signal, which may be transmitted through the bone or tissue to the user's basilar membrane and cancel each other out with ambient noise at the user's basilar membrane.
  • the vibration signal output by the bone conduction speaker also includes effective sound information.
  • a part of the multiple speakers 140 may be used to output the target signal to eliminate ambient noise, and the other part may be used to deliver effective sound information (eg, device media) to the user audio, call remote audio).
  • the air conduction speakers can be used to output sound waves to reduce or eliminate ambient noise, and the bone conduction speakers can be used to deliver effective sound information to the user.
  • bone conduction speakers can directly transmit mechanical vibrations through the user's body (eg, bones, skin tissue, etc.) to the user's auditory nerves, and during this process the interference with the air conduction microphones that pick up ambient noise is relatively small.
  • the speaker 140 and the first microphone array 120 are both located on the body portion 112 of the earphone 100, so the target signal output by the speaker 140 may also be picked up by the first microphone array 120; however, the target signal is not expected to be picked up, that is, the target signal should not be treated as part of the ambient noise.
  • the first microphone array 120 may be disposed in the first target area.
  • the first target area may be an area where the intensity of the sound emitted by the speaker 140 is low, or even the lowest, in space.
  • the first target area may be the acoustic zero position of the radiated sound field of the acoustic dipole formed by the earphone 100 (eg, the sound outlet and the pressure relief hole), or a position whose distance from the acoustic zero position is within a threshold range.
  • the fixing structure 110 of the earphone 100 can be replaced with a housing structure having a shape suitable for human ears (eg, C-shape, semicircle, etc.), so that the earphone 100 can be hung near the user's ear.
  • a component in headset 100 may be split into multiple sub-components, or multiple components may be combined into a single component.
  • FIG. 2 is a schematic diagram of an exemplary ear shown in accordance with some embodiments of the present application.
  • the ear 200 may include an external auditory canal 201 , a concha cavity 202 , a cymba conchae 203 , a triangular fossa 204 , an antihelix 205 , a scapha 206 , a helix 207 , an earlobe 208 and a tragus 209 .
  • one or more parts of the ear 200 may support the wearing and stabilization of an earphone (eg, the earphone 100).
  • the external auditory canal 201 , the concha cavity 202 , the concha 203 , the triangular fossa 204 and other parts have a certain depth and volume in the three-dimensional space, which can be used to meet the wearing requirements of the earphone.
  • when wearing an open earphone (eg, the earphone 100), parts such as the user's earlobe 208 may also be used.
  • the user's external auditory canal 201 can be "liberated", and the impact of the earphone on the user's ear health can be reduced.
  • the earphone will not block the user's external ear canal 201, and the user can receive both the sound from the earphone and the sound from the environment (for example, honking, car bells, surrounding human voices, traffic command sounds) etc.), thereby reducing the probability of traffic accidents.
  • the whole or part of the structure of the earphone may be located on the front side of the tragus 209 (eg, the area J enclosed by the dotted line in FIG. 2 ).
  • the whole or part of the structure of the earphone may be in contact with the area above the external auditory canal 201 (for example, the area where one or more parts such as the cymba conchae 203, the triangular fossa 204, the antihelix 205, the scapha 206, the helix 207, etc. are located).
  • the whole or part of the structure of the earphone may be located in one or more parts of the ear (for example, the concha cavity 202, the cymba conchae 203, the triangular fossa 204, etc.), eg, the area M enclosed by the dashed line in FIG. 2 .
  • ear 200 is for illustrative purposes only and is not intended to limit the scope of the present application.
  • various changes and modifications can be made based on the description of the present application.
  • the structure, shape, size, thickness, etc. of one or more parts of the ear 200 may be different for different users.
  • a part of the structure of the earphone may shield part or all of the external auditory canal 201 .
  • FIG. 3 is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
  • FIG. 4 is a wearing diagram of an exemplary headset shown in accordance with some embodiments of the present application.
  • the earphone 300 may include a fixing structure 310 , a first microphone array 320 , a processor 330 and a speaker 340 .
  • the first microphone array 320 , the processor 330 and the speaker 340 are located at the fixed structure 310 .
  • the fixing structure 310 can be used to hang the earphone 300 near the user's ear without blocking the user's ear canal.
  • the securing structure 310 may include a hook portion 311 and a body portion 312 .
  • the hook portion 311 may comprise any shape suitable for being worn by a user, eg, a C shape, a hook shape, and the like.
  • when the user wears the earphone 300, the hook portion 311 may be hung between the first side of the user's ear and the head.
  • the body part 312 may include a connecting part 3121 and a holding part 3122 , wherein the connecting part 3121 is used for connecting the hook part 311 and the holding part 3122 .
  • the holding part 3122 contacts the second side of the ear, the connecting part 3121 extends from the first side of the ear to the second side of the ear, and the two ends of the connecting part 3121 are connected to the hook portion 311 and the holding portion 3122 , respectively.
  • the connecting part 3121 cooperates with the hook part 311 to provide the holding part 3122 with a pressing force against the second side of the ear, and cooperates with the holding part 3122 to provide the hook part 311 with a pressing force against the first side of the ear.
  • the connecting portion 3121 connects the hook portion 311 and the holding portion 3122 , so that the fixing structure 310 is curved in three-dimensional space; in other words, the hook portion 311 , the connecting portion 3121 , and the holding portion 3122 are not coplanar.
  • with this arrangement, when the earphone 300 is worn, as shown in FIG. 4, the hook portion 311 can be hung between the first side of the user's ear and the head, and the holding portion 3122 contacts the second side of the ear, so that the holding portion 3122 and the hook portion 311 cooperate to clamp the ear.
  • the connecting portion 3121 may extend from the head toward the outside of the head (ie, from the first side of the ear to the second side), and cooperate with the hook portion 311 to provide the holding portion 3122 with a pressing force against the second side of the ear.
  • the fixing structure 310 can clamp the user's ear to realize the wearing of the earphone 300 .
  • the holding portion 3122 can press against the ear under the action of the pressing force, for example, against the area where the concha, the triangular fossa, the antihelix and other parts are located, so that the external auditory canal is not covered when the earphone 300 is worn.
  • the projection of the holding portion 3122 on the user's ear may fall within the range of the helix of the ear; further, the holding portion 3122 may be located on the side of the external auditory canal of the ear close to the top of the user's head , and in contact with the helix and/or the antihelix.
  • the holding portion 3122 can be prevented from covering the external auditory canal, thereby liberating the user's ears.
  • the contact area between the holding portion 3122 and the ear portion can also be increased, thereby improving the wearing comfort of the earphone 300 .
  • the speaker 340 located at the holding part 3122 can be closer to the user's ear canal, improving the user's listening experience when using the headset 300 .
  • in order to improve the stability and comfort of wearing, the earphone 300 can also elastically clamp the ear.
  • the hook portion 311 of the earphone 300 may include an elastic portion (not shown) connected with the connection portion 3121 .
  • the elastic portion may have a certain elastic deformation capability, so that the hook portion 311 can be deformed under the action of an external force, and then displaced relative to the holding portion 3122 to allow the hook portion 311 and the holding portion 3122 to cooperate to elastically clamp the ear portion.
  • to wear the earphone 300, the user can first force the hook portion 311 away from the holding portion 3122 so that the ear can be placed between them, and then release the hook portion 311 to allow the earphone 300 to elastically grip the ear.
  • the user can further adjust the position of the earphone 300 on the ear according to the actual wearing situation.
  • the hook portion 311 may be rotatable relative to the connecting portion 3121 , or the retaining portion 3122 may be rotatable relative to the connecting portion 3121 , or a part of the connecting portion 3121 may be rotatable relative to another portion, so as to The relative positional relationship of the hook portion 311 , the connecting portion 3121 , and the holding portion 3122 in the three-dimensional space can be adjusted, so that the earphone 300 can be adapted to different users, that is, the application range of the earphone 300 to users is increased.
  • when the relative positional relationship of the hook portion 311, the connecting portion 3121, and the holding portion 3122 in the three-dimensional space is set to be adjustable, the positions of the first microphone array 320 and the speaker 340 relative to the user's ear (eg, the external auditory canal) can also be adjusted, thereby improving the active noise reduction effect of the earphone 300 .
  • the connecting part 3121 can be made of a deformable material such as soft steel wire; the user can bend the connecting part 3121 to rotate one part relative to another, so as to adjust the relative positions of the hook portion 311 , the connecting portion 3121 and the holding portion 3122 in three-dimensional space and thus meet the user's wearing needs.
  • the connecting portion 3121 may also be provided with a rotating shaft mechanism 31211, through which the user adjusts the relative positions of the hook portion 311, the connecting portion 3121, and the holding portion 3122 in three-dimensional space to meet their wearing requirements.
  • the earphone 300 can use the first microphone array 320 and the processor 330 to estimate the sound field at the user's ear canal (eg, the target spatial position), and output the target signal through the speaker 340 to reduce the ambient noise at the user's ear canal, so as to achieve active noise reduction of the earphone 300 .
  • the first microphone array 320 may be located on the body portion 312 of the fixed structure 310 , so that when the user wears the headset 300 , the first microphone array 320 may be located near the user's ear canal.
  • the first microphone array 320 can pick up the environmental noise near the user's ear canal, and the processor 330 can further estimate the environmental noise at the target spatial position according to the environmental noise near the user's ear canal, for example, the environmental noise at the user's ear canal.
  • the target signal output by the speaker 340 is also picked up by the first microphone array 320.
  • to reduce this, the first microphone array 320 may be located in an area where the sound emitted by the speaker 340 has a low, or even the lowest, intensity in space, for example, the acoustic zero position of the radiated sound field of the acoustic dipole formed by the earphone 300 (eg, the sound outlet and the pressure relief hole).
  • the processor 330 may be located on the hook portion 311 or the body portion 312 of the fixation structure 310 .
  • the processor 330 is electrically connected to the first microphone array 320 .
  • the processor 330 may estimate the sound field of the target spatial position based on the ambient noise picked up by the first microphone array 320, and generate a noise reduction signal based on the sound field estimation of the target spatial position.
  • for details of how the processor 330 uses the first microphone array 320 to estimate the sound field of the target spatial position, reference may be made to FIGS. 14-16 of this specification and their related descriptions.
  • the processor 330 may also be used to control the sound production of the speaker 340 .
  • the processor 330 can control the sound of the speaker 340 according to the instruction input by the user.
  • the processor 330 may generate instructions to control the speaker 340 based on information of one or more components of the headset 300 .
  • the processor 330 may control other components of the headset 300 (eg, the battery).
  • the processor 330 may be disposed on any part of the fixed structure 310 .
  • the processor 330 may be provided in the holding portion 3122 .
  • the wiring distance between the processor 330 and other components (eg, the speaker 340, the key switch, etc.) disposed on the holding part 3122 can be shortened, so as to reduce the signal interference between the wirings and reduce the possibility of a short circuit between them.
  • the speaker 340 may be located in the holding portion 3122 of the body portion 312, such that when the user wears the headset 300, the speaker 340 may be located in the vicinity of the user's ear canal.
  • the speaker 340 may output a target signal based on the noise reduction signal generated by the processor 330 .
  • the target signal may be transmitted to the outside of the earphone 300 through a sound outlet (not shown) on the holding part 3122 for reducing ambient noise at the user's ear canal.
  • the sound outlet on the holding part 3122 may be located on the side of the holding part 3122 facing the user's ear, so that the sound outlet can be close enough to the user's ear canal and the sound emitted from the sound outlet can be better heard by the user.
  • the headset 300 may also include components such as a battery 350 .
  • the battery 350 may provide power for other components of the headset 300 (eg, the first microphone array 320, the speaker 340, etc.).
  • any two of the first microphone array 320, the processor 330, the speaker 340, and the battery 350 may communicate in a variety of ways, eg, wired connection, wireless connection, etc., or a combination thereof.
  • wired connections may include metallic cables, optical cables, or hybrid metallic and optical cables, among others. The examples described above are only used for convenience of illustration, and the medium of the wired connection may also be other types, for example, other transmission carriers of electrical signals or optical signals.
  • Wireless connections may include radio communications, free space optical communications, acoustic communications, electromagnetic induction, and the like.
  • the battery 350 may be disposed at an end of the hook portion 311 away from the connecting portion 3121 and located between the backside of the user's ear and the head when the earphone 300 is in a wearing state. In this setting mode, the capacity of the battery 350 can be increased, and the battery life of the earphone 300 can be improved. At the same time, the weight of the earphone 300 can also be balanced so as to overcome the self-weight of the holding part 3122 , its internal processor 330 , the speaker 340 and other structures to improve the wearing stability and comfort of the earphone 300 . In some embodiments, the battery 350 may also transmit its own state information to the processor 330 and receive instructions from the processor 330 to perform corresponding operations. The status information of the battery 350 may include on/off status, remaining power, remaining power usage time, charging time, etc., or a combination thereof.
  • One or more coordinate systems are established in this specification in order to facilitate the description of the interrelationship of various parts of the headset (eg, the headset 300 ) and the relationship between the headset and the user.
  • similar to the medical field, three basic planes of the human body can be defined: the sagittal plane (Sagittal Plane), the coronal plane (Coronal Plane), and the transverse plane (Horizontal Plane), as well as three basic axes: the sagittal axis (Sagittal Axis), the coronal axis (Coronal Axis), and the vertical axis (Vertical Axis). Referring to the coordinate axes in Fig. 2-Fig.
  • the sagittal plane refers to a cut plane perpendicular to the ground along the front-back direction of the body, which divides the human body into two parts: left and right.
  • the sagittal plane may refer to the YZ plane, that is, the X-axis is perpendicular to the user's sagittal plane;
  • the coronal plane refers to the cut plane perpendicular to the ground along the left and right directions of the body, which divides the human body into two parts: front and rear.
  • the coronal plane can refer to the XZ plane, that is, the Y-axis is perpendicular to the user's coronal plane; the transverse plane refers to the cut plane parallel to the ground made along the up-down direction of the body, which divides the human body into upper and lower parts.
  • in the embodiments of this specification, the transverse plane may refer to the XY plane, that is, the Z-axis is perpendicular to the user's transverse plane.
  • the sagittal axis refers to an axis that vertically passes through the coronal plane along the anterior-posterior direction of the body.
  • the sagittal axis may refer to the Y-axis; the coronal axis refers to an axis that vertically passes through the sagittal plane along the left-right direction of the body, and in the embodiments of this specification, the coronal axis may refer to the X-axis; the vertical axis refers to the axis that vertically passes through the horizontal plane along the up-down direction of the body, and in the embodiments of this specification, the vertical axis may refer to the Z-axis.
  • FIG. 5 is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
  • FIG. 6 is a wearing diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
  • the hook portion 311 may be close to the holding portion 3122, so that when the earphone 300 is in the wearing state, as shown in FIG. 6, the free end of the hook portion 311 away from the connecting portion 3121 acts on the first side (rear side) of the user's ear 100 .
  • the connecting portion 3121 is connected with the hook-shaped portion 311 , and the connecting portion 3121 and the hook-shaped portion 311 form a first connection point C.
  • the hook portion 311 is bent toward the rear side of the ear portion 100 and forms a first contact point B with the rear side of the ear portion 100 .
  • the holding portion 3122 forms a second contact point F with the second side (front side) of the ear portion 100 .
  • in the natural state, the distance between the first contact point B and the second contact point F of the earphone 300 along the extension direction of the connecting portion 3121 is smaller than that in the wearing state, and is smaller than the thickness of the user's ear portion 100, so that in the wearing state the earphone 300 can clamp the user's ear 100 like a "clip".
  • the hook portion 311 may also extend in a direction away from the connecting portion 3121, that is, the entire length of the hook portion 311 is extended, so that when the earphone 300 is in the wearing state, the hook portion 311 can also form a third contact point A with the rear side of the ear portion 100; the first contact point B is located between the first connection point C and the third contact point A, and is close to the first connection point C, as shown in FIG.
  • the distance between the projections of the first contact point B and the third contact point A on a reference plane (such as the YZ plane) perpendicular to the extension direction of the connecting portion 3121 in the natural state may be smaller than the corresponding distance between the first contact point B and the third contact point A in the wearing state.
  • the free end of the hook portion 311 is pressed against the back side of the user's ear portion 100, so that the third contact point A is located in the area of the ear portion 100 close to the earlobe, and the hook portion 311 can clamp the user's ear portion 100 in the vertical direction (Z-axis direction) to overcome the self-weight of the holding portion 3122 .
  • the contact area between the hook-shaped portion 311 and the user's ear portion 100 can be increased while clamping the user's ear portion 100 in the vertical direction, that is, the frictional force between the hook portion 311 and the user's ear portion 100 is increased, thereby improving the wearing stability of the earphone 300 .
  • a connecting portion 3121 is provided between the hook portion 311 and the holding portion 3122 of the earphone 300, so that when the earphone 300 is in the wearing state, the connecting portion 3121 cooperates with the hook portion 311 to provide the holding portion 3122 with a pressing force against the ear. In this way, the earphone 300 can be firmly attached to the user's ear in the wearing state, improving the wearing stability of the earphone 300 and the reliability of its sound production.
  • FIG. 7 is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
  • FIG. 8 is a wearing diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
  • the earphone 300 shown in FIGS. 7-8 is substantially the same as the earphone 300 shown in FIGS. 5-6 , the difference being that the bending direction of the hook portion 311 is different.
  • in the direction from the first connection point C between the hook portion 311 and the connecting portion 3121 to the free end of the hook portion 311 (the end away from the connecting portion 3121), the hook portion 311 is bent toward the user's head and forms a first contact point B and a third contact point A with the head.
  • the first contact point B is located between the third contact point A and the first connection point C.
  • the hook portion 311 can form a lever structure with the first contact point B as a fulcrum.
  • the free end of the hook portion 311 is pressed against the user's head, and the user's head provides a force directed toward the outside of the head at the third contact point A; through the lever structure, this force is converted into a head-directed force at the first connection point C, which provides the holding portion 3122 with a pressing force against the first side of the ear portion 100 via the connecting portion 3121 .
  • the magnitude of the force directed toward the outside of the head provided by the user's head at the third contact point A is positively correlated with the size of the angle between the free end of the hook portion 311 and the YZ plane when the earphone 300 is in the non-wearing state. Specifically, the larger this angle, the more tightly the free end of the hook portion 311 can press against the user's head when the earphone 300 is in the wearing state, and the larger the force toward the outside of the head that the user's head can provide at the third contact point A.
  • the angle between the free end of the hook portion 311 and the YZ plane when the earphone 300 is in the non-wearing state can be greater than the angle between the free end of the hook portion 311 and the YZ plane when the earphone 300 is in the wearing state.
  • when the free end of the hook-shaped portion 311 is pressed against the user's head, in addition to making the user's head provide a force directed toward the outside of the head at the third contact point A, the hook portion 311 also forms another pressing force against at least the first side of the ear portion 100, which can cooperate with the pressing force formed by the holding portion 3122 against the second side of the ear portion 100 to produce a "front and rear pinch" pressing effect on the user's ear portion 100, improving the wearing stability of the earphone 300 .
  • differences in the physiological structures of different users' heads and ears will have a certain impact on the actual wearing of the earphone 300; accordingly, the contact points between the earphone 300 and the user's head or ear (for example, the positions of the first contact point B, the second contact point F, the third contact point A, etc.) may change.
  • when the speaker 340 is located in the holding part 3122, the actual wearing of the earphone 300 will be affected to a certain extent by the differences in the physiological structures of different users' heads and ears; therefore, when the earphone 300 is worn by different users, the relative position of the speaker 340 to the user's ear will change. In some embodiments, the position of the speaker 340 on the overall structure of the earphone 300 can be adjusted by setting the structure of the holding portion 3122, thereby adjusting the distance of the speaker 340 relative to the user's ear canal.
  • FIG. 9A is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
  • FIG. 9B is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
  • the holding part 3122 can be designed as a multi-segment structure to adjust the relative position of the speaker 340 on the overall structure of the earphone 300 .
  • the holding portion 3122 is a multi-segment structure, so that when the earphone 300 is in the wearing state, the speaker 340 can be as close to the ear canal as possible without covering the external auditory canal, improving the user's listening experience when using the earphone 300 .
  • the retaining portion 3122 may include a first retaining segment 3122-1, a second retaining segment 3122-2, and a third retaining segment 3122-3 connected end to end in sequence.
  • one end of the first holding section 3122-1 away from the second holding section 3122-2 is connected to the connecting portion 3121, and the second holding section 3122-2 is folded back relative to the first holding section 3122-1, so that there is a space between the second holding section 3122-2 and the first holding section 3122-1.
  • a U-shaped structure may be formed between the second holding section 3122-2 and the first holding section 3122-1.
  • the third holding section 3122-3 is connected to an end of the second holding section 3122-2 facing away from the first holding section 3122-1, and the third holding section 3122-3 can be used for arranging structural components such as the speaker 340.
  • by adjusting the distance between the second holding section 3122-2 and the first holding section 3122-1, the folded-back length of the second holding section 3122-2 relative to the first holding section 3122-1 (the length of the second holding section 3122-2 along the Y-axis direction), etc., the position of the third holding section 3122-3 on the overall structure of the earphone 300 can be adjusted, thereby adjusting the position or distance of the speaker 340 in the third holding section 3122-3 relative to the user's ear canal.
  • the distance between the second holding section 3122-2 and the first holding section 3122-1, and the folded-back length of the second holding section 3122-2 relative to the first holding section 3122-1, can be set according to the ear features (eg, shape, size, etc.) of different users, which are not specifically limited here.
  • the retaining portion 3122 may include a first retaining segment 3122-1, a second retaining segment 3122-2, and a third retaining segment 3122-3 connected end to end in sequence.
  • one end of the first holding section 3122-1 facing away from the second holding section 3122-2 is connected to the connecting portion 3121, and the second holding section 3122-2 is bent relative to the first holding section 3122-1, so that there is a gap between the third holding section 3122-3 and the first holding section 3122-1.
  • the third holding section 3122-3 may be used to set structural members such as the speaker 340.
  • by adjusting the bending length of the second holding section 3122-2 relative to the first holding section 3122-1 (the length of the second holding section 3122-2 along the Z-axis direction), etc., the position of the third holding section 3122-3 on the overall structure of the earphone 300 can be adjusted, thereby adjusting the position or distance of the speaker 340 relative to the user's ear canal.
  • the gap between the third holding section 3122-3 and the first holding section 3122-1, and the bending length of the second holding section 3122-2 relative to the first holding section 3122-1, may be set according to the ear features (eg, shape, size, etc.) of different users, which are not specifically limited here.
  • FIG. 10 is a structural diagram of an ear-facing side of an exemplary earphone according to some embodiments of the present application.
  • the side of the holding part 3122 facing the ear may be provided with a sound outlet 301 , and the target signal output by the speaker 340 may be transmitted to the user's ear through the sound outlet 301 .
  • the side of the retaining portion 3122 facing the ear portion may include a first region 3122A and a second region 3122B, and the second region 3122B is farther away from the connecting portion 3121 than the first region 3122A, that is, the second region 3122B may be located at the free end of the holding portion 3122 away from the connecting portion 3121 .
  • the first region 3122A may be provided with the sound outlet 301, and the second region 3122B protrudes toward the ear compared with the first region 3122A, so that the second region 3122B contacts the ear and the sound outlet 301 is spaced from the ear in the wearing state.
  • the free end of the holding part 3122 may be configured as a convex hull structure; on the side of the holding part 3122 close to the user's ear, the convex hull structure protrudes outward (ie, toward the user's ear) relative to that side surface. Since the speaker 340 generates sound (eg, the target signal) transmitted to the ear through the sound outlet 301, the convex hull structure can prevent the ear from blocking the sound outlet 301, which would weaken or even block the sound produced by the speaker 340 .
  • the protrusion height of the convex hull structure may be represented by the maximum protrusion height of the second region 3122B relative to the first region 3122A.
  • the maximum raised height of the second region 3122B relative to the first region 3122A may be greater than or equal to 1 mm.
  • the maximum raised height of the second region 3122B relative to the first region 3122A may be greater than or equal to 0.8 mm.
  • the maximum raised height of the second region 3122B relative to the first region 3122A may be greater than or equal to 0.5 mm.
  • the distance between the sound outlet hole 301 and the user's ear canal is less than 10 mm. In some embodiments, by setting the structure of the holding portion 3122, when the user wears the earphone 300, the distance between the sound outlet hole 301 and the user's ear canal is less than 8 mm. In some embodiments, by setting the structure of the holding portion 3122, when the user wears the earphone 300, the distance between the sound outlet hole 301 and the user's ear canal is less than 7 mm. In some embodiments, by setting the structure of the holding portion 3122, when the user wears the earphone 300, the distance between the sound outlet hole 301 and the user's ear canal is less than 6 mm.
  • the region raised toward the ear compared with the first area 3122A may also be located in other areas of the holding portion 3122, such as the area between the sound outlet 301 and the connecting portion 3121 .
  • the orthographic projection of the sound outlet 301 on the ear along the thickness direction of the retaining portion 3122 may at least partially fall within the concha cavity and/or the cymba concha.
  • the holding part 3122 may be located on the side of the ear canal opening close to the top of the user's head and contact the antihelix, with at least part of the projection falling within the cymba concha.
  • FIG. 11 is a structural diagram of a side of an exemplary earphone facing away from the ear according to some embodiments of the present application.
  • FIG. 12 is a top view of an exemplary headset shown in accordance with some embodiments of the present application.
  • a pressure relief hole 302 may be provided on the side of the holding portion 3122 along the vertical axis (Z axis) and close to the top of the user's head.
  • the opening direction of the pressure relief hole 302 may be toward the top of the user's head, and there may be a specific angle between the opening direction of the pressure relief hole 302 and the vertical axis (Z-axis), so that the pressure relief hole 302 is farther away from the user's ear canal and it is difficult for the user to hear the sound output through the pressure relief hole 302 .
  • the included angle between the opening direction of the pressure relief hole 302 and the vertical axis (Z axis) may be 0° to 10°. In some embodiments, the included angle between the opening direction of the pressure relief hole 302 and the vertical axis (Z axis) may be 0° to 8°. In some embodiments, the included angle between the opening direction of the pressure relief hole 302 and the vertical axis (Z axis) may be 0° to 5°.
  • by setting the structure of the holding portion 3122, the distance between the pressure relief hole 302 and the user's ear canal can be kept within an appropriate range when the user wears the earphone 300. In some embodiments, when the user wears the earphone 300, the distance between the pressure relief hole 302 and the user's ear canal may be 5 mm to 20 mm. In some embodiments, this distance may be 5 mm to 18 mm. In some embodiments, this distance may be 5 mm to 15 mm. In some embodiments, this distance may be 6 mm to 14 mm. In some embodiments, this distance may be 8 mm to 10 mm.
  • FIG. 13 is a schematic cross-sectional structural diagram of an exemplary earphone according to some embodiments of the present application.
  • FIG. 13 shows the acoustic structure formed by the holding part (for example, holding part 3122 ) of the earphone (for example, earphone 300 ), including: sound outlet 301 , pressure relief hole 302 , sound adjustment hole 303 , front cavity 304 and rear cavity 305.
  • the holding portion 3122 may respectively form a front cavity 304 and a rear cavity 305 on opposite sides of the speaker 340 .
  • the front cavity 304 communicates with the outside of the earphone 300 through the sound outlet 301, and outputs sound (eg, target signal, audio signal, etc.) to the ear.
  • the rear cavity 305 communicates with the outside of the earphone 300 through a pressure relief hole 302 , and the pressure relief hole 302 is farther away from the user's ear canal than the sound outlet hole 301 .
  • the pressure relief hole 302 can allow air to freely enter and exit the rear cavity 305, so that changes in the air pressure in the front cavity 304 are hindered by the rear cavity 305 as little as possible, thereby improving the quality of the sound output to the ear through the sound outlet 301 .
  • the included angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction (X-axis direction) of the holding portion 3122 may be 0° to 50°. In some embodiments, the included angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction of the holding portion 3122 may be 5° to 45°. In some embodiments, the included angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction of the holding portion 3122 may be 10° to 40°. In some embodiments, the included angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction of the holding portion 3122 may be 15° to 35°.
  • the angle between the line connecting the pressure relief hole 302 and the sound outlet 301 and the thickness direction of the holding portion 3122 may refer to the angle between the line connecting the center of the pressure relief hole 302 and the center of the sound outlet 301 and the thickness direction of the holding portion 3122 .
  • the sound outlet hole 301 and the pressure relief hole 302 can be regarded as two sound sources that radiate sound outward, and the radiated sounds have the same amplitude and opposite phases.
  • the two sound sources can approximately form an acoustic dipole or a similar acoustic dipole, so the sound radiated outward has obvious directivity, forming a figure-"8"-shaped sound radiation area.
  • in the direction of the line connecting the two sound sources (ie, the pressure relief hole 302 and the sound outlet 301), the radiated sound is the largest; the radiated sound in other directions is obviously reduced, and the radiated sound is the smallest at the perpendicular bisector of the line connecting the two sound sources.
  • the acoustic dipole formed by the pressure relief hole 302 and the sound outlet hole 301 can reduce the sound leakage of the speaker 340 .
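The figure-8 directivity and the acoustic zero described above can be checked numerically with an idealized model of two equal-amplitude, opposite-phase point sources. This sketch is illustrative only: the free-field monopole model, the function name, and the 10 mm source spacing are our assumptions, not dimensions from this specification.

```python
import numpy as np

def dipole_pressure(obs_point, src_a, src_b, k=2 * np.pi * 1000 / 343):
    """Complex sound pressure at obs_point from two point sources of
    equal amplitude and opposite phase (an idealized acoustic dipole),
    using the free-field monopole model p(r) = exp(-1j*k*r) / r."""
    ra = np.linalg.norm(np.asarray(obs_point, float) - np.asarray(src_a, float))
    rb = np.linalg.norm(np.asarray(obs_point, float) - np.asarray(src_b, float))
    return np.exp(-1j * k * ra) / ra - np.exp(-1j * k * rb) / rb

# Two sources 10 mm apart on the x-axis, standing in for the sound
# outlet and the pressure relief hole.
a, b = (-0.005, 0.0), (0.005, 0.0)
on_axis = abs(dipole_pressure((0.05, 0.0), a, b))    # along the source line
bisector = abs(dipole_pressure((0.0, 0.05), a, b))   # on the perpendicular bisector
```

On the perpendicular bisector the two contributions cancel exactly (the acoustic zero), while the on-axis pressure stays finite, which is why the text places the first microphone array near that bisector.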
  • the holding portion 3122 may further be provided with a sound adjustment hole 303 communicating with the rear cavity 305. The sound adjustment hole 303 may be used to destroy the high-pressure area of the sound field in the rear cavity 305, so that the wavelength of the standing wave in the rear cavity 305 is shortened and the resonance frequency of the sound output to the outside of the earphone 300 through the pressure relief hole 302 is as high as possible, eg, greater than 4 kHz, thereby reducing the sound leakage of the speaker 340 .
  • the sound adjustment hole 303 and the pressure relief hole 302 may be located on opposite sides of the speaker 340 , for example, arranged opposite to each other in the Z-axis direction, so as to destroy the high pressure region of the sound field in the rear cavity 305 to the greatest extent.
  • the sound adjustment hole 303 may be farther away from the sound outlet hole 301 than the pressure relief hole 302 , so as to increase the distance between the sound adjustment hole 303 and the sound outlet hole 301 as much as possible, thereby reducing the adjusted sound
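The claim that shortening the rear-cavity standing wave raises the resonance frequency above 4 kHz can be illustrated with a back-of-envelope quarter-wave model (f = c / 4L for an idealized closed-open air cavity). This is our own simplification, and the cavity lengths below are example values, not dimensions from this specification.

```python
SPEED_OF_SOUND = 343.0  # m/s, air at room temperature

def quarter_wave_resonance_hz(cavity_length_m):
    """Fundamental resonance of an idealized closed-open air cavity:
    f = c / (4 * L). Shortening the effective cavity raises it."""
    return SPEED_OF_SOUND / (4.0 * cavity_length_m)

f_long = quarter_wave_resonance_hz(0.030)   # 30 mm cavity -> ~2.9 kHz
f_short = quarter_wave_resonance_hz(0.020)  # 20 mm cavity -> ~4.3 kHz
```

Shortening the effective standing-wave length from 30 mm to 20 mm moves the fundamental resonance from below to above the 4 kHz target mentioned in the text.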
  • the target signal output by the speaker 340 through the sound outlet 301 and/or the pressure relief hole 302 will also be picked up by the first microphone array 320, and the target signal will affect the processor 330's estimation of the sound field at the target spatial position, so it is not desirable for the target signal output by the speaker 340 to be picked up. In this case, in order to reduce the influence of the target signal output by the speaker 340 on the first microphone array 320, the first microphone array 320 may be set in a first target area where the sound output by the speaker 340 is as small as possible.
  • the first target area may be a position of or near the acoustic zero point of the radiated sound field of the acoustic dipole formed by the pressure relief hole 302 and the sound outlet hole 301 .
  • the first target area may be the area G shown in FIG. 10 .
  • the area G is located in front of the sound outlet 301 and/or the pressure relief hole 302 (the front here refers to the direction the user faces), that is, the area G is closer to the user's eyes.
  • the region G may be a partial region on the connecting portion 3121 of the fixing structure 310 . That is, the first microphone array 320 may be located at the connection part 3121 .
  • the first microphone array 320 may be located at a position where the connecting part 3121 is close to the holding part 3122 .
  • the area G may also be located behind the sound outlet 301 and/or the pressure relief hole 302 ("behind" here refers to the direction opposite to the direction the user faces).
  • the region G may be located on the end of the holding portion 3122 away from the connecting portion 3121 .
  • the relative position between the first microphone array 320 and the sound outlet 301 and the pressure relief hole 302 may refer to the position of any microphone in the first microphone array 320 .
  • the line connecting the first microphone array 320 and the sound outlet 301 forms a first included angle with the line connecting the sound outlet 301 and the pressure relief hole 302, and the line connecting the first microphone array 320 and the pressure relief hole 302 forms a second included angle with the line connecting the sound outlet 301 and the pressure relief hole 302 .
  • the difference between the first included angle and the second included angle may not be greater than 30°.
  • the difference between the first included angle and the second included angle may be no greater than 25°.
  • the difference between the first included angle and the second included angle may be no greater than 20°.
  • the difference between the first included angle and the second included angle may not be greater than 15°.
  • the difference between the first included angle and the second included angle may not be greater than 10°.
  • the difference between the first distance and the second distance may not be greater than 6 mm. In some embodiments, the difference between the first distance and the second distance may be no greater than 5 millimeters. In some embodiments, the difference between the first distance and the second distance may be no greater than 4 millimeters. In some embodiments, the difference between the first distance and the second distance may be no greater than 3 millimeters.
  • the positional relationship between the first microphone array 320 and the sound outlet hole 301 and the pressure relief hole 302 described herein may refer to the center of any microphone in the first microphone array 320 and the sound outlet hole 301 and the positional relationship between the center of the pressure relief hole 302 .
  • the first included angle, formed between the connection line from the first microphone array 320 to the sound outlet hole 301 and the connection line from the sound outlet hole 301 to the pressure relief hole 302, may refer to the angle between the line connecting the center of any microphone in the first microphone array 320 with the center of the sound outlet hole 301 and the line connecting the center of the sound outlet hole 301 with the center of the pressure relief hole 302.
  • the first distance between the first microphone array 320 and the sound outlet hole 301 may mean that any microphone in the first microphone array 320 is at the first distance from the center of the sound outlet hole 301.
  • the first microphone array 320 is located at the acoustic zero position of the acoustic dipole formed by the sound outlet hole 301 and the pressure relief hole 302, so that the first microphone array 320 is minimally affected by the target signal output by the speaker 340 and can more accurately pick up the ambient noise near the user's ear canal. Further, the processor 330 may more accurately estimate the ambient noise at the user's ear canal based on the ambient noise picked up by the first microphone array 320 and generate a noise reduction signal, thereby better implementing the active noise reduction of the earphone 300. For a specific description of implementing the active noise reduction of the earphone 300 by using the first microphone array 320, reference may be made to FIG. 14 to FIG. 16 and their related descriptions.
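The acoustic-zero property described above can be illustrated with a minimal numerical sketch: two opposite-phase point sources (standing in for the sound outlet hole 301 and the pressure relief hole 302) produce near-zero pressure at any point equidistant from both, which is where a microphone is least affected by the speaker's own output. The positions, tone frequency, and free-field monopole model below are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

c, f = 343.0, 1000.0          # speed of sound (m/s) and tone frequency (Hz)
k = 2 * np.pi * f / c         # wavenumber

def dipole_pressure(point, src_pos, src_neg):
    """Complex pressure at `point` from two opposite-phase monopoles
    (an acoustic dipole), standing in for the sound outlet hole and the
    pressure relief hole."""
    def monopole(src, sign):
        r = np.linalg.norm(np.asarray(point) - np.asarray(src))
        return sign * np.exp(-1j * k * r) / r
    return monopole(src_pos, +1.0) + monopole(src_neg, -1.0)

outlet = (0.0, 0.01)          # assumed positions, 2 cm apart
relief = (0.0, -0.01)
on_axis = abs(dipole_pressure((0.05, 0.03), outlet, relief))
on_zero = abs(dipole_pressure((0.05, 0.00), outlet, relief))  # equidistant
# on_zero is numerically zero: a microphone on the perpendicular bisector
# of the two holes barely picks up the speaker's own output.
```

A microphone placed near this zero therefore picks up mostly ambient noise, as the text describes for the first microphone array 320.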
  • FIG. 14 is an exemplary noise reduction flowchart of an earphone according to some embodiments of the present application.
  • process 1400 may be performed by headset 300 .
  • process 1400 may include:
  • In step 1410, ambient noise is picked up. In some embodiments, this step may be performed by the first microphone array 320.
  • ambient noise may refer to a combination of various external sounds (eg, traffic noise, industrial noise, building construction noise, social noise) in the user's environment.
  • the first microphone array 320 may be located on the body portion 312 of the earphone 300 near the user's ear canal for picking up ambient noise near the user's ear canal. Further, the first microphone array 320 can convert the picked-up environmental noise signal into an electrical signal and transmit it to the processor 330 for processing.
  • In step 1420, the noise at the target spatial location is estimated based on the picked-up ambient noise. In some embodiments, this step may be performed by processor 330.
  • the processor 330 may perform signal separation on the picked-up ambient noise.
  • the ambient noise picked up by the first microphone array 320 may include various sounds.
  • the processor 330 may perform signal analysis on the ambient noise picked up by the first microphone array 320 to separate various sounds.
  • the processor 330 can adaptively adjust the parameters of the filter according to the statistical distribution characteristics and structural characteristics of various sounds in different dimensions such as space, time domain, and frequency domain, and estimate the parameter information of each sound signal in the environmental noise, And complete the signal separation process according to the parameter information of each sound signal.
  • the statistical distribution characteristics of noise may include probability distribution density, power spectral density, autocorrelation function, probability density function, variance, mathematical expectation, and the like.
  • the structured features of noise may include noise distribution, noise intensity, global noise intensity, noise rate, etc., or any combination thereof.
  • the global noise intensity may refer to an average noise intensity or a weighted average noise intensity.
  • the noise rate may refer to the degree of dispersion of the noise distribution.
  • the ambient noise picked up by the first microphone array 320 may include a first signal, a second signal, and a third signal.
  • the processor 330 obtains the differences between the first signal, the second signal, and the third signal in space (eg, the locations of the signals), the time domain (eg, delay), and the frequency domain (eg, amplitude, phase), separates the first signal, the second signal, and the third signal according to their differences in these three dimensions, and obtains the relatively pure first signal, second signal, and third signal.
  • the processor 330 may update the environmental noise according to the parameter information (eg, frequency information, phase information, amplitude information) of the separated signal.
  • the processor 330 may determine that the first signal is the user's call sound according to the parameter information of the first signal, and remove the first signal from the ambient noise to update the ambient noise.
  • the removed first signal may be transmitted to the far end of the call.
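As a minimal illustration of separating signals by their frequency-domain differences (one of the dimensions mentioned above), the sketch below mixes two synthetic components and separates them with a simple spectral mask. The sample rate, component frequencies, and 800 Hz split point are illustrative assumptions; a practical adaptive filter would estimate such structure from the statistical features of the noise rather than use a fixed mask.

```python
import numpy as np

fs = 8000                               # assumed sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
s1 = np.sin(2 * np.pi * 200 * t)        # low-frequency component
s2 = np.sin(2 * np.pi * 1500 * t)       # high-frequency component
mixture = s1 + s2                       # what one microphone picks up

# Separate by exploiting the frequency-domain difference between the two
# components: keep the spectrum below / above an 800 Hz split point.
spec = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(len(mixture), 1 / fs)
low = np.fft.irfft(np.where(freqs < 800, spec, 0), n=len(mixture))
high = np.fft.irfft(np.where(freqs >= 800, spec, 0), n=len(mixture))
```

After separation, each recovered component closely matches its source, so one of them (e.g. a call signal) could be removed from the ambient noise, as the text describes.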
  • the target spatial location is a location determined based on the first microphone array 320 at or near the user's ear canal.
  • the target spatial location may refer to a spatial location within a certain distance (eg, 2 mm, 3 mm, 5 mm, etc.) of the user's ear canal.
  • the target spatial location is closer to the user's ear canal than any microphone in the first microphone array 320 .
  • the target spatial position is related to the number of microphones in the first microphone array 320 and their distribution positions relative to the user's ear canal. The target spatial position can be adjusted by adjusting the number of microphones in the first microphone array 320 and/or their distribution positions relative to the user's ear canal.
  • estimating the noise at the target spatial location based on the picked-up environmental noise may further include determining one or more spatial noise sources related to the picked-up environmental noise and estimating the noise at the target spatial location based on the spatial noise sources.
  • the ambient noise picked up by the first microphone array 320 may come from different azimuths and different types of spatial noise sources.
  • the parameter information (eg, frequency information, phase information, and amplitude information) corresponding to each spatial noise source is different.
  • the processor 330 may perform signal separation and extraction on the noise at the target spatial location according to the statistical distribution and structural features of different types of noise in different dimensions (eg, spatial domain, time domain, frequency domain, etc.), so as to obtain noise of different types (eg, different frequencies, different phases, etc.), and estimate the parameter information (eg, amplitude information, phase information, etc.) corresponding to each noise.
  • the processor 330 may further determine the overall parameter information of the noise at the target spatial position according to the parameter information corresponding to different types of noise at the target spatial position. More information on estimating noise at a target spatial location based on one or more spatial noise sources can be found elsewhere in this specification, eg, FIG. 15 and its corresponding description.
  • estimating noise at the target spatial location based on the picked-up ambient noise may further include constructing a virtual microphone based on the first microphone array 320 and estimating noise at the target spatial location based on the virtual microphone.
  • for estimating noise at a target spatial location based on a virtual microphone, reference may be made to other places in this specification, such as FIG. 16 and its corresponding description.
  • In step 1430, a noise reduction signal is generated based on the noise at the target spatial location. In some embodiments, this step may be performed by processor 330.
  • the processor 330 may generate a noise reduction signal based on the parameter information (eg, amplitude information, phase information, etc.) of the noise at the target spatial location obtained in step 1420 .
  • the phase difference between the phase of the noise reduction signal and the phase of the noise at the target spatial location may be less than or equal to a preset phase threshold.
  • the preset phase threshold may be in the range of 90-180 degrees.
  • the preset phase threshold can be adjusted within this range according to user needs. For example, when the user does not want to be disturbed by the sound of the surrounding environment, the preset phase threshold may be a larger value, such as 180 degrees, that is, the phase of the noise reduction signal is opposite to the phase of the noise at the target spatial location.
  • when the user wishes to hear the sound of the surrounding environment, the preset phase threshold may be a smaller value, such as 90 degrees. It should be noted that the more ambient sound the user wishes to receive, the closer the preset phase threshold may be to 90 degrees, and the less ambient sound the user wishes to receive, the closer the preset phase threshold may be to 180 degrees.
  • when the phase of the noise reduction signal has a certain relationship (eg, is opposite) to the phase of the noise at the target spatial position, the difference between the amplitude of the noise at the target spatial position and the amplitude of the noise reduction signal may be less than or equal to a preset amplitude threshold.
  • the preset amplitude threshold may be a small value, such as 0 dB, that is, the amplitude of the noise reduction signal is equal to the amplitude of the noise at the target spatial position.
  • the preset amplitude threshold may be a relatively large value, for example, approximately equal to the amplitude of the noise at the target spatial position.
  • the more ambient sound the user wishes to receive, the closer the preset amplitude threshold can be to the amplitude of the noise at the target spatial position, and the less the user wishes to receive the sound of the surrounding environment, the closer the preset amplitude threshold can be to 0 dB.
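The effect of the preset phase threshold can be illustrated with a simple single-tone sketch (the tone frequency and amplitudes are illustrative assumptions): an exact 180-degree offset cancels the noise almost completely, while a smaller offset leaves part of the ambient sound audible.

```python
import numpy as np

# Single-tone sketch of how the phase of the noise reduction signal
# shapes the residual noise at the target spatial location.
t = np.linspace(0, 1, 1000, endpoint=False)
f = 100.0
noise = np.sin(2 * np.pi * f * t)   # noise at the target spatial location

def noise_reduction_signal(amplitude, phase_offset_deg):
    """Anti-noise whose phase differs from the noise by phase_offset_deg."""
    return amplitude * np.sin(2 * np.pi * f * t + np.radians(phase_offset_deg))

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

# 180 degrees (opposite phase): the residual is essentially zero.
rms_180 = rms(noise + noise_reduction_signal(1.0, 180))
# 120 degrees: a substantial part of the ambient sound remains audible.
rms_120 = rms(noise + noise_reduction_signal(1.0, 120))
rms_noise = rms(noise)
```

For this tone, a 120-degree offset leaves a residual with the same RMS level as the original noise, which is why thresholds closer to 90 degrees let more ambient sound through.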
  • the speaker 340 may output the target signal based on the noise reduction signal generated by the processor 330 .
  • the speaker 340 can convert the noise reduction signal (eg, an electrical signal) into a target signal (ie, a vibration signal) based on its vibration component, and the target signal is transmitted through the sound outlet hole 301 on the earphone 300 to the user's ear canal, where the target signal and the ambient noise cancel each other out.
  • the speaker 340 may output target signals corresponding to the plurality of spatial noise sources based on the noise reduction signal.
  • the speaker 340 may output a first target signal having a phase approximately opposite to and an amplitude approximately equal to the noise of the first spatial noise source to cancel the noise of the first spatial noise source, and a second target signal having a phase approximately opposite to and an amplitude approximately equal to the noise of the second spatial noise source to cancel the noise of the second spatial noise source.
  • when the loudspeaker 340 is an air conduction loudspeaker, the position where the target signal and the ambient noise are canceled may be the target spatial position.
  • since the distance between the target spatial position and the user's ear canal is small, the noise at the target spatial position can be approximately regarded as the noise at the user's ear canal position. Therefore, when the noise reduction signal and the noise at the target spatial position cancel each other out, the ambient noise transmitted to the user's ear canal is approximately eliminated, realizing the active noise reduction of the earphone 300.
  • when the loudspeaker 340 is a bone conduction loudspeaker, the position where the target signal and the ambient noise are canceled may be the basilar membrane.
  • the target signal and ambient noise are canceled at the basilar membrane of the user, thereby realizing active noise reduction of the earphone 300 .
  • the earphone 300 may further include one or more sensors, which may be located anywhere on the earphone 300 , for example, the hook portion 311 and/or the connecting portion 3121 and/or the holding portion 3122 .
  • One or more sensors may be electrically connected to other components of headset 300 (eg, processor 330).
  • one or more sensors may be used to obtain physical location and/or motion information of the headset 300 .
  • the one or more sensors may include an Inertial Measurement Unit (IMU), a Global Positioning System (GPS), a radar, and the like.
  • the motion information may include motion trajectory, motion direction, motion speed, motion acceleration, motion angular velocity, motion-related time information (eg, motion start time, end time), etc., or any combination thereof.
  • the IMU may include a Micro Electro Mechanical System (MEMS).
  • the microelectromechanical system may include multi-axis accelerometers, gyroscopes, magnetometers, etc., or any combination thereof.
  • the IMU may be used to detect the physical location and/or motion information of the headset 300 to enable control of the headset 300 based on the physical location and/or motion information.
  • the processor 330 may update the noise at the target spatial location and the sound field estimate at the target spatial location based on the motion information (eg, motion trajectory, motion direction, motion speed, motion acceleration, motion angular velocity, motion-related time information) of the headset 300 acquired by the one or more sensors. Further, the processor 330 may generate a noise reduction signal based on the updated noise at the target spatial location and the updated sound field estimate at the target spatial location.
  • One or more sensors can record the motion information of the earphone 300, and the processor 330 can then quickly update the noise reduction signal. This improves the noise tracking performance of the earphone 300, so that the noise reduction signal eliminates environmental noise more accurately, further improving the noise reduction effect and the user's listening experience.
  • FIG. 15 is an exemplary flowchart for estimating noise at a spatial location of a target, according to some embodiments of the present application. As shown in Figure 15, process 1500 may include:
  • In step 1510, one or more spatial noise sources related to the ambient noise picked up by the first microphone array 320 are determined. In some embodiments, this step may be performed by processor 330.
  • determining a spatial noise source refers to determining information related to the spatial noise source, such as the location of the spatial noise source (including the orientation of the spatial noise source and the distance between the spatial noise source and the target spatial location), the phase of the spatial noise source, the amplitude of the spatial noise source, and the like.
  • a spatial noise source related to ambient noise refers to a noise source whose sound waves can be delivered to the user's ear canal (eg, a target spatial location) or near the user's ear canal.
  • the spatial noise sources may be noise sources in different directions (eg, front, rear, etc.) of the user's body. For example, there is crowd noise in front of the user's body and vehicle whistle noise to the left of the user's body.
  • the spatial noise sources include crowd noise sources in front of the user's body and vehicle whistle noise sources to the left of the user's body.
  • the first microphone array 320 can pick up spatial noises in all directions of the user's body, convert the spatial noises into electrical signals, and transmit them to the processor 330.
  • the processor 330 can analyze the electrical signals corresponding to the spatial noises to obtain parameter information (eg, frequency information, amplitude information, phase information, etc.) of the picked-up spatial noise in various directions.
  • the processor 330 determines the information of the spatial noise sources in various directions according to the parameter information of the spatial noise in various directions, for example, the orientation of the spatial noise source, the distance of the spatial noise source, the phase of the spatial noise source, and the amplitude of the spatial noise source.
  • the processor 330 may determine the source of the spatial noise through a noise localization algorithm based on the spatial noise picked up by the first microphone array 320 .
  • the noise localization algorithm may include one or more of a beamforming algorithm, a super-resolution spatial spectrum estimation algorithm, a time difference of arrival algorithm (also referred to as a delay estimation algorithm), and the like.
  • the processor 330 may divide the picked-up environmental noise into multiple frequency bands according to a specific frequency bandwidth (for example, every 500 Hz as a frequency band), each frequency band may correspond to a different frequency range, and at least one The spatial noise source corresponding to the frequency band is determined on the frequency band.
  • the processor 330 may perform signal analysis on the frequency bands divided by the environmental noise, obtain parameter information of the environmental noise corresponding to each frequency band, and determine the spatial noise source corresponding to each frequency band according to the parameter information.
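The time-difference-of-arrival approach named above can be sketched for a single two-microphone pair as follows. The sample rate, microphone spacing, and integer-sample delay are illustrative assumptions; a practical implementation would typically use generalized cross-correlation (e.g. GCC-PHAT) with sub-sample interpolation across the whole array.

```python
import numpy as np

fs = 48000        # assumed sample rate (Hz)
c = 343.0         # speed of sound (m/s)
d = 0.02          # assumed spacing between the two microphones (m)

rng = np.random.default_rng(0)
src = rng.standard_normal(4000)         # short broadband burst from the source

true_delay = 2                          # samples of extra travel to mic2
mic1 = src
mic2 = np.concatenate([np.zeros(true_delay), src[:-true_delay]])

# Time difference of arrival: lag of the cross-correlation peak.
corr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(mic1) - 1)     # estimated delay (samples)
tau = lag / fs                                    # delay (seconds)
# Far-field direction-of-arrival estimate: sin(theta) = c * tau / d.
theta = float(np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0))))
```

The recovered lag gives the delay between the two microphones, from which the bearing of the spatial noise source follows under the far-field assumption.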
  • In step 1520, the noise at the target spatial location is estimated based on the spatial noise sources. In some embodiments, this step may be performed by processor 330. As described herein, estimating the noise at the target spatial position refers to estimating parameter information of the noise at the target spatial position, such as frequency information, amplitude information, phase information, and the like.
  • the processor 330 may, based on the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the spatial noise sources located in various directions of the user's body obtained in step 1510, estimate the parameter information of the noise transmitted by each spatial noise source to the target spatial position, so as to estimate the noise at the target spatial position.
  • for example, the processor 330 may estimate, according to the position information, frequency information, phase information, or amplitude information of the second azimuth spatial noise source, the frequency information, phase information, or amplitude information of the noise of the second azimuth spatial noise source when it is transmitted to the target spatial position. Further, the processor 330 may estimate the noise at the target spatial position based on the frequency information, phase information, or amplitude information of the first azimuth spatial noise source and the second azimuth spatial noise source.
  • the processor 330 may estimate noise information for the target spatial location using virtual microphone techniques or other methods.
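One simple way to estimate the noise contributed at the target spatial position by each localized source, as described above, is a free-field propagation model with 1/r spherical spreading and a propagation phase delay. The source positions, amplitudes, and frequency below are illustrative assumptions rather than values from this disclosure.

```python
import numpy as np

c = 343.0  # speed of sound (m/s)

def noise_at_target(sources, target, f):
    """Sum each spatial noise source's contribution at the target position,
    applying 1/r spherical spreading and the propagation phase delay.
    `sources` is a list of ((x, y) position, complex amplitude at 1 m)."""
    k = 2 * np.pi * f / c
    total = 0.0 + 0.0j
    for pos, amp in sources:
        r = float(np.linalg.norm(np.asarray(pos) - np.asarray(target)))
        total += amp / r * np.exp(-1j * k * r)
    return total

# Hypothetical scene: crowd noise in front of the user and a vehicle horn
# to the user's left; the target position (ear canal) is at the origin.
sources = [((1.0, 0.0), 1.0 + 0j), ((0.0, -2.0), 0.5 + 0j)]
estimate = noise_at_target(sources, (0.0, 0.0), f=500.0)
```

The magnitude and phase of `estimate` correspond to the amplitude and phase information of the noise at the target spatial position for this frequency; repeating this per frequency band yields the overall parameter information mentioned above.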
  • the processor 330 may extract the parameter information of the noise of the spatial noise source from the frequency response curve of the spatial noise source picked up by the microphone array through a feature extraction method.
  • the method for extracting the parameter information of the noise of the spatial noise source may include, but is not limited to, Principal Components Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Singular Value Decomposition (SVD), and so on.
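As an illustration of one of the listed feature-extraction methods, the sketch below applies PCA (computed via SVD) to a set of synthetic frequency-response curves. The data shapes and the two underlying spectral patterns are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: 50 measured frequency-response curves of 128 bins,
# generated from 2 underlying spectral patterns plus a little noise.
patterns = rng.standard_normal((2, 128))
weights = rng.standard_normal((50, 2))
curves = weights @ patterns + 0.01 * rng.standard_normal((50, 128))

# PCA via SVD on the mean-centered curves.
centered = curves - curves.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)
# The first two principal components capture nearly all the variance:
# they recover the two underlying spectral "features" of the noise.
```

In this synthetic case the first two components explain over 99% of the variance, which is the sense in which PCA extracts the dominant parameter information from the frequency response curves.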
  • process 1500 is only for example and illustration, and does not limit the scope of application of the present application.
  • process 1500 may further include steps of locating the spatial noise source, extracting noise parameter information of the spatial noise source, and the like. Such corrections and changes are still within the scope of this application.
  • FIG. 16 is an exemplary flowchart for estimating the sound field and noise of a target spatial location according to some embodiments of the present application. As shown in Figure 16, process 1600 may include:
  • In step 1610, a virtual microphone is constructed based on the first microphone array 320. In some embodiments, this step may be performed by processor 330.
  • a virtual microphone may be used to represent or simulate audio data collected by the microphone if the microphone is placed at the target spatial location. That is, the audio data obtained by the virtual microphone can be approximated or equivalent to the audio data collected by the physical microphone if the physical microphone is placed at the target spatial position.
  • the virtual microphone may include a mathematical model.
  • the mathematical model can embody the relationship between the noise or sound field estimate at the target spatial location, on the one hand, and the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the ambient noise picked up by the microphone array (eg, the first microphone array 320) together with the parameters of the microphone array, on the other hand.
  • the parameters of the microphone array may include one or more of the arrangement of the microphone array, the spacing between the microphones, the number and position of the microphones in the microphone array, and the like.
  • the mathematical model can be obtained by calculation based on the initial mathematical model and parameters of the microphone array and parameter information (eg, frequency information, amplitude information, phase information, etc.) of the sound (eg, ambient noise) picked up by the microphone array.
  • the initial mathematical model may include variables corresponding to the parameters of the microphone array and the parameter information of the ambient noise picked up by the microphone array, as well as model parameters to be determined.
  • the parameters of the microphone array, the parameter information of the sound picked up by the microphone array, and the initial values of the model parameters are substituted into the initial mathematical model to obtain the predicted noise or sound field at the target spatial position.
  • This predicted noise or sound field is then compared with the data (noise and sound field estimates) obtained by physical microphones placed at the target spatial location to make adjustments to the model parameters of the mathematical model.
  • the mathematical model is obtained by adjusting multiple times through a large amount of data (for example, parameters of the microphone array and parameter information of ambient noise picked up by the microphone array).
  • the virtual microphone may include a machine learning model.
  • the machine learning model may be obtained through training based on parameters of the microphone array and parameter information (eg, frequency information, amplitude information, phase information, etc.) of the sound (eg, ambient noise) picked up by the microphone array.
  • the machine learning model is obtained by training an initial machine learning model (eg, a neural network model) using the parameters of the microphone array and the parameter information of the sound picked up by the microphone array as training samples.
  • the parameters of the microphone array and the parameter information of the sound picked up by the microphone array can be input into the initial machine learning model, and the prediction results (for example, the noise and sound field estimation of the target spatial position) can be obtained.
  • This prediction is then compared with data (noise and sound field estimates) obtained from physical microphones set up at the target spatial location to adjust the parameters of the initial machine learning model.
  • the parameters of the initial machine learning model are optimized until the prediction results of the initial machine learning model are the same as or approximately the same as the data obtained by the physical microphone set at the target spatial location, at which point the machine learning model is obtained.
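A toy version of such a training procedure might look like the following least-squares fit, in which the "machine learning model" is reduced to a linear mapping from the array signals to the signal at the target position. The array size, mixing coefficients, and noise level are illustrative assumptions; a real model could be a neural network trained by the same loop (predict, compare with the physical microphone temporarily placed at the target position, adjust).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Hypothetical training data: simultaneous samples from 3 physical array
# microphones and from a measurement microphone temporarily placed at the
# target spatial position (available only during training).
array_sig = rng.standard_normal((n, 3))
true_mix = np.array([0.5, 0.3, -0.2])        # unknown acoustic mapping
target_sig = array_sig @ true_mix + 0.01 * rng.standard_normal(n)

# "Training": least-squares fit from the array signals to the target signal.
w, *_ = np.linalg.lstsq(array_sig, target_sig, rcond=None)

# Prediction: the virtual microphone now estimates the sound at the target
# position from the physical array alone, with no microphone in the ear.
pred = array_sig @ w
rms_err = float(np.sqrt(np.mean((pred - target_sig) ** 2)))
```

Once trained, the fitted weights play the role of the virtual microphone: the physical measurement microphone at the target position is no longer needed.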
  • Virtual microphone technology makes it possible to dispense with physical microphones at locations where microphone placement is difficult (eg, the target spatial location). For example, in order to keep the user's ears open without blocking the user's ear canal, a physical microphone cannot be set at the position of the user's ear hole (eg, a target spatial position). In this case, the microphone array can be set at a position close to the user's ear without blocking the ear canal, and a virtual microphone at the position of the user's ear hole can then be constructed through the microphone array.
  • the virtual microphone may utilize physical microphones (eg, first microphone array 320 ) at a first location to predict sound data (eg, amplitude, phase, sound pressure, sound field, etc.) at a second location (eg, a target spatial location).
  • the accuracy of the sound data at the second position (which may also be referred to as a specific position, such as the target spatial position) predicted by the virtual microphone may be adjusted based on the distance between the virtual microphone and the physical microphones (the first microphone array 320), the type of the virtual microphone (eg, mathematical model virtual microphone, machine learning virtual microphone), etc. For example, the closer the distance between the virtual microphone and the physical microphones, the more accurate the sound data at the second position predicted by the virtual microphone.
  • the sound data of the second position predicted by the machine learning virtual microphone is more accurate than that of the mathematical model virtual microphone.
  • the position corresponding to the virtual microphone (ie, the second position, for example, the target spatial position) may be near the first microphone array 320, or may be far away from the first microphone array 320.
  • In step 1620, the noise and sound field of the target spatial location are estimated based on the virtual microphone. In some embodiments, this step may be performed by processor 330.
  • when the virtual microphone includes a mathematical model, the processor 330 may, in real time, input the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the ambient noise picked up by the first microphone array and the parameters of the first microphone array (for example, the arrangement of the first microphone array, the spacing between the individual microphones, the number of microphones in the first microphone array) into the mathematical model as its parameters, so as to estimate the noise and sound field at the target spatial location.
  • when the virtual microphone includes a machine learning model, the processor 330 may, in real time, input the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the ambient noise picked up by the first microphone array and the parameters of the first microphone array (eg, the arrangement of the first microphone array, the spacing between the individual microphones, the number of microphones in the first microphone array) into the machine learning model, and the noise and sound field at the target spatial location are estimated based on the output of the machine learning model.
  • process 1600 is only for example and description, and does not limit the scope of application of the present application.
  • step 1620 may be divided into two steps to estimate the noise and the sound field of the target spatial location, respectively. Such corrections and changes are still within the scope of this application.
  • the speaker 340 outputs a target signal based on the noise reduction signal. After the target signal cancels the ambient noise, there may still be a portion of the sound signal near the user's ear canal that has not been canceled. These uncanceled sound signals may be residual ambient noise and/or residual target signal, so there is still a certain amount of noise at the user's ear canal.
  • the earphone 100 shown in FIG. 1 and the earphone 300 shown in FIGS. 3 to 12 may further include a second microphone 360 .
  • the second microphone 360 may be located on the body portion 312 (eg, the holding portion 3122).
  • the second microphone 360 may be configured to pick up ambient noise and target signals.
  • the number of the second microphones 360 may be one or more.
  • when the number of the second microphones 360 is one, the second microphone can be used to pick up the ambient noise and the target signal at the user's ear canal, so as to monitor the sound field at the user's ear canal after the target signal and the ambient noise are canceled.
  • when the number of the second microphones 360 is multiple, the multiple second microphones can be used to pick up the ambient noise and the target signal at the user's ear canal, and the noise at the user's ear canal can be estimated from the relevant parameter information of the sound signals picked up by the multiple microphones by means of averaging or weighting algorithms.
  • when the number of the second microphones 360 is multiple, some of the multiple microphones can be used to pick up the ambient noise and the target signal at the user's ear canal, and the remaining microphones can be used as the first microphone array 320. In this case, the microphones in the first microphone array 320 and the second microphones 360 overlap or intersect.
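The averaging or weighting mentioned above can be sketched as follows. The readings, distances, and inverse-distance weighting scheme are illustrative assumptions; the disclosure does not specify a particular weighting.

```python
import numpy as np

# Hypothetical readings: residual sound amplitudes picked up by three
# second microphones at slightly different distances from the ear canal.
readings = np.array([0.30, 0.34, 0.28])
distances_mm = np.array([5.0, 7.0, 9.0])

# Simple averaging treats every microphone equally.
estimate_mean = float(readings.mean())

# Weighted averaging: closer microphones get larger weights
# (inverse-distance weighting is an assumption, not from the disclosure).
weights = (1.0 / distances_mm) / np.sum(1.0 / distances_mm)
estimate_weighted = float(weights @ readings)
```

Either estimate can serve as the monitored noise level at the ear canal, which the processor 330 then uses to update the noise reduction signal.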
  • the second microphone 360 may be disposed in a second target area, and the second target area may be an area on the holding portion 3122 close to the user's ear canal.
  • the second target area may be area H in FIG. 10 .
  • the area H may be a partial area of the holding part 3122 close to the user's ear canal. That is, the second microphone 360 may be located at the holding part 3122.
  • the region H may be a partial region in the first region 3122A on the side of the holding portion 3122 facing the user's ear.
  • the second microphone 360 can be located near the user's ear canal and closer to the user's ear canal than the first microphone array 320, thereby ensuring that the sound signals picked up by the second microphone 360 (for example, residual ambient noise, residual target signal, etc.) are closer to the sound actually heard by the user. The processor 330 then further updates the noise reduction signal according to the sound signal picked up by the second microphone 360, so as to achieve a more ideal noise reduction effect.
  • the position of the second microphone 360 on the holding part 3122 can be adjusted so that the distance between the second microphone 360 and the user's ear canal is within a suitable range.
  • the distance between the second microphone 360 and the user's ear canal may be less than 10 mm.
  • the distance between the second microphone 360 and the user's ear canal may be less than 9 mm.
  • the distance between the second microphone 360 and the user's ear canal may be less than 8 mm.
  • the distance between the second microphone 360 and the user's ear canal may be less than 7 mm.
  • the second microphone 360 needs to pick up the target signal output by the speaker 340 through the sound outlet 301, as well as the residual signal remaining after the target signal cancels the ambient noise.
  • the distance between the second microphone 360 and the sound outlet 301 can be set reasonably.
  • the distance between the second microphone 360 and the sound exit hole 301 in the direction of the sagittal axis (Y axis) may be less than 10 mm.
  • the distance between the second microphone 360 and the sound exit hole 301 in the direction of the sagittal axis (Y axis) may be less than 9 mm. In some embodiments, on the sagittal plane (YZ plane) of the user, the distance between the second microphone 360 and the sound exit hole 301 along the sagittal axis (Y axis) direction may be less than 8 mm. In some embodiments, on the sagittal plane (YZ plane) of the user, the distance between the second microphone 360 and the sound exit hole 301 in the direction of the sagittal axis (Y axis) may be less than 7 mm.
  • the distance between the second microphone 360 and the sound exit hole 301 along the vertical axis (Z axis) direction may be 3 mm to 6 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the sound exit hole 301 along the vertical axis (Z axis) direction may be 2.5 mm to 5.5 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the sound exit hole 301 along the vertical axis (Z axis) direction may be 3 mm to 5 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the sound exit hole 301 along the vertical axis (Z axis) direction may be 3.5 mm to 4.5 mm.
  • the distance between the second microphone 360 and the first microphone array 320 along the vertical axis (Z axis) direction may be 2 mm to 8 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 in the direction of the vertical axis (Z axis) may be 3 mm to 7 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 along the vertical axis (Z axis) direction may be 4 mm to 6 mm.
  • the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) direction may be 2 mm to 20 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) may be 4 mm to 18 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) may be 5 mm to 15 mm.
  • the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) may be 6 mm to 12 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) direction may be 8 mm to 10 mm.
  • the distance between the second microphone 360 and the first microphone array 320 in the direction of the coronal axis (X axis) may be less than 3 mm in the transverse plane (XY plane) of the user. In some embodiments, the distance between the second microphone 360 and the first microphone array 320 in the direction of the coronal axis (X axis) may be less than 2.5 millimeters in the cross-section (XY plane) of the user. In some embodiments, the distance between the second microphone 360 and the first microphone array 320 in the direction of the coronal axis (X axis) may be less than 2 millimeters in the cross-section (XY plane) of the user. It can be understood that the distance between the second microphone 360 and the first microphone array 320 may be the distance between the second microphone 360 and any microphone in the first microphone array 320 .
  • the second microphone 360 is configured to pick up ambient noise and target signals. Further, the processor 330 can update the noise reduction signal based on the sound signal picked up by the second microphone 360, thereby further improving the active noise reduction effect of the earphone 300.
  • Figure 17 is an exemplary flow diagram of updating a noise reduction signal according to some embodiments of the present application. As shown in Figure 17, process 1700 may include:
  • in step 1710, the sound field at the user's ear canal is estimated based on the sound signal picked up by the second microphone 360.
  • this step may be performed by processor 330 .
  • the sound signal picked up by the second microphone 360 includes the ambient noise and the target signal output by the speaker 340 that have not been completely cancelled. These uncancelled sound signals may be residual ambient noise and/or residual target signal, so a certain amount of noise still exists at the user's ear canal after the ambient noise and the target signal cancel each other.
  • the processor 330 may process the sound signal (e.g., ambient noise, target signal) picked up by the second microphone 360 to obtain parameter information of the sound field at the user's ear canal, such as frequency information, amplitude information, and phase information, thereby achieving an estimation of the sound field at the user's ear canal.
  • in step 1720, the noise reduction signal is updated according to the sound field at the user's ear canal.
  • step 1720 may be performed by processor 330 .
  • the processor 330 may adjust the parameter information (e.g., frequency information, amplitude information and/or phase information) of the noise reduction signal according to the parameter information of the sound field at the user's ear canal obtained in step 1710, so that the amplitude information and frequency information of the updated noise reduction signal are more consistent with those of the ambient noise at the user's ear canal, and the phase information of the updated noise reduction signal is more exactly opposite to the phase of the ambient noise at the user's ear canal, allowing the updated noise reduction signal to cancel the ambient noise more accurately.
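One common way to realize the feedback update described above is an adaptive filter driven by the residual sound at the ear canal; the normalized-LMS step below is a minimal sketch of that idea and is not claimed to be the patent's actual algorithm (the function name, sign convention, and step size `mu` are assumptions).

```python
import numpy as np

def nlms_update(w, x_buf, residual, mu=0.1, eps=1e-8):
    """One normalized-LMS step for a feedback noise-reduction filter.

    w        : current filter taps producing the noise-reduction signal
    x_buf    : most recent reference (noise) samples, same length as w
    residual : sound still measured at the ear canal after cancellation
               (convention: residual = desired - filter output)
    Returns the updated taps; driving the residual toward zero is the goal.
    """
    x_buf = np.asarray(x_buf, dtype=float)
    norm = eps + x_buf @ x_buf          # input power, regularized
    return np.asarray(w, dtype=float) + mu * residual * x_buf / norm
```

Called once per sample (or per block), the taps move in the direction that reduces the measured residual.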
  • the microphone that picks up the sound field at the user's ear canal is not limited to the second microphone 360 and may also include other microphones, such as a third microphone, a fourth microphone, etc.; the sound field at the user's ear canal can then be estimated from the relevant parameter information picked up by the multiple microphones, for example, by averaging or weighting algorithms.
  • the second microphone 360 may include a microphone that is closer to the user's ear canal than any microphone in the first microphone array 320 .
  • the sound signal picked up by the first microphone array 320 is the ambient noise, and the sound signal picked up by the second microphone 360 is the ambient noise together with the target signal.
  • the processor 330 may estimate the sound field at the user's ear canal according to the sound signal picked up by the second microphone 360 to update the noise reduction signal. The second microphone 360 needs to monitor the sound field at the user's ear canal after the noise reduction signal and the ambient noise are canceled.
  • the second microphone 360 includes a microphone that is closer to the user's ear canal than any microphone in the first microphone array 320, which can more accurately characterize the sound field.
  • the sound signal heard by the user is estimated through the sound field picked up by the second microphone 360 so as to update the noise reduction signal, which can further improve the noise reduction effect and the user's listening experience.
  • the earphone 300 may not include the above-mentioned first microphone array, but only use the second microphone 360 to perform active noise reduction.
  • the processor 330 may regard the ambient noise picked up by the second microphone 360 as the noise at the user's ear canal, and generate a feedback signal based on this to adjust the noise reduction signal, so as to cancel or reduce the ambient noise at the user's ear canal.
  • the processor 330 can update the noise reduction signal according to the sound signal at the user's ear canal after the target signal and the ambient noise are cancelled, so as to further improve the active noise reduction effect of the earphone 300 .
  • Figure 18 is an exemplary noise reduction flow diagram for a headset according to some embodiments of the present application. As shown in Figure 18, process 1800 may include:
  • in step 1810, the picked-up ambient noise is divided into multiple frequency bands, the multiple frequency bands corresponding to different frequency ranges.
  • this step may be performed by processor 330 .
  • the ambient noise picked up by the microphone array (eg, the first microphone array 320 ) contains different frequency components.
  • the processor 330 may divide the environmental noise frequency band into a plurality of frequency bands, and each frequency band corresponds to a different frequency range.
  • the frequency range corresponding to each frequency band here may be a preset frequency range, for example, 20-100Hz, 100Hz-1000Hz, 3000Hz-6000Hz, 9000Hz-20000Hz, and so on.
  • in step 1820, based on at least one of the multiple frequency bands, a noise reduction signal corresponding to each of the at least one frequency band is generated.
  • this step may be performed by processor 330 .
  • the processor 330 may analyze the frequency bands divided by the environmental noise to obtain parameter information (eg, frequency information, amplitude information, phase information, etc.) of the environmental noise corresponding to each frequency band.
  • the processor 330 generates a noise reduction signal corresponding to each of the at least one frequency band according to the parameter information. For example, for the 20Hz-100Hz band, the processor 330 may generate the noise reduction signal corresponding to that band based on the parameter information (e.g., frequency information, amplitude information, phase information, etc.) of the ambient noise in the 20Hz-100Hz band.
  • the speaker 340 outputs the target signal based on the noise reduction signal in the frequency band of 20Hz-100Hz.
  • the speaker 340 may output a target signal that is approximately opposite in phase and approximately equal in amplitude to the noise in the frequency band 20Hz-100Hz to cancel the noise in this frequency band.
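The per-band "equal amplitude, opposite phase" idea above can be sketched with an FFT mask: keep only the bins inside the band and flip the sign. This is an illustrative frequency-domain sketch (the function name and block-FFT approach are assumptions; a real-time system would use filters with latency constraints).

```python
import numpy as np

def band_noise_reduction_signal(noise, fs, band):
    """Build an anti-noise signal for one frequency band.

    Keeps only the FFT bins inside `band` (lo, hi in Hz) and negates
    them, i.e. equal amplitude and (approximately) opposite phase.
    """
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(noise), d=1.0 / fs)
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    anti = np.zeros_like(spectrum)
    anti[mask] = -spectrum[mask]        # invert phase within the band only
    return np.fft.irfft(anti, n=len(noise))
```

For a 50 Hz tone and the 20-100 Hz band, the returned signal sums with the noise to (numerically) zero; for a band that excludes 50 Hz, the returned signal is essentially silent.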
  • generating a noise reduction signal corresponding to each of the at least one frequency band based on at least one of the plurality of frequency bands may include obtaining sound pressure levels corresponding to the plurality of frequency bands, and based on the plurality of frequency bands Corresponding sound pressure levels and frequency ranges corresponding to multiple frequency bands generate noise reduction signals corresponding to only part of the frequency bands.
  • the sound pressure levels of ambient noise in different frequency bands picked up by the microphone array may be different.
  • the processor 330 analyzes the frequency bands divided by the environmental noise, and can obtain the sound pressure level corresponding to each frequency band.
  • considering structural differences among open earphones (e.g., the earphone 300) and changes of the transfer function caused by differences in users' ear structures, which lead to different wearing positions of the earphone, the earphone 300 may select only some of the frequency bands of the ambient noise for active noise reduction.
  • the processor 330 generates noise reduction signals corresponding to only part of the frequency bands based on the sound pressure levels and frequency ranges of the plurality of frequency bands. For example, when the low frequency (eg, 20Hz-100Hz) in the ambient noise is loud (eg, the sound pressure level is greater than 60dB), the open-back earphone may not emit a sufficiently large noise reduction signal to cancel the low frequency noise.
  • the processor 330 may generate only the noise reduction signal corresponding to the higher frequency partial frequency band (eg, 100Hz-1000Hz, 3000Hz-6000Hz) in the ambient noise frequency band.
  • the processor 330 may only generate a noise reduction signal corresponding to a lower frequency part of the frequency band (eg, 20Hz-100Hz) in the ambient noise frequency band.
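The band-selection logic sketched in the preceding bullets can be written as a simple rule over per-band sound pressure levels. The thresholds (60 dB SPL, 100 Hz) come from the example above, but the function name and the exact rule are illustrative assumptions, not the patent's specification.

```python
import numpy as np

P_REF = 20e-6  # reference sound pressure, 20 micropascals

def select_bands(band_pressures, bands, spl_limit=60.0, f_min=100.0):
    """Pick the frequency bands to actively cancel.

    band_pressures : RMS sound pressure (Pa) measured per band
    bands          : list of (lo, hi) frequency ranges in Hz
    Skips low-frequency bands whose SPL exceeds `spl_limit`, on the
    assumption that an open earphone cannot emit a large enough
    anti-noise signal for them.
    """
    selected = []
    for p, (lo, hi) in zip(band_pressures, bands):
        spl = 20.0 * np.log10(p / P_REF)    # convert pressure to dB SPL
        if lo < f_min and spl > spl_limit:
            continue                        # too loud and too low: skip
        selected.append((lo, hi))
    return selected
```

A 0.05 Pa band (about 68 dB SPL) starting at 20 Hz is skipped, while the same level at 100-1000 Hz is kept.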
  • FIG. 19 is an exemplary flowchart for estimating noise at a spatial location of a target, according to some embodiments of the present application. As shown in Figure 19, process 1900 may include:
  • in step 1910, components associated with the signal picked up by the bone conduction microphone are removed from the picked-up ambient noise in order to update the ambient noise.
  • this step may be performed by processor 330 .
  • when the user wearing the headset speaks, the user's own speaking voice is also picked up by the microphone array (e.g., the first microphone array 320); that is, the user's own speaking voice is also regarded as a part of the ambient noise.
  • if the target signal output by the speaker (e.g., the speaker 340) cancelled the user's own voice as part of the ambient noise, the listening experience would suffer in scenarios where the user's own voice needs to be preserved, for example, when the user makes a voice call or sends a voice message with the headset (e.g., the headset 300).
  • the headset may include a bone conduction microphone.
  • the bone conduction microphone may pick up the user's speech by capturing the vibration signals generated by the facial bones or muscles when the user speaks, and transmit the speech signal to the processor 330.
  • the processor 330 acquires parameter information from the sound signal picked up by the bone conduction microphone, and removes sound signal components associated with the sound signal picked up by the bone conduction microphone from the ambient noise picked up by the microphone array.
  • the processor 330 updates the ambient noise according to the remaining parameter information of the ambient noise.
  • the updated ambient noise no longer includes the sound signal of the user's own speech; that is, the user can still hear his or her own voice when making a voice call.
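Removing the component of the ambient noise correlated with the bone-conduction signal can be sketched as a least-squares fit and subtraction. The single scalar gain below is a deliberate simplification (a real system would likely use an adaptive multi-tap filter); the function name is an assumption for illustration.

```python
import numpy as np

def remove_own_voice(ambient, bone, eps=1e-12):
    """Remove the component of `ambient` correlated with the bone-mic signal.

    Fits a single least-squares gain g so that g * bone best matches the
    ambient recording, then subtracts it, leaving the uncorrelated part.
    """
    ambient = np.asarray(ambient, dtype=float)
    bone = np.asarray(bone, dtype=float)
    g = (ambient @ bone) / (bone @ bone + eps)  # least-squares coupling gain
    return ambient - g * bone
```

If the ambient recording is "twice the bone signal plus something orthogonal to it", the function returns exactly the orthogonal part, i.e., the noise with the user's own voice removed.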
  • in step 1920, the noise at the target spatial position is estimated according to the updated ambient noise.
  • this step may be performed by processor 330 .
  • Step 1920 may be performed in a similar manner to step 1420, and the related description is not repeated here.
  • process 1900 is only for illustration and description, and does not limit the scope of application of the present application.
  • those skilled in the art can make various modifications and changes to process 1900 under the guidance of the present application. For example, components associated with the signal picked up by the bone conduction microphone may also be preprocessed, and the signal picked up by the bone conduction microphone may be transmitted to the terminal device as an audio signal. Such modifications and changes remain within the scope of the present application.
  • the noise reduction signal may also be updated based on manual user input. For example, in some embodiments, due to differences in ear structures or different wearing states of the earphone 300 among users, the active noise reduction effect of the earphone 300 may vary, resulting in an unsatisfactory listening experience. In this case, the user can manually adjust the parameter information (e.g., frequency information, phase information or amplitude information) of the noise reduction signal according to his or her own hearing, so as to match the wearing positions of different users and improve the active noise reduction performance of the earphone 300.
  • for some special users (e.g., hearing-impaired users) whose hearing ability differs from that of an ordinary user, the noise reduction signal generated by the earphone 300 itself may not match the special user's hearing ability, resulting in a poor listening experience for such users.
  • the special user can manually adjust the frequency information, phase information or amplitude information of the noise reduction signal according to his own hearing effect, so as to update the noise reduction signal to improve the hearing experience of the special user.
  • the way for the user to manually adjust the noise reduction signal may be manual adjustment through the keys on the earphone 300 .
  • any position of the fixing structure 310 of the earphone 300 may be provided with keys for user adjustment, so as to adjust the active noise reduction effect of the earphone 300 and thereby improve the user's listening experience with the headset 300.
  • the way for the user to manually adjust the noise reduction signal may also be manual input adjustment through a terminal device.
  • the earphone 300, or an electronic product communicatively connected to it such as a mobile phone, tablet computer, or computer, can display the sound field at the user's ear canal and feed back to the user the suggested frequency range and amplitude of the noise reduction signal.
  • the user can manually input the suggested parameter information of the noise reduction signal and then fine-tune it according to his or her own listening experience.
  • aspects of this application may be illustrated and described in several patentable categories or situations, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software.
  • the above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component" or "system".
  • aspects of the present application may be embodied as a computer product comprising computer readable program code embodied in one or more computer readable media.
  • a computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, in baseband or as part of a carrier wave.
  • the propagated signal may take a variety of forms, including electromagnetic, optical, etc., or a suitable combination thereof.
  • computer storage media can be any computer-readable media, other than computer-readable storage media, that can communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code on a computer storage medium may be transmitted over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
  • the computer program code required for the operation of the various parts of this application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages.
  • the program code may run entirely on the user's computer, or as a stand-alone software package on the user's computer, or partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (e.g., through the Internet), or used in a cloud computing environment, or as a service, e.g., software as a service (SaaS).

Abstract

One or more embodiments of this specification relate to an earphone, the earphone comprising: a fixing structure configured to fix the earphone at a position near the user's ear without blocking the user's ear canal, the fixing structure comprising a hook portion and a body portion, wherein, when the user wears the earphone, the hook portion is hung between a first side of the user's ear and the head, and the body portion contacts a second side of the ear; a first microphone array, located at the body portion, configured to pick up ambient noise; a processor, located at the hook portion or the body portion, configured to: estimate, using the first microphone array, the sound field at a target spatial position, the target spatial position being closer to the user's ear canal than any microphone in the first microphone array, and generate a noise reduction signal based on the sound field estimate at the target spatial position; and a speaker, located at the body portion, configured to output a target signal according to the noise reduction signal, the target signal being transmitted to the outside of the earphone through a sound outlet to reduce the ambient noise.

Description

Earphone
CROSS-REFERENCE
This application claims priority to International Application No. PCT/CN2021/109154 filed on July 29, 2021, International Application No. PCT/CN2021/089670 filed on April 25, 2021, and International Application No. PCT/CN2021/091652 filed on April 30, 2021, the contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of acoustics, and in particular to an earphone.
BACKGROUND
Active noise cancellation is a method in which the earphone's speaker outputs sound waves opposite to the ambient noise in order to cancel it. Earphones can generally be divided into in-ear earphones and open earphones. In-ear earphones block the user's ear during use, and long-term wearing easily causes feelings of blockage, foreign-body sensation, and swelling pain. Open earphones leave the user's ear open and are suitable for long-term wearing, but when external noise is loud their noise reduction effect is not obvious, degrading the user's listening experience.
Therefore, it is desirable to provide an earphone and a noise reduction method that can leave the user's ears open and improve the user's listening experience.
SUMMARY
Embodiments of the present application provide an earphone, comprising: a fixing structure configured to fix the earphone at a position near the user's ear without blocking the user's ear canal, the fixing structure comprising a hook portion and a body portion, wherein, when the user wears the earphone, the hook portion is hung between a first side of the user's ear and the head, and the body portion contacts a second side of the ear; a first microphone array, located at the body portion, configured to pick up ambient noise; a processor, located at the hook portion or the body portion, configured to: estimate, using the first microphone array, the sound field at a target spatial position, the target spatial position being closer to the user's ear canal than any microphone in the first microphone array, and generate a noise reduction signal based on the sound field estimate at the target spatial position; and a speaker, located at the body portion, configured to output a target signal according to the noise reduction signal, the target signal being transmitted to the outside of the earphone through a sound outlet to reduce the ambient noise.
In some embodiments, the body portion includes a connecting portion and a holding portion, wherein, when the user wears the earphone, the holding portion contacts the second side of the ear, and the connecting portion connects the hook portion and the holding portion.
In some embodiments, when the user wears the earphone, the connecting portion extends from the first side of the ear to the second side of the ear; the connecting portion cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear, and the connecting portion cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear.
In some embodiments, in the direction from a first connection point between the hook portion and the connecting portion to the free end of the hook portion, the hook portion bends toward the first side of the ear and forms a first contact point with the first side of the ear, and the holding portion forms a second contact point with the second side of the ear, wherein the distance between the first contact point and the second contact point along the extension direction of the connecting portion in the natural state is smaller than that in the worn state, thereby providing the holding portion with a pressing force against the second side of the ear and providing the hook portion with a pressing force against the first side of the ear.
In some embodiments, in the direction from the first connection point between the hook portion and the connecting portion to the free end of the hook portion, the hook portion bends toward the head and forms a first contact point and a third contact point with the head, wherein the first contact point is located between the third contact point and the first connection point, so that the hook portion forms a lever structure with the first contact point as a fulcrum; the force directed toward the outside of the head provided by the head at the third contact point is converted by the lever structure into a force directed toward the head at the first connection point, which in turn provides, via the connecting portion, the holding portion with a pressing force against the second side of the ear.
In some embodiments, the speaker is disposed at the holding portion, and the holding portion is a multi-segment structure, so as to adjust the relative position of the speaker in the overall structure of the earphone.
In some embodiments, the holding portion includes a first holding segment, a second holding segment, and a third holding segment connected end to end in sequence; the end of the first holding segment facing away from the second holding segment is connected to the connecting portion, the second holding segment folds back relative to the first holding segment with a spacing so that the first holding segment and the second holding segment form a U-shaped structure, and the speaker is disposed at the third holding segment.
In some embodiments, the holding portion includes a first holding segment, a second holding segment, and a third holding segment connected end to end in sequence; the end of the first holding segment facing away from the second holding segment is connected to the connecting portion, the second holding segment bends relative to the first holding segment, the third holding segment and the first holding segment are arranged side by side with a spacing, and the speaker is disposed at the third holding segment.
In some embodiments, the side of the holding portion facing the ear is provided with the sound outlet, so that the target signal output by the speaker is transmitted toward the ear through the sound outlet.
In some embodiments, the side of the holding portion facing the ear includes a first region and a second region; the first region is provided with the sound outlet, and the second region is farther from the connecting portion than the first region and protrudes toward the ear relative to the first region, so that the sound outlet is spaced apart from the ear in the worn state.
In some embodiments, when the user wears the earphone, the distance between the sound outlet and the user's ear canal is less than 10 mm.
In some embodiments, a pressure relief hole is provided on the side of the holding portion that is, along the vertical-axis direction, close to the top of the user's head, the pressure relief hole being farther from the user's ear canal than the sound outlet.
In some embodiments, when the user wears the earphone, the distance between the pressure relief hole and the user's ear canal is 5 mm to 15 mm.
In some embodiments, the angle between the line connecting the pressure relief hole and the sound outlet and the thickness direction of the holding portion is 0° to 50°.
In some embodiments, the pressure relief hole and the sound outlet form an acoustic dipole, and the first microphone array is disposed in a first target area, the first target area being an acoustic null position of the radiated sound field of the dipole.
In some embodiments, the first microphone array is located at the connecting portion.
In some embodiments, the line connecting the first microphone array and the sound outlet forms a first angle with the line connecting the sound outlet and the pressure relief hole, and the line connecting the first microphone array and the pressure relief hole forms a second angle with the line connecting the sound outlet and the pressure relief hole; the difference between the first angle and the second angle is not greater than 30°.
In some embodiments, there is a first distance between the first microphone array and the sound outlet and a second distance between the first microphone array and the pressure relief hole; the difference between the first distance and the second distance is not greater than 6 mm.
In some embodiments, generating the noise reduction signal based on the sound field estimate at the target spatial position includes: estimating the noise at the target spatial position based on the picked-up ambient noise; and generating the noise reduction signal based on the noise at the target spatial position and the sound field estimate at the target spatial position.
In some embodiments, the earphone further includes one or more sensors, located at the hook portion and/or the body portion, configured to acquire motion information of the earphone; and the processor is further configured to: update the noise at the target spatial position and the sound field estimate at the target spatial position based on the motion information; and generate the noise reduction signal based on the updated noise at the target spatial position and the updated sound field estimate at the target spatial position.
In some embodiments, estimating the noise at the target spatial position based on the picked-up ambient noise includes: determining one or more spatial noise sources related to the picked-up ambient noise; and estimating the noise at the target spatial position based on the spatial noise sources.
In some embodiments, estimating the sound field at the target spatial position using the first microphone array includes: constructing a virtual microphone based on the first microphone array, the virtual microphone including a mathematical model or a machine learning model representing the audio data that a microphone would collect if placed at the target spatial position; and estimating the sound field at the target spatial position based on the virtual microphone.
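The "virtual microphone" idea in the preceding embodiment can be illustrated with the simplest possible model: a linear map, fitted by least squares, from the array signals to a signal once measured at the target position (e.g., during a calibration with a real ear-canal microphone). The function names and the calibration setup are assumptions for this sketch; the patent allows any mathematical or machine-learning model.

```python
import numpy as np

def fit_virtual_microphone(array_frames, ear_frames):
    """Fit a linear 'virtual microphone' model by least squares.

    array_frames : (n_frames, n_mics) signals from the microphone array
    ear_frames   : (n_frames,) signal measured at the target position
                   during calibration
    Returns weights mapping array signals to the target-position signal.
    """
    A = np.asarray(array_frames, dtype=float)
    y = np.asarray(ear_frames, dtype=float)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def virtual_microphone(array_frames, w):
    """Predict the sound at the target spatial position from array signals."""
    return np.asarray(array_frames, dtype=float) @ w
```

After calibration, only the array signals are needed: `virtual_microphone` predicts what a microphone at the target position would pick up.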
In some embodiments, generating the noise reduction signal based on the sound field estimate at the target spatial position includes: estimating the noise at the target spatial position based on the virtual microphone; and generating the noise reduction signal based on the noise at the target spatial position and the sound field estimate at the target spatial position.
In some embodiments, the earphone includes a second microphone, located at the body portion, the second microphone being configured to pick up the ambient noise and the target signal; and the processor is configured to update the target signal based on the sound signal picked up by the second microphone.
In some embodiments, the second microphone includes at least one microphone that is closer to the user's ear canal than any microphone in the first microphone array.
In some embodiments, the second microphone is disposed in a second target area, the second target area being a region on the holding portion close to the user's ear canal.
In some embodiments, when the user wears the earphone, the distance between the second microphone and the user's ear canal is less than 10 mm.
In some embodiments, on the sagittal plane of the user, the distance between the second microphone and the sound outlet along the sagittal-axis direction is less than 10 mm.
In some embodiments, on the sagittal plane of the user, the distance between the second microphone and the sound outlet along the vertical-axis direction is 2 mm to 5 mm.
In some embodiments, updating the noise reduction signal based on the sound signal picked up by the second microphone includes: estimating the sound field at the user's ear canal based on the sound signal picked up by the second microphone; and updating the noise reduction signal according to the sound field at the user's ear canal.
In some embodiments, generating the noise reduction signal based on the sound field estimate at the target spatial position includes: dividing the picked-up ambient noise into multiple frequency bands, the multiple frequency bands corresponding to different frequency ranges; and generating, based on at least one of the multiple frequency bands, the noise reduction signal corresponding to each of the at least one frequency band.
In some embodiments, generating the noise reduction signal corresponding to each of the at least one frequency band based on at least one of the multiple frequency bands includes: obtaining sound pressure levels of the multiple frequency bands; and generating, based on the sound pressure levels of the multiple frequency bands and the frequency ranges of the multiple frequency bands, the noise reduction signal corresponding to only some of the frequency bands.
In some embodiments, the first microphone array or the second microphone includes a bone conduction microphone configured to pick up the user's speaking voice, and estimating, by the processor, the noise at the target spatial position based on the picked-up ambient noise includes: removing, from the picked-up ambient noise, components associated with the signal picked up by the bone conduction microphone, so as to update the ambient noise; and estimating the noise at the target spatial position according to the updated ambient noise.
In some embodiments, the earphone further includes an adjustment module configured to obtain user input; and the processor is further configured to adjust the noise reduction signal according to the user input.
BRIEF DESCRIPTION OF THE DRAWINGS
The present application will be further described by way of exemplary embodiments, which will be described in detail with reference to the accompanying drawings. These embodiments are not restrictive; in these embodiments, the same reference numbers denote the same structures, wherein:
FIG. 1 is a block diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an exemplary ear according to some embodiments of the present application;
FIG. 3 is a structural diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 4 is a wearing diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 5 is a structural diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 6 is a wearing diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 7 is a structural diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 8 is a wearing diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 9A is a structural diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 9B is a structural diagram of an exemplary earphone according to some embodiments of the present application;
FIG. 10 is a structural diagram of the side of an exemplary earphone facing the ear according to some embodiments of the present application;
FIG. 11 is a structural diagram of the side of an exemplary earphone facing away from the ear according to some embodiments of the present application;
FIG. 12 is a top view of an exemplary earphone according to some embodiments of the present application;
FIG. 13 is a schematic cross-sectional view of an exemplary earphone according to some embodiments of the present application;
FIG. 14 is an exemplary noise reduction flowchart of an earphone according to some embodiments of the present application;
FIG. 15 is an exemplary flowchart of estimating the noise at a target spatial position according to some embodiments of the present application;
FIG. 16 is an exemplary flowchart of estimating the sound field and noise at a target spatial position according to some embodiments of the present application;
FIG. 17 is an exemplary flowchart of updating a noise reduction signal according to some embodiments of the present application;
FIG. 18 is an exemplary noise reduction flowchart of an earphone according to some embodiments of the present application;
FIG. 19 is an exemplary flowchart of estimating the noise at a target spatial position according to some embodiments of the present application.
DETAILED DESCRIPTION
To more clearly illustrate the technical solutions of the embodiments of the present application, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of the present application, and those of ordinary skill in the art can, without creative effort, apply the present application to other similar scenarios based on these drawings. Unless obvious from the context or otherwise stated, the same reference numbers in the figures denote the same structures or operations.
It should be understood that "system", "device", "unit" and/or "module" as used herein is a way to distinguish different components, elements, parts, sections or assemblies of different levels. However, these words may be replaced by other expressions if they achieve the same purpose.
As used in the present application and the claims, unless the context clearly indicates otherwise, the words "a", "an", "one" and/or "the" do not specifically refer to the singular and may also include the plural. Generally speaking, the terms "include" and "comprise" only indicate that the clearly identified steps and elements are included; these steps and elements do not constitute an exclusive list, and a method or device may also include other steps or elements.
Flowcharts are used in the present application to illustrate the operations performed by the system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in exact order. Instead, the steps may be processed in reverse order or simultaneously. Moreover, other operations may be added to these processes, or one or more operations may be removed from them.
Some embodiments of this specification provide an earphone. The earphone may be an open earphone. An open earphone can fix the speaker, via a fixing structure, at a position near the user's ear without blocking the user's ear canal. In some embodiments, the earphone may include a fixing structure, a first microphone array, a processor, and a speaker. The fixing structure may be configured to fix the earphone near the user's ear without blocking the user's ear canal. The first microphone array, the processor, and the speaker may be located at the fixing structure to implement the active noise reduction function of the earphone. In some embodiments, the fixing structure may include a hook portion and a body portion; when the user wears the earphone, the hook portion may be hung between the first side of the user's ear and the head, and the body portion contacts the second side of the ear. In some embodiments, the body portion may include a connecting portion and a holding portion; when the user wears the earphone, the holding portion contacts the second side of the ear, and the connecting portion connects the hook portion and the holding portion. The connecting portion extends from the first side of the ear to the second side of the ear; the connecting portion cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear, and the connecting portion cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear, so that the earphone can clamp the user's ear and wearing stability is ensured. In some embodiments, the first microphone array may be located at the body portion of the earphone and used to pick up ambient noise. The processor is located at the hook portion or the body portion of the earphone and used to estimate the sound field at a target spatial position. The target spatial position may include a spatial position at a specific distance close to the user's ear canal; for example, the target spatial position may be closer to the user's ear canal than any microphone in the first microphone array. It can be understood that the microphones in the first microphone array may be distributed at different positions near the user's ear canal, and the processor may estimate the sound field at a position close to the user's ear canal (e.g., the target spatial position) according to the ambient noise collected by each microphone in the first microphone array. The speaker may be located at the body portion (holding portion) and output a target signal according to the noise reduction signal. The target signal may be transmitted to the outside of the earphone through the sound outlet on the holding portion to reduce the ambient noise heard by the user.
In some embodiments, to better reduce the ambient noise heard by the user, the body portion may include a second microphone. By comparison, the second microphone may be closer to the user's ear canal than the first microphone array, so the sound signal it collects is closer to, and can reflect, the sound heard by the user. The processor may update the above noise reduction signal according to the sound signal collected by the second microphone, so as to achieve a better noise reduction effect.
It should be noted that the earphone provided in the embodiments of this specification can be fixed, via the fixing structure, at a position near the user's ear without blocking the user's ear canal, leaving both ears open and improving wearing stability and comfort. Meanwhile, using the first microphone array/second microphone located at the fixing structure (e.g., the body portion) together with the processor, the sound field close to the user's ear canal (e.g., at the target spatial position) is estimated, and the target signal output by the speaker reduces the ambient noise at the user's ear canal, thereby realizing active noise reduction of the earphone and improving the user's listening experience while using the earphone.
FIG. 1 is a block diagram of an exemplary earphone according to some embodiments of the present application.
In some embodiments, the earphone 100 may include a fixing structure 110, a first microphone array 120, a processor 130, and a speaker 140. The first microphone array 120, the processor 130, and the speaker 140 may be located at the fixing structure 110. The earphone 100 can clamp the user's ear via the fixing structure 110 so as to be fixed near the user's ear without blocking the user's ear canal. In some embodiments, the first microphone array 120 located at the fixing structure 110 (e.g., the body portion) can pick up external ambient noise and convert it into an electrical signal, which is transmitted to the processor 130 for processing. The processor 130 is coupled (e.g., electrically connected) to the first microphone array 120 and the speaker 140. The processor 130 may receive and process the electrical signal transmitted by the first microphone array 120 to generate a noise reduction signal, and transmit the generated noise reduction signal to the speaker 140. The speaker 140 may output a target signal according to the noise reduction signal. The target signal may be transmitted to the outside of the earphone 100 through the sound outlet on the fixing structure 110 (e.g., the holding portion) and used to reduce or cancel the ambient noise at the user's ear canal (e.g., the target spatial position), thereby realizing active noise reduction of the earphone 100 and improving the user's listening experience during use of the earphone 100.
In some embodiments, the fixing structure 110 may include a hook portion 111 and a body portion 112. When the user wears the earphone 100, the hook portion 111 may be hung between the first side of the user's ear and the head, and the body portion 112 contacts the second side of the ear. The first side of the ear may be the back side of the user's ear, and the second side may be the front side of the user's ear. The front side of the user's ear refers to the side containing the cymba conchae, triangular fossa, antihelix, scapha, helix, and other parts (see FIG. 2 for the ear structure). The back side of the user's ear refers to the side facing away from the front side, i.e., the side opposite the front side.
In some embodiments, the body portion 112 may include a connecting portion and a holding portion. When the user wears the earphone 100, the holding portion contacts the second side of the ear, and the connecting portion connects the hook portion and the holding portion. The connecting portion extends from the first side of the ear to the second side of the ear; the connecting portion cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear, and the connecting portion cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear, so that the earphone 100 can be clamped near the user's ear by means of the fixing structure 110, ensuring wearing stability of the earphone 100.
In some embodiments, the parts of the hook portion 111 and/or the body portion 112 (the connecting portion and/or the holding portion) that contact the user's ear may be made of a softer material, a harder material, or a combination thereof. A softer material refers to a material with a hardness (e.g., Shore hardness) less than a first hardness threshold (e.g., 15A, 20A, 30A, 35A, 40A). For example, the Shore hardness of the softer material may be 45-85A or 30-60D. A harder material refers to a material with a hardness (e.g., Shore hardness) greater than a second hardness threshold (e.g., 65D, 70D, 80D, 85D, 90D). Softer materials may include, but are not limited to, polyurethanes (PU) (e.g., thermoplastic polyurethanes (TPU)), polycarbonate (PC), polyamides (PA), acrylonitrile butadiene styrene (ABS), polystyrene (PS), high impact polystyrene (HIPS), polypropylene (PP), polyethylene terephthalate (PET), polyvinyl chloride (PVC), polyethylene (PE), phenol formaldehyde (PF), urea-formaldehyde (UF), melamine-formaldehyde (MF), silicone, etc., or combinations thereof. Harder materials may include, but are not limited to, poly(ester sulfones) (PES), polyvinylidene chloride (PVDC), polymethyl methacrylate (PMMA), poly-ether-ether-ketone (PEEK), etc., or combinations thereof, or mixtures thereof with reinforcing agents such as glass fiber and carbon fiber. In some embodiments, the material of the parts of the hook portion 111 and/or the body portion 112 of the fixing structure 110 that contact the user's ear may be selected according to the specific situation. In some embodiments, softer materials can improve the comfort of wearing the earphone 100, and harder materials can improve the strength of the earphone 100; by reasonably configuring the materials of the components of the earphone 100, the strength of the earphone 100 can be improved while improving user comfort.
The first microphone array 120 may be located at the body portion 112 of the fixing structure 110 (e.g., the connecting portion or the holding portion) and used to pick up ambient noise. In some embodiments, ambient noise refers to a combination of multiple external sounds in the environment where the user is located. In some embodiments, by installing the first microphone array 120 at the body portion 112 of the fixing structure 110, the first microphone array 120 can be located near the user's ear canal. Based on the ambient noise obtained in this way, the processor 130 can more accurately calculate the noise actually transmitted to the user's ear canal, which facilitates subsequent active noise reduction of the ambient noise heard by the user.
In some embodiments, the ambient noise may include the user's speaking voice. For example, the first microphone array 120 may pick up ambient noise according to the working state of the earphone 100. The working state of the earphone 100 may refer to the usage state when the user wears the earphone 100. Merely by way of example, the working state of the earphone 100 may include, but is not limited to, a call state, a non-call state (e.g., a music playback state), a voice-message-sending state, etc. When the earphone 100 is in a non-call state, the sound produced by the user's own speech may be regarded as ambient noise, and the first microphone array 120 may pick up the user's own speaking voice as well as other ambient noise. When the earphone 100 is in a call state, the sound produced by the user's own speech may not be regarded as ambient noise, and the first microphone array 120 may pick up ambient noise other than the user's own speaking voice. For example, the first microphone array 120 may pick up noise emitted by noise sources beyond a certain distance (e.g., 0.5 m, 1 m) from the first microphone array 120.
In some embodiments, the first microphone array 120 may include one or more air conduction microphones. For example, when the user uses the earphone 100 to listen to music, the air conduction microphone may simultaneously acquire external environmental noise and the user's speaking voice, and take both together as ambient noise. In some embodiments, the first microphone array 120 may also include one or more bone conduction microphones. A bone conduction microphone may be in direct contact with the user's skin; the vibration signals produced by bones or muscles when the user speaks can be transmitted directly to the bone conduction microphone, which converts them into electrical signals and transmits the electrical signals to the processor 130 for processing. The bone conduction microphone may also not be in direct contact with the human body; in that case, the vibration signals produced by bones or muscles when the user speaks are first transmitted to the fixing structure 110 of the earphone 100 and then from the fixing structure 110 to the bone conduction microphone. In some embodiments, when the user is in a call state, the processor 130 may use the sound signal collected by the air conduction microphone as ambient noise and perform noise reduction based on it, while the sound signal collected by the bone conduction microphone is transmitted to the terminal device as a voice signal, thereby ensuring the user's call quality.
In some embodiments, the processor 130 may control the on/off states of the bone conduction microphones and the air conduction microphones based on the working state of the earphone 100. In some embodiments, when the first microphone array 120 picks up ambient noise, the on/off states of the bone conduction microphones and the air conduction microphones in the first microphone array 120 may be determined according to the working state of the earphone 100. For example, when the user wears the earphone 100 for music playback, the bone conduction microphone may be in a standby state and the air conduction microphone in a working state. As another example, when the user wears the earphone 100 to send a voice message, both the bone conduction microphone and the air conduction microphone may be in a working state. In some embodiments, the processor 130 may control the on/off state of the microphones (e.g., bone conduction microphones, air conduction microphones) in the first microphone array 120 by sending control signals.
In some embodiments, according to the working principle of microphones, the first microphone array 120 may include moving-coil microphones, ribbon microphones, condenser microphones, electret microphones, electromagnetic microphones, carbon microphones, etc., or any combination thereof. In some embodiments, the arrangement of the first microphone array 120 may include a linear array (e.g., straight, curved), a planar array (e.g., regular and/or irregular shapes such as cross-shaped, circular, ring-shaped, polygonal, mesh-shaped), a three-dimensional array (e.g., cylindrical, spherical, hemispherical, polyhedral), etc., or any combination thereof.
The processor 130 may be located at the hook portion 111 or the body portion 112 of the fixing structure 110, and may use the first microphone array 120 to estimate the sound field at the target spatial position. The sound field at the target spatial position may refer to the distribution and variation (e.g., over time, over position) of sound waves at or near the target spatial position. Physical quantities describing a sound field may include sound pressure level, sound frequency, sound amplitude, sound phase, sound source vibration velocity, medium (e.g., air) density, etc. Generally, these physical quantities may be functions of position and time. The target spatial position may refer to a spatial position at a specific distance close to the user's ear canal. The specific distance here may be a fixed distance, e.g., 2 mm, 5 mm, 10 mm. The target spatial position may be closer to the user's ear canal than any microphone in the first microphone array 120. In some embodiments, the target spatial position may be related to the number of microphones in the first microphone array 120 and their distribution relative to the user's ear canal. The target spatial position can be adjusted by adjusting the number of microphones in the first microphone array 120 and/or their distribution relative to the user's ear canal. For example, increasing the number of microphones in the first microphone array 120 can bring the target spatial position closer to the user's ear canal. As another example, the target spatial position can be brought closer to the user's ear canal by reducing the spacing between the microphones in the first microphone array 120. As yet another example, the target spatial position can be brought closer to the user's ear canal by changing the arrangement of the microphones in the first microphone array 120.
In some embodiments, the processor 130 may be further configured to generate a noise reduction signal based on the sound field estimate at the target spatial position. Specifically, the processor 130 may receive the ambient noise acquired by the first microphone array 120 and process it to obtain parameters of the ambient noise (e.g., amplitude, phase), and estimate the sound field at the target spatial position based on these parameters. Further, the processor 130 generates the noise reduction signal based on the sound field estimate at the target spatial position. The parameters (e.g., amplitude, phase) of the noise reduction signal are related to the ambient noise at the target spatial position. Merely by way of example, the amplitude of the noise reduction signal may be approximately equal to the amplitude of the ambient noise at the target spatial position, and the phase of the noise reduction signal may be approximately opposite to that of the ambient noise at the target spatial position.
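The "equal amplitude, opposite phase" relationship just described can be illustrated for a single-tone noise component; this is a toy sketch (the function name and the sinusoidal noise model are assumptions), showing only that a π phase shift cancels the tone.

```python
import numpy as np

def anti_noise(amplitude, frequency, phase, t):
    """Noise-reduction signal with the same amplitude and frequency as the
    estimated noise at the target position but opposite (shifted-by-pi) phase."""
    return amplitude * np.sin(2 * np.pi * frequency * t + phase + np.pi)

# Demo: a 200 Hz noise tone and its cancelling signal.
t = np.arange(0, 0.01, 1e-4)
noise = 0.2 * np.sin(2 * np.pi * 200 * t + 0.3)
cancel = anti_noise(0.2, 200, 0.3, t)
```

Summing `noise` and `cancel` yields (numerically) silence at every sample.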
在一些实施例中,处理器130可以包括硬件模块和软件模块。仅作为示例,硬件模块可以包括但不限于数字信号处理(Digital Signal Processor,DSP)、高级精简指令集机器(Advanced RISC Machines,ARM)、中央处理单元(CPU)、专用集成电路(ASIC)、物理处理单元(PPU)、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、可编程逻辑设备(PLD)、控制器、微处理器等,或其任意组合。软件模块可以包括算法模块。
扬声器140可以位于固定结构110的保持部,当用户佩戴耳机100时,扬声器140位于用户耳部的附近位置。扬声器140可以根据降噪信号输出目标信号。该目标信号可以通过保持部的出声孔向用户的耳部传递,以降低或消除传递到用户耳道的环境噪声。在一些实施例中,根据扬声器的工作原理,扬声器140可以包括电动式扬声器(例如,动圈式扬声器)、磁式扬声器、离子扬声器、静电式扬声器(或电容式扬声器)、压电式扬声器等中的一种或多种。在一些实施例中,根据扬声器输出的声音的传播方式,扬声器140可以包括气导扬声器、骨导扬声器。在一些实施例中,扬声器140的数量可以为一个或多个。当扬声器140的数量为一个时,该扬声器可以输出目标信号以消除环境噪声,并且同时向用户传递有效声音信息(例如,设备媒体音频、通话远端音频)。例如,当扬声器140的数量为一个且为气导扬声器时,该气导扬声器可以用于输出目标信号以消除环境噪声。在这种情况下,目标信号可以为声波(即,空气的振动),该声波可以通过空气传递到目标空间位置处并与环境噪声在目标空间位置处相互抵消。同时,该气导扬声器所输出的声波中还包括有效声音信息。又例如,当扬声器140的数量为一个且为骨导扬声器时,该骨导扬声器可以用于输出目标信号以消除环境噪声。在这种情况下,目标信号可以为振动信号,该振动信号可以通过骨头或组织传递到用户的基底膜并与环境噪声在用户的基底膜处相互抵消。同时,该骨导扬声器所输出的振动信号中还包括有效声音信息。在一些实施例中,当扬声器140的数量为多个时,多个扬声器140中的一部分可以用于输出目标信号以消除环境噪声,另一部分可以用于向用户传递有效声音信息(例如,设备媒体音频、通话远端音频)。例如,当扬声器140的数量为多个且包括骨导扬声器和气导扬声器时,气导扬声器可以用于输出声波以降低或消除环境噪声,骨导扬声器可以用于向用户传递有效声音信息。相比于气导扬声器,骨导扬声器可以将机械振动直 接通过用户的身体(例如,骨骼、皮肤组织等)传递至用户的听觉神经,在此过程中对于拾取环境噪声的气导麦克风的干扰较小。
在一些实施例中,扬声器140和第一麦克风阵列120均位于耳机100的机体部112,扬声器140输出的目标信号也可能被第一麦克风阵列120拾取,而该目标信号是不期望被拾取的,也即是,目标信号不应视为环境噪声的一部分。这种情况下,为了降低扬声器140输出的目标信号对第一麦克风阵列120的影响,第一麦克风阵列120可以设置于第一目标区域。第一目标区域可以是扬声器140所发出的声音在空间中强度较小甚至最小的区域。例如,第一目标区域可以是耳机100(例如,出声孔、泄压孔)形成的声学偶极子的辐射声场的声学零点位置,或者距离声学零点位置一定距离阈值范围内的位置。
应当注意的是,以上关于图1的描述仅仅是出于说明的目的而提供的,并不旨在限制本申请的范围。对于本领域的普通技术人员来说,根据本申请的指导可以做出多种变化和修改。例如,耳机100的固定结构110可以替换为壳体结构,该壳体结构具有适配人耳的形状(如C状、半圆状等),以便耳机100可以挂靠在用户的耳朵附近。在一些实施例中,耳机100中的一个部件可以拆分成多个子部件,或者多个部件可以合并为单个部件。这些变化和修改不会背离本申请的范围。
图2是根据本申请的一些实施例所示的示例性耳部的示意图。
参见图2,耳部200可以包括外耳道201、耳甲腔202、耳甲艇203、三角窝204、对耳轮205、耳舟206、耳轮207、耳垂208以及耳轮脚209。在一些实施例中,可以借助耳部200的一个或多个部位实现耳机(例如,耳机100)的佩戴和稳定。在一些实施例中,外耳道201、耳甲腔202、耳甲艇203、三角窝204等部位在三维空间中具有一定的深度及容积,可以用于实现耳机的佩戴需求。在一些实施例中,开放式耳机(例如,耳机100)可以借助耳甲艇203、三角窝204、对耳轮205、耳舟206、耳轮207等部位或其组合实现开放式耳机的佩戴。在一些实施例中,为了改善耳机在佩戴方面的舒适度及可靠性,也可以进一步借助用户的耳垂208等部位。通过借助耳部200中除外耳道201之外的其他部位,实现耳机的佩戴和声音的传播,可以“解放”用户的外耳道201,降低耳机对用户耳朵健康的影响。当用户在道路上佩戴耳机时,耳机不会堵塞用户外耳道201,用户既可以接收来自耳机的声音又可以接收来自环境中的声音(例如,鸣笛声、车铃声、周围人声、交通指挥声等),从而能够降低交通意外的发生概率。例如,在用户佩戴耳机时,耳机的整体或者部分结构可以位于耳轮脚209的前侧(例如,图2中虚线围成的区域J)。又例如,在用户佩戴耳机时,耳机的整体或者部分结构可以与外耳道201的上部(例如,耳轮脚209、耳甲艇203、三角窝204、对耳轮205、耳舟206、耳轮207等一个或多个部位所在的位置)接触。再例如,在用户佩戴耳机时,耳机的整体或者部分结构可以位于耳部的一个或多个部位(例如,耳甲腔202、耳甲艇203、三角窝204等)内(例如,图2中虚线围成的区域M)。
关于上述耳部200的描述仅是出于阐述的目的,并不旨在限制本申请的范围。对于本领域的普通技术人员来说,可以根据本申请的描述,做出各种各样的变化和修改。例如,对于不同的用户,耳部200中一个或多个部位的结构、形状、大小、厚度等可以不同。又例如,耳机的部分结构可以遮蔽外耳道201的部分或者全部。这些变化和修改仍处于本申请的保护范围之内。
图3是根据本申请的一些实施例所示的示例性耳机的结构图。图4是根据本申请的一些实施例所示的示例性耳机的佩戴图。
参见图3-图4,耳机300可以包括固定结构310、第一麦克风阵列320、处理器330和扬声器340。其中,第一麦克风阵列320、处理器330和扬声器340位于固定结构310处。在一些实施例中,固定结构310可以用于将耳机300挂设在用户耳部附近且不堵塞用户耳道。在一些实施例中,固定结构310可以包括钩状部311和机体部312。在一些实施例中,钩状部311可以包括任何适合用户佩戴的形状,例如,C状、钩状等。在用户佩戴耳机300时,钩状部311可以挂设在用户耳部的第一侧和头部之间。在一些实施例中,机体部312可以包括连接部3121和保持部3122,其中,连接部3121用于连接钩状部311和保持部3122。在用户佩戴耳机300时,保持部3122接触耳部的第二侧,连接部3121从耳部的第一侧向耳部的第二侧延伸,连接部3121的两端分别与钩状部311和保持部3122连接。连接部3121与钩状部311配合可以为保持部3122提供对耳部的第二侧的压紧力,连接部3121与保持部3122配合可以为连接部3121提供对耳部的第一侧的压紧力。
在一些实施例中,耳机300处于非佩戴状态(也即是自然状态)时,连接部3121连接钩状部311与保持部3122,以使得固定结构310在三维空间中呈弯曲状。也可以理解为,在三维空间中,钩状部311、连接部3121、保持部3122不共面。这种设置方式下,可以使得耳机300处于佩戴状态时,如图4所示,钩状部311可以挂设在用户耳部100的第一侧与头部之间,保持部3122接 触用户的耳部100的第二侧,进而使得保持部3122和钩状部311配合以夹持耳部。在一些实施例中,连接部3121可以从头部向头部的外侧(即,从耳部100第一侧向耳部第二侧)延伸,进而与钩状部311配合为保持部3122提供对耳部100的第二侧的压紧力。同时,根据力的相互作用可知,连接部3121从头部向头部的外侧延伸时,也可以与保持部3122配合为钩状部311提供对耳部100的第一侧的压紧力,从而使得固定结构310可以夹持用户耳部100,实现耳机300的佩戴。
在一些实施例中,保持部3122在压紧力的作用下可以抵压耳部,例如,抵压于耳甲艇、三角窝、对耳轮等部位所在的区域,以使得耳机300处于佩戴状态时不遮挡耳部的外耳道。仅作为示例性描述,耳机300处于佩戴状态时,保持部3122在用户的耳部的投影可以落在耳部的耳轮范围内;进一步地,保持部3122可以位于耳部的外耳道靠近用户头顶一侧,并与耳轮和/或对耳轮接触。这种设置方式下,一方面,可以避免保持部3122遮挡外耳道,进而解放用户的双耳。同时,还可以增加保持部3122与耳部之间的接触面积,进而改善耳机300的佩戴舒适性。另一方面,保持部3122位于耳部的外耳道靠近用户头顶一侧时,可以使得位于保持部3122处的扬声器340能够更加靠近用户的耳道,提升用户使用耳机300时的听觉体验。
在一些实施例中,为了提高用户佩戴耳机300的稳定性和舒适性,耳机300还可以弹性夹持耳部。例如,在一些实施例中,耳机300的钩状部311可以包括与连接部3121连接的弹性部(未示出)。弹性部可以具有一定的弹性形变能力,使得钩状部311在外力作用下能够发生形变,进而相对于保持部3122产生位移,以允许钩状部311和保持部3122配合以弹性夹持耳部。具体地,用户在佩戴耳机300的过程中,可以先用力使得钩状部311偏离保持部3122,以便于耳部伸入保持部3122与钩状部311之间;待佩戴位置合适之后,松手以允许耳机300弹性夹持耳部。用户还可以根据实际的佩戴情况进一步调整耳机300在耳部上的位置。
在一些实施例中,不同的用户在年龄、性别、基因控制的性状表达等方面可能存在较大的差异,导致不同的用户的耳部及头部可能大小不一、形状不一。为此,在一些实施例中,钩状部311可以设置为相对于连接部3121可转动,或者保持部3122相对于连接部3121可转动,或者连接部3121中一部分相对于另一部分可转动,以使得钩状部311、连接部3121、保持部3122在三维空间中的相对位置关系可调节,以便于耳机300适配不同的用户,也即是增加耳机300在佩戴方面对用户的适用范围。同时,将钩状部311、连接部3121、保持部3122在三维空间中的相对位置关系设为可调节,还可以调整第一麦克风阵列320和扬声器340相对于用户耳部(如外耳道)的位置,从而提高耳机300的主动降噪的效果。在一些实施例中,连接部3121可以由软钢丝等可形变材料制成,用户弯折连接部3121使之一部分相对于另一部分转动,从而调节钩状部311、连接部3121、保持部3122在三维空间中的相对位置,进而满足其佩戴需求。在一些实施例中,连接部3121还可以设置有转轴机构31211,用户通过转轴机构31211调节钩状部311、连接部3121、保持部3122在三维空间中的相对位置,进而满足其佩戴需求。
需要说明的是,考虑到耳机300在佩戴方面的稳定性和舒适性,还可以对耳机300(固定结构310)进行多种变化和修改,关于耳机300的更多描述,可以参见申请号为PCT/CN2021/109154的相关申请,其内容通过引用的方式并入本申请中。
在一些实施例中,耳机300可以利用第一麦克风阵列320和处理器330对用户耳道处(例如,目标空间位置)的声场进行估计,并通过扬声器340输出目标信号以降低用户耳道处的环境噪声,从而实现耳机300的主动降噪。在一些实施例中,第一麦克风阵列320可以位于固定结构310的机体部312,使得用户佩戴耳机300时,第一麦克风阵列320可以位于用户耳道的附近位置。第一麦克风阵列320可以拾取用户耳道附近的环境噪声,处理器330可以根据该用户耳道附近的环境噪声,进一步估计出目标空间位置处的环境噪声,例如,用户耳道处的环境噪声。在一些实施例中,扬声器340输出的目标信号也会被第一麦克风阵列320拾取,为了降低扬声器340输出的目标信号对第一麦克风阵列320拾取的环境噪声的影响,第一麦克风阵列320可以位于扬声器340所发出的声音在空间中强度较小甚至最小的区域,例如,耳机300(例如,出声孔和泄压孔)形成的声学偶极子的辐射声场的声学零点位置。关于第一麦克风阵列320的位置的具体内容可以参见本说明书的其他地方,例如,图10-图13及其相关描述。
在一些实施例中,处理器330可以位于固定结构310的钩状部311或机体部312。处理器330与第一麦克风阵列320电连接。处理器330可以基于第一麦克风阵列320拾取的环境噪声对目标空间位置的声场进行估计,并基于目标空间位置的声场估计生成降噪信号。关于处理器330利用第一麦克风阵列320估计目标空间位置的声场的具体内容可以参见本说明书图14-图16,及其相关描述。
在一些实施例中,处理器330也可以用于控制扬声器340的发声。处理器330可以根据用户输入的指令控制扬声器340的发声。或者,处理器330可以根据耳机300的一个或多个组件的信息生成控制扬声器340的指令。在一些实施例中,处理器330可以控制耳机300的其他组件(例如,电池)。在一些实施例中,处理器330可以设置于固定结构310的任意部位。例如,处理器330可以设置于保持部3122。这种情况下,处理器330与设置在保持部3122上的其他部件(例如,扬声器340、按键开关等)之间的走线距离可以缩短,以减少走线之间的信号干扰,降低走线之间发生短路的可能性。
在一些实施例中,扬声器340可以位于机体部312的保持部3122,使得用户佩戴耳机300时,扬声器340可以位于用户耳道的附近位置。扬声器340可以基于处理器330生成的降噪信号输出目标信号。目标信号可以通过保持部3122上的出声孔(未示出)传递至耳机300的外部,用于降低用户耳道处的环境噪声。保持部3122上的出声孔可以位于保持部3122朝向用户耳部的一侧,如此,出声孔可以足够靠近用户的耳道,其发出的声音可以更好地被用户听到。
在一些实施例中,耳机300还可以包括电池350等部件。电池350可以为耳机300的其他部件(如,第一麦克风阵列320、扬声器340等)提供电能。在一些实施例中,第一麦克风阵列320、处理器330、扬声器340以及电池350中任意两者可以通过多种方式通信,例如,有线连接、无线连接等或其组合。在一些实施例中,有线连接可以包括金属电缆、光学电缆或者金属和光学的混合电缆等。以上描述的例子仅作为方便说明之用,有线连接的媒介还可以是其它类型,例如,其它电信号或光信号等的传输载体。无线连接可以包括无线电通信、自由空间光通信、声通讯、电磁感应等。
在一些实施例中,电池350可以设置在钩状部311远离连接部3121的一端,并在耳机300处于佩戴状态时位于用户的耳部的后侧与头部之间。这种设置方式下,可以增加电池350的容量,改善耳机300的续航能力。同时,还可以对耳机300的重量进行均衡,以便于克服保持部3122及其内处理器330、扬声器340等结构的自重从而改善耳机300在佩戴方面的稳定性和舒适度。在一些实施例中,电池350也可以将自身的状态信息传送到处理器330并接收处理器330的指令,执行相应操作。电池350的状态信息可以包括开/关状态、剩余电量、剩余电量使用时间、充电时间等,或其组合。
为了便于描述耳机(例如,耳机300)各部分的相互关系以及耳机与用户的关系,本说明书中建立了一个或多个坐标系。在一些实施例中,可以类似于医学领域定义人体的矢状面(Sagittal Plane)、冠状面(Coronal Plane)和横断面(Horizontal Plane)三个基本切面以及矢状轴(Sagittal Axis)、冠状轴(Coronal Axis)和垂直轴(Vertical Axis)三个基本轴。参见图2-图4中的坐标轴,其中,矢状面是指沿身体前后方向所作的与地面垂直的切面,它将人体分为左右两部分,在本说明书实施例中,矢状面可以是指YZ平面,即,X轴垂直于用户的矢状面;冠状面是指沿身体左右方向所作的与地面垂直的切面,它将人体分为前后两部分,在本说明书实施例中,冠状面可以是指XZ平面,即,Y轴垂直于用户的冠状面;横断面是指沿身体上下方向所作的与地面平行的切面,它将人体分为上下两部分,在本说明书实施例中,横断面可以是指XY平面,即,Z轴垂直于用户的横断面。相应地,矢状轴是指沿身体前后方向垂直通过冠状面的轴,在本说明书实施例中,矢状轴可以是指Y轴;冠状轴是指沿身体左右方向垂直通过矢状面的轴,在本说明书实施例中,冠状轴可以是指X轴;垂直轴是指沿身体上下方向垂直通过水平面的轴,在本说明书实施例中,垂直轴可以是指Z轴。
图5是根据本申请的一些实施例所示的示例性耳机的结构图。图6是根据本申请的一些实施例所示的示例性耳机的佩戴图。
参见图5-图6,在一些实施例中,钩状部311可以靠近保持部3122,以在耳机300处于佩戴状态时,如图6所示,钩状部311背离连接部3121的自由端作用于用户的耳部100的第一侧(后侧)。
在一些实施例中,参见图4-图6,连接部3121与钩状部311连接,连接部3121与钩状部311形成第一连接点C。在从钩状部311与连接部3121之间的第一连接点C到钩状部311的自由端的方向上,钩状部311向耳部100的后侧弯折,并与耳部100的后侧形成第一接触点B,保持部3122与耳部100的第二侧(前侧)形成第二接触点F。其中,在自然状态(也即是非佩戴状态)下,第一接触点B和第二接触点F沿连接部3121的延伸方向的距离小于在佩戴状态下第一接触点B和第二接触点F沿连接部3121的延伸方向的距离,进而为保持部3122提供对耳部100的第二侧(前侧)的压紧力,以及为钩状部311提供对耳部100的第一侧(后侧)的压紧力。也可以理解为,耳 机300在自然状态下第一接触点B和第二接触点F沿连接部3121的延伸方向的距离小于用户的耳部100的厚度,以使得耳机300在佩戴状态下能够像“夹子”一样夹在用户的耳部100。
在一些实施例中,钩状部311还可以沿背离连接部3121的方向延伸,也即是延长钩状部311的整体长度,以在耳机300处于佩戴状态时,钩状部311还可以与耳部100的后侧形成第三接触点A,第一接触点B位于第一连接点C与第三接触点A之间,并靠近第一连接点C。其中,在自然状态下第一接触点B和第三接触点A在垂直于连接部3121的延伸方向的参考平面(如YZ平面)上的投影之间的距离可以小于在佩戴状态下第一接触点B和第三接触点A在垂直于连接部3121的延伸方向的参考平面(如YZ平面)上的投影之间的距离。这种设置方式下,钩状部311的自由端抵压于用户的耳部100的后侧,可以使得第三接触点A位于耳部100靠近耳垂的区域,进而使得钩状部311能够在竖直方向(Z轴方向)上夹持用户的耳部100,以克服保持部3122的自重。在一些实施例中,钩状部311在整体长度得以延长之后,在竖直方向上夹持用户的耳部100的同时,还可以增加钩状部311与用户的耳部100之间的接触面积,也即是增加钩状部311与用户的耳部100之间的摩擦力,进而改善耳机300在佩戴方面的稳定性。
在一些实施例中,在耳机300的钩状部311与保持部3122之间设置连接部3121,使得耳机300处于佩戴状态时连接部3121与钩状部311配合可以为保持部3122提供对耳部的第二侧的压紧力,从而使得耳机300处于佩戴状态时能够牢牢地紧贴于用户的耳部,进而改善耳机300在佩戴方面的稳定性,以及耳机300在发声方面的可靠性。
图7是根据本申请的一些实施例所示的示例性耳机的结构图。图8是根据本申请的一些实施例所示的示例性耳机的佩戴图。
在一些实施例中,图7-图8所示的耳机300与图5-图6所示的耳机300大致相同,区别之处在于钩状部311的弯折方向不同。在一些实施例中,参见图7-图8,在从钩状部311和连接部3121之间的第一连接点C到钩状部311的自由端(远离连接部3121的一端)的方向上,钩状部311向用户的头部弯折,并与头部形成第一接触点B和第三接触点A。其中,第一接触点B位于第三接触点A和第一连接点C之间。如此设置,可以使得钩状部311形成以第一接触点B为支点的杠杆结构。此时,钩状部311的自由端抵压于用户的头部,用户的头部则在第三接触点A处提供指向头部外侧的作用力,该作用力经杠杆结构转化为第一连接点C处的指向头部的作用力,进而经连接部3121为保持部3122提供对耳部100的第二侧的压紧力。
在一些实施例中,用户的头部在第三接触点A处提供指向头部外侧的作用力的大小与钩状部311的自由端在耳机300处于非佩戴状态时与YZ平面之间形成的夹角的大小成正相关。具体地,钩状部311的自由端在耳机300处于非佩戴状态时与YZ平面之间形成的夹角越大,钩状部311的自由端在耳机300处于佩戴状态时能够越好地抵压于用户的头部,用户的头部能够在第三接触点A处提供指向头部外侧的作用力也相应地越大。在一些实施例中,为了使钩状部311的自由端在耳机300处于佩戴状态时能够抵压于用户的头部,并使得用户的头部能够在第三接触点A处提供指向头部外侧的作用力,钩状部311的自由端在耳机300处于非佩戴状态时与YZ平面之间形成的夹角可以大于钩状部311的自由端在耳机300处于佩戴状态时与YZ平面之间形成的夹角。
在一些实施例中,钩状部311的自由端抵压于用户的头部时,除了使得用户的头部在第三接触点A处提供指向头部外侧的作用力之外,还会使得钩状部311至少对耳部100的第一侧形成另一压紧力,并能够与保持部3122对耳部100的第二侧形成的压紧力相互配合,以对用户的耳部100形成“前后夹击”的压紧效果,进而改善耳机300在佩戴方面的稳定性。
需要说明的是,在实际配戴时,由于不同用户的头部、耳部等生理构造存在差异,对耳机300的实际佩戴会有一定的影响,耳机300与用户头部或耳部的接触点(例如,第一接触点B、第二接触点F、第三接触点A等)的位置可以发生相应的变化。
在一些实施例中,扬声器340位于保持部3122时,由于不同用户的头部、耳部等生理构造存在差异会对耳机300的实际佩戴有一定的影响,因此,不同用户佩戴耳机300时,扬声器340与用户耳部的相对位置会发生改变。在一些实施例中,可以通过设置保持部3122的结构,以调整扬声器340在耳机300整体结构上的位置,进而调整扬声器340相对于用户耳道的距离。
图9A是根据本申请的一些实施例所示的示例性耳机的结构图。图9B是根据本申请的一些实施例所示的示例性耳机的结构图。
参见图9A和图9B,可以将保持部3122设计为多段结构,以调节扬声器340在耳机300的整体结构上的相对位置。在一些实施例中,保持部3122为多段结构,可以使得耳机300处于佩戴状态时,在不遮挡耳部的外耳道的同时,又可以使得扬声器340尽可能地靠近外耳道,提高用户使用耳机300时的听觉体验。
参见图9A,在一些实施例中,保持部3122可以包括依次首尾连接的第一保持段3122-1、第二保持段3122-2和第三保持段3122-3。其中,第一保持段3122-1背离第二保持段3122-2的一端与连接部3121连接,第二保持段3122-2相对于第一保持段3122-1回折,使得第二保持段3122-2与第一保持段3122-1之间具有间距。在一些实施例中,第二保持段3122-2与第一保持段3122-1之间可以呈U字型结构。第三保持段3122-3与第二保持段3122-2背离第一保持段3122-1的一端连接,第三保持段3122-3可以用于设置扬声器340等结构件。
在一些实施例中,参见图9A,这种设置方式下,可以通过调整第二保持段3122-2与第一保持段3122-1之间的间距、第二保持段3122-2相对于第一保持段3122-1回折的回折长度(第二保持段3122-2沿Y轴方向的长度)等,以调节第三保持段3122-3在耳机300的整体结构上的位置,从而调整位于第三保持段3122-3的扬声器340相对于用户耳道的位置或距离。在一些实施例中,第二保持段3122-2与第一保持段3122-1之间的间距、第二保持段3122-2相对于第一保持段3122-1回折的回折长度可以根据不同用户的耳部特征(如,形状、大小等)进行相应的设置,在此不做具体限定。
参见图9B,在一些实施例中,保持部3122可以包括依次首尾连接的第一保持段3122-1、第二保持段3122-2和第三保持段3122-3。其中,第一保持段3122-1背离第二保持段3122-2的一端与连接部3121连接,第二保持段3122-2相对于第一保持段3122-1弯折,并使得第三保持段3122-3与第一保持段3122-1之间具有间距。第三保持段3122-3可以用于设置扬声器340等结构件。
在一些实施例中,参见图9B,这种设置方式下,可以通过调整第三保持段3122-3与第一保持段3122-1之间的间距、第二保持段3122-2相对于第一保持段3122-1弯折的弯折长度(第二保持段3122-2沿Z轴方向的长度)等,调节第三保持段3122-3在耳机300的整体结构上的位置,从而调整位于第三保持段3122-3的扬声器340相对于用户耳道的位置或距离。在一些实施例中,第三保持段3122-3与第一保持段3122-1之间的间距、第二保持段3122-2相对于第一保持段3122-1弯折的弯折长度可以根据不同用户的耳部特征(如,形状、大小等)进行相应的设置,在此不做具体限定。
图10是根据本申请的一些实施例所示的示例性耳机朝向耳部一侧的结构图。
在一些实施例中,参见图10,保持部3122朝向耳部的一侧可以设有出声孔301,扬声器340输出的目标信号可以通过出声孔301向用户耳部传递。在一些实施例中,保持部3122朝向耳部的一侧可以包括第一区域3122A和第二区域3122B,第二区域3122B相较于第一区域3122A更远离连接部3121,也即是第二区域3122B可以位于保持部3122远离连接部3121的自由端。在一些实施例中,第一区域3122A和第二区域3122B之间可以平滑过渡。在一些实施例中,第一区域3122A可以设有出声孔301,第二区域3122B相较于第一区域3122A朝向耳部凸起,使得第二区域3122B与耳部接触,以允许出声孔301在佩戴状态下与耳部间隔。
在一些实施例中,保持部3122的自由端可以设置成凸包结构,在保持部3122靠近用户耳部的侧面上,凸包结构相对于该侧面向外(即朝向用户耳部方向)凸起。由于扬声器340能够产生经出声孔301向耳部传输的声音(例如,目标信号),凸包结构可以避免耳部堵住出声孔301而导致扬声器340产生的声音减弱,甚至是无法输出。在一些实施例中,在保持部3122的厚度方向(X轴方向)上,凸包结构的凸起高度可以用第二区域3122B相对于第一区域3122A的最大凸起高度表示。在一些实施例中,第二区域3122B相对于第一区域3122A的最大凸起高度可以大于或者等于1mm。在一些实施例中,在保持部3122的厚度方向上,第二区域3122B相对于第一区域3122A的最大凸起高度可以大于或者等于0.8mm。在一些实施例中,在保持部3122的厚度方向上,第二区域3122B相对于第一区域3122A的最大凸起高度可以大于或者等于0.5mm。
在一些实施例中,通过设置保持部3122的结构,可以使得用户佩戴耳机300时,出声孔301与用户耳道之间的间距小于10毫米。在一些实施例中,通过设置保持部3122的结构,可以使得用户佩戴耳机300时,出声孔301与用户耳道之间的间距小于8毫米。在一些实施例中,通过设置保持部3122的结构,可以使得用户佩戴耳机300时,出声孔301与用户耳道之间的间距小于7毫米。在一些实施例中,通过设置保持部3122的结构,可以使得用户佩戴耳机300时,出声孔301与用户耳道之间的间距小于6毫米。
需要说明的是,如果仅为了出声孔301在佩戴状态下与耳部间隔,那么相较于第一区域3122A朝向耳部凸起的区域也可以位于保持部3122的其他区域,例如出声孔301与连接部3121之间的区域。在一些实施例中,由于耳甲腔和耳甲艇具有一定的深度,并与耳孔连通,出声孔301沿保持部3122厚度方向在耳部上的正投影可以至少部分落在耳甲腔和/或耳甲艇内。仅作为示例性描述,用户佩戴耳机300时,保持部3122可以位于耳孔靠近用户头顶一侧,并与对耳轮接触,此时出声孔301沿保持部3122厚度方向在耳部上的正投影可以至少部分落在耳甲艇内。
图11是根据本申请的一些实施例所示的示例性耳机背离耳部一侧的结构图。图12是根据本申请的一些实施例所示的示例性耳机的俯视图。
参见图11-图12,保持部3122沿垂直轴(Z轴)方向且靠近用户头顶的一侧可以设有泄压孔302,泄压孔302相对于出声孔301更加远离用户耳道。在一些实施例中,泄压孔302的开口方向可以朝向用户头顶,泄压孔302的开口方向与垂直轴(Z轴)之间可以具有特定夹角,以允许泄压孔302更远离用户耳道,进而使得用户难以听到经泄压孔302输出并传递至用户耳部的声音。在一些实施例中,泄压孔302开口方向与垂直轴(Z轴)之间的夹角可以为0°至10°。在一些实施例中,泄压孔302开口方向与垂直轴(Z轴)之间的夹角可以为0°至8°。在一些实施例中,泄压孔302开口方向与垂直轴(Z轴)之间的夹角可以为0°至5°。
在一些实施例中,通过设置保持部3122的结构以及泄压孔302的开口方向与垂直轴(Z轴)之间的夹角的角度,可以使得用户佩戴耳机300时,泄压孔302与用户耳道之间的间距在合适的范围内。在一些实施例中,用户佩戴耳机300时,泄压孔302与用户耳道之间的间距可以为5毫米至20毫米。在一些实施例中,用户佩戴耳机300时,泄压孔302与用户耳道之间的间距可以为5毫米至18毫米。在一些实施例中,用户佩戴耳机300时,泄压孔302与用户耳道之间的间距可以为5毫米至15毫米。在一些实施例中,用户佩戴耳机300时,泄压孔302与用户耳道之间的间距可以为6毫米至14毫米。在一些实施例中,用户佩戴耳机300时,泄压孔302与用户耳道之间的间距可以为8毫米至10毫米。
图13是根据本申请的一些实施例所示的示例性耳机的截面结构示意图。
图13中示出了耳机(例如,耳机300)的保持部(例如保持部3122)形成的声学结构,包括:出声孔301、泄压孔302、调声孔303、前腔304和后腔305。
在一些实施例中,结合图11及图13,保持部3122可以在扬声器340的相背两侧分别形成前腔304和后腔305。前腔304通过出声孔301与耳机300的外部连通,并向耳部输出声音(例如,目标信号、音频信号等)。后腔305通过泄压孔302与耳机300的外部连通,泄压孔302相较于出声孔301更远离用户耳道。在一些实施例中,泄压孔302可以允许空气自由地进出后腔305,以使得前腔304中空气压强的变化能够尽可能地不被后腔305阻滞,进而改善经出声孔301向耳部输出的声音的音质。
在一些实施例中,泄压孔302与出声孔301之间的连线与保持部3122的厚度方向(X轴方向)之间的夹角可以为0°至50°。在一些实施例中,泄压孔302与出声孔301之间的连线与保持部3122的厚度方向之间的夹角可以为5°至45°。在一些实施例中,泄压孔302与出声孔301之间的连线与保持部3122的厚度方向之间的夹角可以为10°至40°。在一些实施例中,泄压孔302与出声孔301之间的连线与保持部3122的厚度方向之间的夹角可以为15°至35°。需要说明的是,泄压孔302与出声孔301之间的连线与保持部3122的厚度方向之间的夹角可以是泄压孔302的中心与出声孔301的中心之间的连线与保持部3122的厚度方向之间的夹角。
在一些实施例中,结合图11和图13,出声孔301和泄压孔302可以看作是两个向外辐射声音的声源,其辐射声音的幅值相同,相位相反。两个声源可以近似构成声学偶极子或类似声学偶极子,因而其向外辐射声音具有明显的指向性,形成一个“8”字形声音辐射区域。在两个声源连线所在的直线方向,两个声源辐射的声音最大,其余方向辐射声音明显减小,两个声源连线的中垂线处辐射的声音最小。即在泄压孔302和出声孔301连线所在的直线方向,泄压孔302和出声孔301辐射的声音最大,其余方向辐射声音明显减小,泄压孔302和出声孔301连线的中垂线处辐射的声音最小。在一些实施例中,泄压孔302和出声孔301形成的声学偶极子,可以降低扬声器340的漏音。
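作为一个简化的数值示意(声速、频率、孔间距与观察距离均为假设值),可以用两个幅值相同、相位相反的点声源近似出声孔301与泄压孔302,验证其辐射声场在两声源连线方向最大、在连线的中垂线方向最小(即声学零点)的指向性:

```python
import numpy as np

c, f = 343.0, 1000.0            # 声速、频率(假设值)
k = 2 * np.pi * f / c
d = 0.01                        # 出声孔与泄压孔的间距(假设 10 mm)
r = 0.5                         # 观察距离

def dipole_pressure(theta):
    # 两个幅值相同、相位相反的点声源, 分别位于 x = ±d/2
    x1, x2 = d / 2, -d / 2
    px, py = r * np.cos(theta), r * np.sin(theta)
    r1 = np.hypot(px - x1, py)
    r2 = np.hypot(px - x2, py)
    return abs(np.exp(1j * k * r1) / r1 - np.exp(1j * k * r2) / r2)

on_axis = dipole_pressure(0.0)          # 两声源连线方向
broadside = dipole_pressure(np.pi / 2)  # 连线的中垂线方向
print(on_axis > 100 * broadside)        # True: 中垂线方向辐射最小(声学零点)
```

该指向性即文中所述的“8”字形辐射区域;将第一麦克风阵列布置在中垂线附近即可最大程度避开扬声器输出的声音。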
在一些实施例中,结合图11和图13,保持部3122还可以设有与后腔305连通的调声孔303,调声孔303可以用于破坏后腔305中声场的高压区,使得后腔305中驻波的波长变短,进而使得经泄压孔302输出至耳机300外部的声音的谐振频率尽可能地高,例如大于4kHz,从而降低扬声器340的漏音。在一些实施例中,调声孔303和泄压孔302可以分别位于扬声器340的相对两侧,例如在Z轴方向上相背设置,最大程度上破坏后腔305中声场的高压区。在一些实施例中,调声孔303相较于泄压孔302可以更远离出声孔301,以尽可能地增大调声孔303与出声孔301之间的距离,进而减弱经调声孔303输出至耳机300外部的声音与经出声孔301向耳部传输的声音之间 的反相相消。
在一些实施例中,扬声器340通过出声孔301和/或泄压孔302输出的目标信号也会被第一麦克风阵列320拾取,而该目标信号会影响处理器330对目标空间位置的声场的估计,即由扬声器340输出的目标信号是不期望被拾取的。这种情况下,为了降低扬声器340输出的目标信号对第一麦克风阵列320的影响,第一麦克风阵列320可以设置在扬声器340所输出声音尽可能小的第一目标区域。在一些实施例中,第一目标区域可以是泄压孔302和出声孔301形成的声学偶极子的辐射声场的声学零点位置或其附近的位置。在一些实施例中,第一目标区域可以是图10中所示的区域G。当用户佩戴耳机300时,区域G位于出声孔301和/或泄压孔302的前方(此处的前方指用户所面朝的方向),即区域G更加靠近用户的眼睛。可选地,区域G可以是固定结构310的连接部3121上的部分区域。也即是,第一麦克风阵列320可以位于连接部3121处。例如,第一麦克风阵列320可以位于连接部3121靠近保持部3122的位置。在一些可替换的实施例中,区域G也可以位于出声孔301和/或泄压孔302的后方(此处的后方指用户所面朝的方向的反方向)。例如,区域G可以位于保持部3122上远离连接部3121的端部。
在一些实施例中,参见图10-图11,为了降低扬声器340输出的目标信号对第一麦克风阵列320的影响,提高耳机300的主动降噪的效果,可以合理的设置第一麦克风阵列320与出声孔301和泄压孔302之间的相对位置。这里所说的第一麦克风阵列320的位置可以是第一麦克风阵列320中任一麦克风所在的位置。在一些实施例中,第一麦克风阵列320和出声孔301之间的连线与出声孔301和泄压孔302之间的连线形成第一夹角,第一麦克风阵列320和泄压孔302之间的连线与出声孔301和泄压孔302之间的连线形成第二夹角。在一些实施例中,第一夹角与第二夹角的差值可以不大于30°。在一些实施例中,第一夹角与第二夹角的差值可以不大于25°。在一些实施例中,第一夹角与第二夹角的差值可以不大于20°。在一些实施例中,第一夹角与第二夹角的差值可以不大于15°。在一些实施例中,第一夹角与第二夹角的差值可以不大于10°。
在一些实施例中,第一麦克风阵列320和出声孔301之间具有第一距离,第一麦克风阵列320和泄压孔302之间具有第二距离。为了保证扬声器340输出的目标信号对第一麦克风阵列320的影响较小,第一距离与第二距离的差值可以不大于6毫米。在一些实施例中,第一距离与第二距离的差值可以不大于5毫米。在一些实施例中,第一距离与第二距离的差值可以不大于4毫米。在一些实施例中,第一距离与第二距离的差值可以不大于3毫米。
可以理解的是,本文所述的第一麦克风阵列320与出声孔301和泄压孔302之间具有的位置关系,可以是指第一麦克风阵列320中任一麦克风与出声孔301的中心和泄压孔302的中心之间的位置关系。例如,第一麦克风阵列320和出声孔301之间的连线与出声孔301和泄压孔302之间的连线形成第一夹角,可以是指第一麦克风阵列320中任一麦克风和出声孔301的中心的连线与出声孔301的中心和泄压孔302的中心的连线形成第一夹角。又例如,第一麦克风阵列320和出声孔301之间具有第一距离,可以是指第一麦克风阵列320中任一麦克风和出声孔301的中心具有第一距离。
在一些实施例中,第一麦克风阵列320位于出声孔301与泄压孔302形成的声学偶极子的声学零点位置,可以使第一麦克风阵列320受到扬声器340输出的目标信号的影响最小,进而使得第一麦克风阵列320可以更加精准的拾取用户耳道附近的环境噪声。进一步地,处理器330可以基于第一麦克风阵列320拾取的环境噪声更加准确地估计用户耳道处的环境噪声并生成降噪信号,从而更好地实现耳机300的主动降噪。关于利用第一麦克风阵列320实现耳机300的主动降噪的具体描述可以参见图14-图16,及其相关描述。
图14是根据本申请的一些实施例所示的耳机的示例性降噪流程图。在一些实施例中,流程1400可以由耳机300执行。如图14所示,流程1400可以包括:
在步骤1410中,拾取环境噪声。在一些实施例中,该步骤可以由第一麦克风阵列320执行。
在一些实施例中,环境噪声可以指用户所处环境中的多种外界声音(例如,交通噪声、工业噪声、建筑施工噪声、社会噪声)的组合。在一些实施例中,第一麦克风阵列320可以位于耳机300的机体部312而靠近用户耳道的附近位置,用于拾取用户耳道附近位置的环境噪声。进一步,第一麦克风阵列320可以将拾取的环境噪声信号转换为电信号并传递至处理器330进行处理。
在步骤1420中,基于拾取的环境噪声估计目标空间位置的噪声。在一些实施例中,该步骤可以由处理器330执行。
在一些实施例中,处理器330可以对拾取的环境噪声进行信号分离。在一些实施例中,第 一麦克风阵列320拾取的环境噪声可以包括各种声音。处理器330可以对第一麦克风阵列320拾取的环境噪声进行信号分析,以分离各种声音。具体地,处理器330可以根据各种声音在空间、时域、频域等不同维度的统计分布特性及结构化特征,自适应调整滤波器的参数,估计环境噪声中各个声音信号的参数信息,并根据各个声音信号的参数信息完成信号分离过程。在一些实施例中,噪声的统计分布特性可以包括概率分布密度、功率谱密度、自相关函数、概率密度函数、方差、数学期望等。在一些实施例中,噪声的结构化特征可以包括噪声分布、噪声强度、全局噪声强度、噪声率等,或其任意组合。全局噪声强度可以指平均噪声强度或加权平均噪声强度。噪声率可以指噪声分布的分散程度。仅作为示例,第一麦克风阵列320拾取的环境噪声可以包括第一信号、第二信号、第三信号。处理器330获取第一信号、第二信号、第三信号在空间(例如,信号所处位置)、时域(例如,延迟)、频域(例如,幅值、相位)的差异,并根据三种维度上的差异将第一信号、第二信号、第三信号分离,得到相对纯净的第一信号、第二信号、第三信号。进一步,处理器330可以根据分离得到的信号的参数信息(例如,频率信息、相位信息、幅值信息)更新环境噪声。例如,处理器330可以根据第一信号的参数信息确定第一信号为用户的通话声音,并从环境噪声中去除第一信号从而更新环境噪声。在一些实施例中,被去除第一信号可以被传输至通话远端。例如,用户佩戴耳机300进行语音通话时,第一信号可以被传输至通话远端。
目标空间位置是基于第一麦克风阵列320确定的位于用户耳道或用户耳道附近的位置。目标空间位置可以指靠近用户耳道(例如,耳孔)特定距离(例如,2mm、3mm、5mm等)的空间位置。在一些实施例中,目标空间位置比第一麦克风阵列320中任一麦克风更加靠近用户耳道。在一些实施例中,目标空间位置与第一麦克风阵列320中各麦克风的数量、相对于用户耳道的分布位置相关,通过调整第一麦克风阵列320中各麦克风的数量和/或相对于用户耳道的分布位置可以对目标空间位置进行调整。在一些实施例中,基于拾取的环境噪声(或更新后的环境噪声)估计目标空间位置的噪声还可以包括确定一个或多个与拾取的环境噪声有关的空间噪声源,基于空间噪声源估计目标空间位置的噪声。第一麦克风阵列320拾取的环境噪声可以是来自不同方位、不同种类的空间噪声源。每一个空间噪声源对应的参数信息(例如,频率信息、相位信息、幅值信息)是不同的。在一些实施例中,处理器330可以根据不同类型的噪声在不同维度(例如,空域、时域、频域等)的统计分布和结构化特征将目标空间位置的噪声进行信号分离提取,从而获取不同类型(例如不同频率、不同相位等)的噪声,并估计每种噪声所对应的参数信息(例如,幅值信息、相位信息等)。在一些实施例中,处理器330还可以将根据目标空间位置处不同类型噪声对应的参数信息确定目标空间位置的噪声的整体参数信息。关于基于一个或多个空间噪声源估计目标空间位置的噪声的更多内容可以参考本说明书其它地方,例如,图15及其相应描述。
在一些实施例中,基于拾取的环境噪声(或更新后的环境噪声)估计目标空间位置的噪声还可以包括基于第一麦克风阵列320构建虚拟麦克风以及基于虚拟麦克风估计目标空间位置的噪声。关于基于虚拟麦克风估计目标空间位置的噪声的更多内容可以参考本说明书其它地方,例如图16及其相应描述。
在步骤1430中,基于目标空间位置的噪声生成降噪信号。在一些实施例中,该步骤可以由处理器330执行。
在一些实施例中,处理器330可以基于步骤1420中获得的目标空间位置的噪声的参数信息(例如,幅值信息、相位信息等)生成降噪信号。在一些实施例中,降噪信号的相位与目标空间位置的噪声的相位的相位差可以小于或等于预设相位阈值。该预设相位阈值可以处于90-180度范围内。该预设相位阈值可以根据用户的需要在该范围内进行调整。例如,当用户不希望被周围环境的声音打扰时,该预设相位阈值可以为较大值,例如180度,即降噪信号的相位与目标空间位置的噪声的相位相反。又例如,当用户希望对周围环境保持敏感时,该预设相位阈值可以为较小值,例如90度。需要注意的是,用户希望接收越多周围环境的声音,该预设相位阈值可以越接近90度,用户希望接收越少周围环境的声音,该预设相位阈值可以越接近180度。在一些实施例中,当降噪信号的相位与目标空间位置的噪声的相位一定的情况下(例如相位相反),目标空间位置的噪声的幅值与该降噪信号的幅值的幅值差可以小于或等于预设幅值阈值。例如,当用户不希望被周围环境的声音打扰时,该预设幅值阈值可以为较小值,例如0dB,即降噪信号的幅值与目标空间位置的噪声的幅值相等。又例如,当用户希望对周围环境保持敏感时,该预设幅值阈值可以为较大值,例如约等于目标空间位置的噪声的幅值。需要注意的是,用户希望接收越多周围环境的声音,该预设幅值阈值可以越接近目标空间位置的噪声的幅值,用户希望接收越少周围环境的声音,该预设幅值阈值可以越接近0dB。
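下面的简短计算(以单位幅值为例,仅作示意)说明残余声压幅值如何随降噪信号与噪声之间的相位差变化:相位差为180度时完全抵消;相位差越接近90度,残余声压越大,即用户可感知到的环境声越多:

```python
import numpy as np

def residual_amplitude(phase_diff_deg):
    # 单位幅值的噪声与降噪信号以给定相位差叠加后的残余幅值
    dphi = np.deg2rad(phase_diff_deg)
    return abs(1 + np.exp(1j * dphi))

for deg in (180, 135, 90):
    print(deg, round(residual_amplitude(deg), 3))
# 输出依次为: 180 0.0, 135 0.765, 90 1.414
```

由 2|cos(Δφ/2)| 的关系可见,预设相位阈值越接近180度抵消越彻底,越接近90度保留的环境声越多,与上文所述一致。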
在一些实施例中,扬声器340可以基于处理器330生成的降噪信号输出目标信号。例如,扬声器340可以基于其振动组件将降噪信号(例如,电信号)转化为目标信号(即振动信号),该目标信号通过耳机300上的出声孔301向用户耳部传递,并在用户耳道处与环境噪声相互抵消。在一些实施例中,目标空间位置的噪声为多个空间噪声源时,扬声器340可以基于降噪信号输出与多个空间噪声源相对应的目标信号。例如,多个空间噪声源包括第一空间噪声源和第二空间噪声源,扬声器340可以输出与第一空间噪声源的噪声相位近似相反、幅值近似相等的第一目标信号以抵消第一空间噪声源的噪声,与第二空间噪声源的噪声相位近似相反、幅值近似相等的第二目标信号以抵消第二空间噪声源的噪声。在一些实施例中,当扬声器340为气导扬声器时,目标信号与环境噪声相抵消的位置可以为目标空间位置。目标空间位置与用户耳道之间的间距较小,目标空间位置的噪声可以近似视为用户耳道位置的噪声,因此,降噪信号与目标空间位置的噪声相互抵消,可以近似为传递至用户耳道的环境噪声被消除,实现耳机300的主动降噪。在一些实施例中,当扬声器340为骨导扬声器时,目标信号与环境噪声相抵消的位置可以为基底膜。目标信号与环境噪声在用户的基底膜被抵消,从而实现耳机300的主动降噪。
在一些实施例中,当耳机300的位置发生变化,例如,佩戴耳机300的用户的头部发生转动时,环境噪声(例如噪声方向、幅值、相位)随之发生变化,耳机300执行降噪的速度难以跟上环境噪声改变的速度,导致耳机300的主动降噪功能减弱。为此,耳机300还可以包括一个或多个传感器,一个或多个传感器可以位于耳机300的任意位置,例如,钩状部311和/或连接部3121和/或保持部3122。一个或多个传感器可以与耳机300的其他部件(例如,处理器330)电连接。在一些实施例中,一个或多个传感器可以用于获取耳机300的物理位置和/或运动信息。仅作为示例,一个或多个传感器可以包括惯性测量单元(Inertial Measurement Unit,IMU)、全球定位系统(Global Position System,GPS)、雷达等。运动信息可以包括运动轨迹、运动方向、运动速度、运动加速度、运动角速度、运动相关的时间信息(例如运动开始时间,结束时间)等,或其任意组合。以IMU为例,IMU可以包括微电子机械系统(Micro electro Mechanical System,MEMS)。该微电子机械系统可以包括多轴加速度计、陀螺仪、磁力计等,或其任意组合。IMU可以用于检测耳机300的物理位置和/或运动信息,以启用基于物理位置和/或运动信息对耳机300的控制。
在一些实施例中,处理器330可以基于耳机300的一个或多个传感器获取的耳机300的运动信息(例如,运动轨迹、运动方向、运动速度、运动加速度、运动角速度、运动相关的时间信息)更新目标空间位置的噪声和目标空间位置的声场估计。进一步,基于更新后的目标空间位置的噪声和目标空间位置的声场估计,处理器330可以生成降噪信号。一个或多个传感器可以记录耳机300的运动信息,进而处理器330可以对降噪信号进行快速的更新,这可以提高耳机300的噪声跟踪性能,使得降噪信号可以更加精准的消除环境噪声,进一步提高降噪效果和用户的听觉体验。
应当注意的是,上述有关流程1400的描述仅仅是为了示例和说明,而不限定本申请的适用范围。对于本领域技术人员来说,在本申请的指导下可以对流程1400进行各种修正和改变。例如,还可以增加、省略或合并流程1400中的步骤。这些修正和改变仍在本申请的范围之内。
图15是根据本申请的一些实施例所示的估计目标空间位置的噪声的示例性流程图。如图15所示,流程1500可以包括:
在步骤1510中,确定一个或多个与第一麦克风阵列320拾取的环境噪声有关的空间噪声源。在一些实施例中,该步骤可以由处理器330执行。如本文中所述,确定空间噪声源指的是确定空间噪声源相关信息,例如,空间噪声源的位置(包括空间噪声源的方位、空间噪声源与目标空间位置的距离等)、空间噪声源的相位以及空间噪声源的幅值等。
在一些实施例中,与环境噪声有关的空间噪声源是指其声波可传递至用户耳道处(例如,目标空间位置)或靠近用户耳道处的噪声源。在一些实施例中,空间噪声源可以为用户身体不同方向(例如,前方、后方等)的噪声源。例如,用户身体前方存在人群喧闹噪声、用户身体左方存在车辆鸣笛噪声,这种情况下,空间噪声源包括用户身体前方的人群喧闹噪声源和用户身体左方的车辆鸣笛噪声源。在一些实施例中,第一麦克风阵列320可以拾取用户身体各个方向的空间噪声,并将空间噪声转化为电信号传递至处理器330,处理器330可以将空间噪声对应的电信号进行分析,得到所述拾取的各个方向的空间噪声的参数信息(例如,频率信息、幅值信息、相位信息等)。处理器330根据各个方向的空间噪声的参数信息确定各个方向的空间噪声源的信息,例如,空间噪声源的方位、空间噪声源的距离、空间噪声源的相位以及空间噪声源的幅值等。在一些实施例中,处理器330可以基于第一麦克风阵列320拾取的空间噪声通过噪声定位算法确定空间噪声源。噪声定位算法可以包括波束形成算法、超分辨空间谱估计算法、到达时差算法(也可以称为 时延估计算法)等中的一种或多种。
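仅作为示意,上述到达时差算法的核心是利用两个麦克风信号的互相关峰值估计时延,下例用合成的宽带噪声进行演示(采样率与时延均为假设值,实际实现还需结合麦克风间距与声速换算出噪声源方位):

```python
import numpy as np

fs = 8000
rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)          # 0.5 秒宽带环境噪声
true_delay = 25                            # 两麦克风间的到达时差(样本数, 假设值)

mic1 = noise
mic2 = np.concatenate([np.zeros(true_delay), noise])[: len(noise)]  # 延迟副本

# 互相关峰值所在的滞后即为到达时差的估计
corr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(noise) - 1)
print(lag)                                 # 25
```

估计出的样本数时延除以采样率即得到时间差,再结合阵列几何关系即可求解空间噪声源的方位。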
在一些实施例中,处理器330可以将拾取的环境噪声按照特定的频带宽度(例如,每500Hz作为一个频带)划分为多个频带,每个频带可以分别对应不同的频率范围,并在至少一个频带上确定与该频带对应的空间噪声源。例如,处理器330可以对环境噪声划分的频带进行信号分析,得到每个频带对应的环境噪声的参数信息,并根据参数信息确定与每个频带对应的空间噪声源。
在步骤1520中,基于空间噪声源,估计目标空间位置的噪声。在一些实施例中,该步骤可以由处理器330执行。如本文中所述,估计目标空间位置的噪声指的是估计目标空间位置处的噪声的参数信息,例如,频率信息、幅值信息、相位信息等。
在一些实施例中,处理器330可以基于步骤1510中得到的位于用户身体各个方向的空间噪声源的参数信息(例如,频率信息、幅值信息、相位信息等),估计各个空间噪声源分别传递至目标空间位置的噪声的参数信息,从而估计出目标空间位置的噪声。例如,用户身体第一方位(例如,前方)和第二方位(例如,后方)分别有一个空间噪声源,处理器330可以根据第一方位空间噪声源的位置信息、频率信息、相位信息或幅值信息,估计第一方位空间噪声源的噪声传递到目标空间位置时,第一方位空间噪声源的频率信息、相位信息或幅值信息。处理器330可以根据第二方位空间噪声源的位置信息、频率信息、相位信息或幅值信息,估计第二方位空间噪声源的噪声传递到目标空间位置时,第二方位空间噪声源的频率信息、相位信息或幅值信息。进一步,处理器330可以基于第一方位空间噪声源和第二方位空间噪声源的频率信息、相位信息或幅值信息,估计目标空间位置的噪声信息,从而估计目标空间位置的噪声的信息。仅作为示例,处理器330可以利用虚拟传声器技术或其他方法估计目标空间位置的噪声信息。在一些实施例中,处理器330可以通过特征提取的方法从麦克风阵列拾取的空间噪声源的频率响应曲线提取空间噪声源的噪声的参数信息。在一些实施例中,提取空间噪声源的噪声的参数信息的方法可以包括但不限于主成分分析(Principal Components Analysis,PCA)、独立成分分析(Independent Component Algorithm,ICA)、线性判别分析(Linear Discriminant Analysis,LDA)、奇异值分解(Singular Value Decomposition,SVD)等。
应当注意的是,上述有关流程1500的描述仅仅是为了示例和说明,而不限定本申请的适用范围。对于本领域技术人员来说,在本申请的指导下可以对流程1500进行各种修正和改变。例如,流程1500还可以包括对空间噪声源进行定位,提取空间噪声源的噪声的参数信息等步骤。这些修正和改变仍在本申请的范围之内。
图16是根据本申请的一些实施例所示的估计目标空间位置的声场和噪声示例性流程图。如图16所示,流程1600可以包括:
在步骤1610中,基于第一麦克风阵列320构建虚拟麦克风。在一些实施例中,该步骤可以由处理器330执行。
在一些实施例中,虚拟麦克风可以用于表示或模拟若目标空间位置处设置麦克风后所述麦克风采集的音频数据。即通过虚拟麦克风得到的音频数据可以近似或等效为若目标空间位置处放置物理麦克风后该物理麦克风所采集的音频数据。
在一些实施例中,虚拟麦克风可以包括数学模型。该数学模型可以体现目标空间位置的噪声或声场估计与麦克风阵列(例如,第一麦克风阵列320)拾取的环境噪声的参数信息(例如,频率信息、幅值信息、相位信息等)和麦克风阵列的参数之间的关系。麦克风阵列的参数可以包括麦克风阵列的排布方式、各个麦克风之间的间距、麦克风阵列中麦克风的数量和位置等中的一种或多种。该数学模型可以基于初始数学模型以及麦克风阵列的参数和麦克风阵列拾取的声音(例如环境噪声)的参数信息(例如,频率信息、幅值信息、相位信息等)通过计算获得。例如,初始数学模型可以包括对应麦克风阵列的参数和麦克风阵列拾取的环境噪声的参数信息的参数以及模型参数。将麦克风阵列的参数和麦克风阵列拾取的声音的参数信息和模型参数的初始值带入初始数学模型获得预测的目标空间位置的噪声或声场。然后将该预测噪声或声场与目标空间位置处设置的物理麦克风获得的数据(噪声和声场估计)进行比较以对数学模型的模型参数进行调整。基于上述调整方法,通过大量数据(例如,麦克风阵列的参数和麦克风阵列拾取的环境噪声的参数信息),多次调整,从而获得该数学模型。
在一些实施例中,虚拟麦克风可以包括机器学习模型。该机器学习模型可以基于麦克风阵列的参数和麦克风阵列拾取的声音(例如,环境噪声)的参数信息(例如,频率信息、幅值信息、相位信息等)通过训练获得。例如,将麦克风阵列的参数和麦克风阵列拾取的声音的参数信息作为训练样本对初始机器学习模型(例如,神经网络模型)进行训练获得该机器学习模型。具 体的,可以将麦克风阵列的参数和麦克风阵列拾取的声音的参数信息输入初始机器学习模型,并获得预测结果(例如,目标空间位置的噪声和声场估计)。然后,将该预测结果与目标空间位置处设置的物理麦克风获得的数据(噪声和声场估计)进行比较以对初始机器学习模型的参数进行调整。基于上述调整方法通过大量数据(例如,麦克风阵列的参数和麦克风阵列拾取的环境噪声的参数信息),经过多次迭代,优化初始机器学习模型的参数,直至初始机器学习模型的预测结果与目标空间位置处设置的物理麦克风获得的数据相同或近似相同时,获得机器学习模型。
虚拟麦克风技术可以将物理麦克风从难以放置麦克风的位置(例如,目标空间位置)移开。例如,为了实现开放用户双耳不堵塞用户耳道的目的,物理麦克风不能设置于用户耳孔的位置(例如,目标空间位置)。此时,可以通过虚拟麦克风技术将麦克风阵列设置于靠近用户耳朵且不堵塞耳道的位置,然后通过麦克风阵列构建处于用户耳孔的位置的虚拟麦克风。虚拟麦克风可以利用处于第一位置物理麦克风(例如,第一麦克风阵列320)来预测处于第二位置(例如,目标空间位置)的声音数据(例如,幅值、相位、声压、声场等)。在一些实施例中,虚拟麦克风预测得到的第二位置(也可以称为特定位置,例如目标空间位置)的声音数据可以根据虚拟麦克风与物理麦克风(第一麦克风阵列320)之间的距离、虚拟麦克风的类型(例如,数学模型虚拟麦克风、机器学习虚拟麦克风)等调整。例如,虚拟麦克风与物理麦克风之间的距离越近,虚拟麦克风预测得到的第二位置的声音数据越准确。又例如,在一些特定应用场景中,机器学习虚拟麦克风预测得到的第二位置的声音数据比数学模型虚拟麦克风的更准确。在一些实施例中,虚拟麦克风对应的位置(即第二位置,例如目标空间位置)可以在第一麦克风阵列320的附近,也可以远离第一麦克风阵列320。
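仅作为对上述“数学模型”形式的虚拟麦克风的一个极简示意(阵列通道数、滤波器阶数与各传播路径系数均为假设的玩具数值):标定阶段用目标空间位置处实测的信号,以最小二乘法学习一个从物理麦克风阵列信号到目标位置信号的线性模型;此后即可仅凭阵列信号预测目标位置的声音:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, L = 8000, 8                          # 采样率、每个麦克风的 FIR 抽头数(假设值)
x = rng.standard_normal(2 * fs)          # 标定用环境噪声

# 玩具场景: 两个物理麦克风信号与目标空间位置处的"真实"信号(各路径系数为假设值)
mic1 = np.convolve(x, [0.9, 0.1], mode="same")
mic2 = np.convolve(x, [0.2, 0.7], mode="same")
target = np.convolve(x, [0.5, 0.45], mode="same")   # 标定时在目标位置实测

def frames(sig, L):
    # 每行是 sig 的 L 个连续历史样本
    return np.stack([sig[i : len(sig) - L + i + 1] for i in range(L)], axis=1)

A = np.hstack([frames(mic1, L), frames(mic2, L)])
y = target[L - 1 :]
w, *_ = np.linalg.lstsq(A, y, rcond=None)           # "虚拟麦克风"的线性模型

pred = A @ w
err = np.sqrt(np.mean((pred - y) ** 2)) / np.sqrt(np.mean(y ** 2))
print(err < 0.05)   # True: 模型能从阵列信号重建目标位置的信号
```

实际系统中该模型(或文中所述的机器学习模型)需用大量标定数据训练,此处仅演示“由第一位置的物理麦克风预测第二位置声音数据”的基本思路。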
在步骤1620中,基于虚拟麦克风估计目标空间位置的噪声和声场。在一些实施例中,该步骤可以由处理器330执行。
在一些实施例中,当虚拟麦克风为数学模型时,处理器330可以实时将第一麦克风阵列(例如,第一麦克风阵列320)拾取的环境噪声的参数信息(例如,频率信息、幅值信息、相位信息等)和第一麦克风阵列的参数(例如,第一麦克风阵列的排布方式、各个麦克风之间的间距、第一麦克风阵列中麦克风的数量)作为数学模型的参数输入数学模型以估计目标空间位置的噪声和声场。
在一些实施例中,当虚拟麦克风为机器学习模型时,处理器330可以实时将第一麦克风阵列拾取的环境噪声的参数信息(例如,频率信息、幅值信息、相位信息等)和第一麦克风阵列的参数(例如,第一麦克风阵列的排布方式、各个麦克风之间的间距、第一麦克风阵列中麦克风的数量)输入机器学习模型并基于机器学习模型的输出估计目标空间位置的噪声和声场。
应当注意的是,上述有关流程1600的描述仅仅是为了示例和说明,而不限定本申请的适用范围。对于本领域技术人员来说,在本申请的指导下可以对流程1600进行各种修正和改变。例如,步骤1620可以被分为两个步骤以分别估计目标空间位置的噪声和声场。这些修正和改变仍在本申请的范围之内。
在一些实施例中,扬声器340基于降噪信号输出目标信号,该目标信号与环境噪声相抵消后,用户耳道附近可能仍会存在一部分未相互抵消掉的声音信号,这些未抵消掉的声音信号可以是残余的环境噪声和/或残余的目标信号,因此用户耳道处仍存在一定的噪声。基于此,在一些实施例中,图1所示的耳机100、图3-图12所示的耳机300还可以包括第二麦克风360。第二麦克风360可以位于机体部312(如保持部3122)。第二麦克风360可以被配置为拾取环境噪声和目标信号。
在一些实施例中,第二麦克风360的数量可以为一个或多个。当第二麦克风360的数量为一个时,该第二麦克风可以用于拾取用户耳道处的环境噪声和目标信号,以监测目标信号与环境噪声抵消后用户耳道处的声场。当第二麦克风360的数量为多个时,多个第二麦克风可以用于拾取用户耳道处的环境噪声和目标信号,多个麦克风拾取的用户耳道处的声音信号的相关参数信息可以以平均或加权算法等方式对用户耳道处的噪声进行估计。在一些实施例中,当第二麦克风360的数量为多个时,多个麦克风中的部分麦克风可以用于拾取用户耳道处的环境噪声和目标信号,其余麦克风可以作为第一麦克风阵列320中的麦克风使用,此时,第一麦克风阵列320中的麦克风与第二麦克风360中的麦克风出现重叠或交叉。
在一些实施例中,参见图10,第二麦克风360可以设置于第二目标区域,第二目标区域可以是保持部3122上靠近用户耳道的区域。在一些实施例中,第二目标区域可以是图10中的区域H。区域H可以是保持部3122上靠近用户耳道的部分区域。也即是,第二麦克风360可以位于保持部3122。例如,区域H可以是保持部3122朝向用户耳部一侧的第一区域3122A中的部分区域。通过将第二麦克风360设置于第二目标区域H,可以使得第二麦克风360位于用户耳道附近且相对于第一麦克风阵列320更加靠近用户耳道,进而保证第二麦克风360拾取的声音信号(例如,残余的环境噪声、残余的目标信号等)更加接近用户听到的声音,处理器330进一步根据第二麦克风360拾取的声音信号更新降噪信号,从而达到更理想的降噪效果。
在一些实施例中,为了保证第二麦克风360能够更加精准的拾取用户耳道处残余的环境噪声,可以通过调整第二麦克风360在保持部3122上的位置,使得第二麦克风360与用户耳道之间的距离在合适的范围内。在一些实施例中,用户佩戴耳机300时,第二麦克风360与用户耳道之间的距离可以小于10毫米。在一些实施例中,用户佩戴耳机300时,第二麦克风360与用户耳道之间的距离可以小于9毫米。在一些实施例中,用户佩戴耳机300时,第二麦克风360与用户耳道之间的距离可以小于8毫米。在一些实施例中,用户佩戴耳机300时,第二麦克风360与用户耳道之间的距离可以小于7毫米。
在一些实施例中,第二麦克风360需要拾取扬声器340通过出声孔301输出的目标信号与环境噪声抵消后残余的目标信号。为了保证第二麦克风360能够更加精准的拾取残余的目标信号,可以合理的设置第二麦克风360与出声孔301之间的距离。在一些实施例中,在用户的矢状面(YZ平面)上,第二麦克风360与出声孔301沿矢状轴(Y轴)方向的距离可以小于10毫米。在一些实施例中,在用户的矢状面(YZ平面)上,第二麦克风360与出声孔301沿矢状轴(Y轴)方向的距离可以小于9毫米。在一些实施例中,在用户的矢状面(YZ平面)上,第二麦克风360与出声孔301沿矢状轴(Y轴)方向的距离可以小于8毫米。在一些实施例中,在用户的矢状面(YZ平面)上,第二麦克风360与出声孔301沿矢状轴(Y轴)方向的距离可以小于7毫米。
在一些实施例中,在用户的矢状面上,第二麦克风360与出声孔301沿垂直轴(Z轴)方向的距离可以为3毫米至6毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与出声孔301沿垂直轴(Z轴)方向的距离可以为2.5毫米至5.5毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与出声孔301沿垂直轴(Z轴)方向的距离可以为3毫米至5毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与出声孔301沿垂直轴(Z轴)方向的距离可以为3.5毫米至4.5毫米。
在一些实施例中,为了保证耳机300的主动降噪效果,在用户的矢状面上,第二麦克风360与第一麦克风阵列320沿垂直轴(Z轴)方向的距离可以为2毫米至8毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与第一麦克风阵列320沿垂直轴(Z轴)方向的距离可以为3毫米至7毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与第一麦克风阵列320沿垂直轴(Z轴)方向的距离可以为4毫米至6毫米。
在一些实施例中,在用户的矢状面上,第二麦克风360与第一麦克风阵列320沿矢状轴(Y轴)方向的距离可以为2毫米至20毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与第一麦克风阵列320沿矢状轴(Y轴)方向的距离可以为4毫米至18毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与第一麦克风阵列320沿矢状轴(Y轴)方向的距离可以为5毫米至15毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与第一麦克风阵列320沿矢状轴(Y轴)方向的距离可以为6毫米至12毫米。在一些实施例中,在用户的矢状面上,第二麦克风360与第一麦克风阵列320沿矢状轴(Y轴)方向的距离可以为8毫米至10毫米。
在一些实施例中,在用户的横断面(XY平面)上,第二麦克风360与第一麦克风阵列320沿冠状轴(X轴)方向的距离可以小于3毫米。在一些实施例中,在用户的横断面(XY平面)上,第二麦克风360与第一麦克风阵列320沿冠状轴(X轴)方向的距离可以小于2.5毫米。在一些实施例中,在用户的横断面(XY平面)上,第二麦克风360与第一麦克风阵列320沿冠状轴(X轴)方向的距离可以小于2毫米。可以理解的是,上述第二麦克风360与第一麦克风阵列320之间的距离可以是第二麦克风360与第一麦克风阵列320中任一麦克风之间的距离。
在一些实施例中,第二麦克风360被配置为拾取环境噪声和目标信号,进一步地,处理器330可以基于第二麦克风360拾取的声音信号更新降噪信号,从而进一步提高耳机300的主动降噪效果。关于利用第二麦克风360更新降噪信号的具体描述可以参见图17,及其相关描述。
图17是根据本申请的一些实施例所示的更新降噪信号的示例性流程图。如图17所示,流程1700可以包括:
在步骤1710中,基于第二麦克风360拾取的声音信号,对用户耳道处的声场进行估计。
在一些实施例中,该步骤可以由处理器330执行。在一些实施例中,第二麦克风360拾取的声音信号包括环境噪声和扬声器340输出的目标信号。在一些实施例中,环境噪声和扬声器340输出的目标信号相抵消后,用户耳道附近可能仍会存在一部分未相互抵消掉的声音信号,这些未抵消掉的声音信号可以是残余的环境噪声和/或残余的目标信号,因此环境噪声和目标信号抵消后用户耳道处仍存在一定的噪声。处理器330可以对第二麦克风360拾取的声音信号(例如,环境噪声、目标信号)进行处理,得到用户耳道处的声场的参数信息,例如,频率信息、幅值信息和相位信息等,从而实现对用户耳道处的声场估计。
在步骤1720中,根据用户耳道处的声场,更新所述降噪信号。
在一些实施例中,步骤1720可以由处理器330执行。在一些实施例中,处理器330可以根据步骤1710中得到的用户耳道处的声场的参数信息,调整降噪信号的参数信息(例如,频率信息、幅值信息和/或相位信息),使得更新后降噪信号的幅值信息、频率信息与用户耳道处的环境噪声的幅值信息、频率信息更加吻合,且更新后降噪信号的相位信息与用户耳道处的环境噪声的反相位信息更加吻合,从而使得更新后降噪信号可以更加精准的消除环境噪声。
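上述根据残余声场更新降噪信号的过程,可以用一个单系数LMS自适应更新的玩具示例来示意(参考信号频率、实际噪声幅值与步长均为假设值,且假设扬声器到耳道的次级路径为单位传递;实际系统还需考虑次级路径的传递函数):

```python
import numpy as np

fs = 8000
t = np.arange(2 * fs) / fs
ref = np.sin(2 * np.pi * 150 * t)      # 由第一麦克风阵列估计出的耳道处噪声(参考)
noise = 0.8 * ref                      # 耳道处的实际噪声, 其幅值对系统而言未知

w, mu = 0.0, 0.05                      # 降噪信号的自适应增益与步长(假设值)
for n in range(len(t)):
    anti = -w * ref[n]                 # 扬声器输出的目标信号
    e = noise[n] + anti                # 第二麦克风拾取到的残余信号
    w += mu * e * ref[n]               # 用残余误差更新降噪信号的增益

print(round(w, 3))                     # 0.8: 增益收敛到实际噪声幅值, 残余趋近于零
```

可见即使初始降噪信号与实际噪声不匹配,利用第二麦克风监测到的残余信号也能逐步把降噪信号修正到与耳道处噪声吻合。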
应当注意的是,上述有关流程1700的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程1700进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。例如,拾取用户耳道处的声场的麦克风不限于第二麦克风360,还可以包括其它麦克风,例如第三麦克风、第四麦克风等,可以将多个麦克风拾取的用户耳道处的声场的相关参数信息以平均或加权算法等方式对用户耳道处的声场进行估计。
在一些实施例中,为了更加精准地获取用户耳道处的声场,第二麦克风360可以包括一个比第一麦克风阵列320中任意麦克风更加靠近用户耳道的麦克风。在一些实施例中,第一麦克风阵列320拾取的声音信号是环境噪声,第二麦克风360拾取的声音信号是环境噪声和目标信号。在一些实施例中,处理器330可以根据第二麦克风360拾取的声音信号对用户耳道处的声场进行估计,以更新降噪信号。第二麦克风360需要对降噪信号与环境噪声抵消后用户耳道处的声场进行监测,第二麦克风360包括一个比第一麦克风阵列320中任意麦克风更加靠近用户耳道的麦克风,可以更加准确地表征用户听到的声音信号;通过第二麦克风360对用户耳道处的声场进行估计以更新降噪信号,可以进一步提高降噪效果和用户的听觉体验。
在一些实施例中,耳机300也可以不包括上述第一麦克风阵列,而仅利用第二麦克风360进行主动降噪。此时,处理器330可以将第二麦克风360拾取的环境噪声当做用户耳道处的噪声并以此生成反馈信号来调整降噪信号,以抵消或降低用户耳道处的环境噪声。又例如,第二麦克风360的数量为多个时,多个麦克风中的部分麦克风可以用于拾取用户耳道附近的环境噪声,其余麦克风用于拾取用户耳道处的环境噪声和目标信号,使得处理器330可以根据目标信号与环境噪声抵消后用户耳道处的声音信号更新降噪信号,进一步提高耳机300的主动降噪效果。
图18是根据本申请的一些实施例所示的耳机的示例性降噪流程图。如图18所示,流程1800可以包括:
在步骤1810中,将拾取的环境噪声划分为多个频带,所述多个频带对应不同的频率范围。
在一些实施例中,该步骤可以由处理器330执行。麦克风阵列(如第一麦克风阵列320)拾取的环境噪声包含不同的频率成分。在一些实施例中,处理器330在对环境噪声信号进行处理时,可以将环境噪声频带划分为多个频带,每个频带对应不同的频率范围。这里每个频带对应的频率范围可以是预先设定好的频率范围,例如,20-100Hz、100Hz-1000Hz、3000Hz-6000Hz、9000Hz-20000Hz等。
在步骤1820中,基于所述多个频带中的至少一个,生成与所述至少一个频带中的每一个对应的降噪信号。
在一些实施例中,该步骤可以由处理器330执行。处理器330可以对环境噪声划分的频带进行分析,得到每个频带对应的环境噪声的参数信息(如,频率信息、幅值信息、相位信息等)。处理器330根据参数信息生成与至少一个频带中的每一个对应的降噪信号。例如,在20Hz-100Hz这个频带上,处理器330可以基于频带20Hz-100Hz对应的环境噪声的参数信息(例如,频率信息、幅值信息、相位信息等)生成与频带20Hz-100Hz对应的降噪信号。进一步地,扬声器340基于频带20Hz-100Hz的降噪信号输出目标信号。例如,扬声器340可以输出与频带20Hz-100Hz的噪声相位近似相反、幅值近似相等的目标信号以抵消该频带的噪声。
在一些实施例中,基于所述多个频带中的至少一个,生成与所述至少一个频带中的每一个对应的降噪信号可以包括获取多个频带对应的声压级,以及基于多个频带对应的声压级和多个频带对应的频率范围,仅生成与部分频带对应的降噪信号。在一些实施例中,麦克风阵列(如第一麦 克风阵列320)拾取的不同频段的环境噪声的声压级可以是不同的。处理器330对环境噪声划分的频带进行分析,可以得到每个频带对应的声压级。在一些实施例中,考虑到开放式耳机(例如,耳机300)结构上的差异性,以及由于用户耳部结构差异导致耳机佩戴位置不同而导致的传递函数的变化,耳机300可以选择环境噪声频带中的部分频带进行主动降噪。处理器330基于多个频带的声压级和频率范围,仅生成与部分频带对应的降噪信号。例如,当环境噪声中的低频(例如,20Hz-100Hz)噪声较大(例如,声压级大于60dB)时,开放式耳机可能无法发出足够大的降噪信号以抵消该低频噪声。这种情况下,处理器330可以仅生成与环境噪声频带中频率较高的部分频带(例如,100Hz-1000Hz、3000Hz-6000Hz)对应的降噪信号。又例如,由于用户耳部结构差异引起的耳机佩戴位置不同会导致传递函数的变化,会使得开放式耳机难以对高频信号(例如,大于2000Hz)的环境噪声进行主动降噪,这种情况下,处理器330可以仅生成与环境噪声频带中频率较低的部分频带(例如,20Hz-100Hz)对应的降噪信号。
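仅作为示意,下例用FFT把合成的环境噪声划分为若干频带,计算各频带能量,并仅对能量超过阈值的频带生成反相的降噪分量(频带划分、阈值与噪声成分均为假设值,能量此处以均方根近似声压级):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# 假设环境噪声由较强的 50 Hz 低频成分与较弱的 400 Hz 成分构成
noise = 2.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)

bands = [(20, 100), (100, 1000), (3000, 6000)]   # 频带划分(假设值)
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), 1 / fs)

def band_component(lo, hi):
    # 取出 [lo, hi) 频带内的噪声分量
    s = np.zeros_like(spec)
    mask = (freqs >= lo) & (freqs < hi)
    s[mask] = spec[mask]
    return np.fft.irfft(s, len(noise))

# 仅对能量超过阈值的频带生成反相降噪分量
anti = np.zeros_like(noise)
for lo, hi in bands:
    comp = band_component(lo, hi)
    if np.sqrt(np.mean(comp ** 2)) > 0.1:        # 阈值为假设值
        anti -= comp

residual = noise + anti
print(float(np.sqrt(np.mean(residual ** 2))) < 1e-6)   # True
```

在实际的开放式耳机中,可按文中所述把选带条件替换为“仅保留扬声器能够有效抵消的频率范围”,其余频带不生成降噪分量。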
应当注意的是,上述有关流程1800的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程1800进行各种修正和改变。例如,将步骤1810和步骤1820进行合并。又例如,在流程1800中增加其他步骤。然而,这些修正和改变仍在本说明书的范围之内。
图19是根据本申请的一些实施例所示的估计目标空间位置的噪声的示例性流程图。如图19所示,流程1900可以包括:
在步骤1910中,从拾取的环境噪声中去除与骨导麦克风拾取的信号相关联的成分,以便更新环境噪声。
在一些实施例中,该步骤可以由处理器330执行。在一些实施例中,麦克风阵列(例如,第一麦克风阵列320)在拾取环境噪声时,用户自身的说话声音也会被麦克风阵列拾取,即,用户自身说话的声音也被视为环境噪声的一部分。这种情况下,扬声器(例如,扬声器340)输出的目标信号会将用户自身说话的声音抵消。在一些实施例中,特定场景下,用户自身说话的声音需要被保留,例如,用户进行语音通话、发送语音消息等场景中。在一些实施例中,耳机(例如耳机300)可以包括骨导麦克风,用户佩戴耳机进行语音通话或录制语音信息时,骨导麦克风可以通过拾取用户说话时面部骨骼或肌肉产生的振动信号来拾取用户说话的声音信号,并传递至处理器330。处理器330获取来自骨导麦克风拾取的声音信号的参数信息,并从麦克风阵列拾取的环境噪声中去除与骨导麦克风拾取的声音信号相关联的声音信号成分。处理器330根据剩余的环境噪声的参数信息更新环境噪声。更新后的环境噪声中不再包含用户自身说话的声音信号,即在用户进行语音通话时用户可以听到用户自身说话的声音信号。
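从拾取的环境噪声中去除与骨导麦克风信号相关联的成分,可以借助以骨导信号为参考的自适应滤波来示意实现(以下为基于NLMS算法的一个简化仿真,各信号、空气传播路径系数、滤波器阶数与步长均为假设值,并非本申请的具体实现):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20000
ambient = rng.standard_normal(N)              # 真正的环境噪声
bone = rng.standard_normal(N)                 # 骨导麦克风拾取的说话振动信号(参考)
speech_air = np.convolve(bone, [0.6, 0.3], mode="full")[:N]  # 说话声经空气路径到达麦克风阵列
picked = ambient + speech_air                 # 麦克风阵列实际拾取到的信号

# NLMS 自适应滤波: 以骨导信号为参考, 去除拾取信号中与之相关联的成分
L_taps, mu, eps = 4, 0.2, 1e-8
w = np.zeros(L_taps)
out = np.zeros(N)
for n in range(L_taps, N):
    x = bone[n - L_taps + 1 : n + 1][::-1]    # 参考信号的最近 L_taps 个样本
    e = picked[n] - w @ x                     # e 即为更新后的环境噪声样本
    w += mu * e * x / (x @ x + eps)
    out[n] = e

# 收敛后, 残余信号与说话声分量的相关性显著降低
tail = slice(N // 2, N)
corr_before = abs(np.corrcoef(picked[tail], speech_air[tail])[0, 1])
corr_after = abs(np.corrcoef(out[tail], speech_air[tail])[0, 1])
print(corr_after < corr_before)               # True
```

更新后的信号 out 近似只含真正的环境噪声,与文中所述“更新后的环境噪声中不再包含用户自身说话的声音信号”相对应。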
在步骤1920中,根据更新后的环境噪声估计目标空间位置的噪声。
在一些实施例中,该步骤可以由处理器330执行。可以以与步骤1420类似的方式来执行步骤1920,并且在此不再重复相关的描述。
应当注意的是,上述有关流程1900的描述仅仅是为了示例和说明,而不限定本申请的适用范围。对于本领域技术人员来说,在本申请的指导下可以对流程1900进行各种修正和改变。例如,还可以对骨导麦克风拾取的信号相关联的成分进行预处理,并将骨导麦克风拾取的信号作为音频信号传输至终端设备。这些修正和改变仍在本申请的范围之内。
在一些实施例中,还可以根据用户的手动输入更新降噪信号。例如,在一些实施例中,不同用户由于耳部结构的差异或耳机300的佩戴状态不同,会使得耳机300的主动降噪的效果不同,造成听觉体验效果不理想。此时,用户可以根据自身的听觉效果手动调整降噪信号的参数信息(例如,频率信息、相位信息或者幅值信息),从而匹配不同用户佩戴耳机300的佩戴位置,提高耳机300的主动降噪性能。又例如,特殊用户(例如,听力受损用户或者年龄较大用户)在使用耳机300的过程中,听力能力与普通用户的听力能力存在差异,耳机300本身生成的降噪信号与特殊用户的听力能力不匹配,导致特殊用户的听觉体验较差。这种情况下,特殊用户可以根据自身的听觉效果手动调整降噪信号的频率信息、相位信息或者幅值信息,从而更新降噪信号以提高特殊用户的听觉体验。在一些实施例中,用户手动调整降噪信号的方式可以是通过耳机300上的键位进行手动调整。在一些实施例中,耳机300的固定结构310的任意位置(例如,保持部3122背离耳部的侧面)可以设有供用户调节的键位,以调节耳机300的主动降噪的效果,进而提高用户使用耳机300的听觉体验。在一些实施例中,用户手动调整降噪信号的方式也可以是通过终端设备进行手动输入调整。在一些实施例中,耳机300或者与耳机300通信连接的手机、平板电脑、电脑等电子产品上可以显示用户耳道处的声场,并反馈给用户建议的降噪信号的频率信息范围、幅值信息范 围或相位信息范围,用户可以根据建议的降噪信号的参数信息进行手动输入,然后再根据自身的听觉体验情况进行参数信息的微调。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本申请的限定。虽然此处并没有明确说明,本领域技术人员可能会对本申请进行各种修改、改进和修正。该类修改、改进和修正在本申请中被建议,所以该类修改、改进、修正仍属于本申请示范实施例的精神和范围。
同时,本申请使用了特定词语来描述本申请的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本申请至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本申请的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,本领域技术人员可以理解,本申请的各方面可以通过若干具有可专利性的种类或情况进行说明和描述,包括任何新的和有用的工序、机器、产品或物质的组合,或对他们的任何新的和有用的改进。相应地,本申请的各个方面可以完全由硬件执行、可以完全由软件(包括固件、常驻软件、微码等)执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为“数据块”、“模块”、“引擎”、“单元”、“组件”或“系统”。此外,本申请的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品,该产品包括计算机可读程序编码。
计算机存储介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等,或合适的组合形式。计算机存储介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机存储介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质,或任何上述介质的组合。
本申请各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写,包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、Visual Basic、Fortran 2003、Perl、COBOL 2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或服务器上运行。在后种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外,除非权利要求中明确说明,本申请所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本申请流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本申请实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的服务器或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本申请披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本申请实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本申请对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本申请一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本申请引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本申请作为参考。与本申请内容不一致或产生冲突的申请历史文件除外,对本申请权利要求最广范围有限制的文件(当前或之后附加于本申请 中的)也除外。需要说明的是,如果本申请附属材料中的描述、定义、和/或术语的使用与本申请所述内容有不一致或冲突的地方,以本申请的描述、定义和/或术语的使用为准。
最后,应当理解的是,本申请中所述实施例仅用以说明本申请实施例的原则。其他的变形也可能属于本申请的范围。因此,作为示例而非限制,本申请实施例的替代配置可视为与本申请的教导一致。相应地,本申请的实施例不仅限于本申请明确介绍和描述的实施例。

Claims (34)

  1. 一种耳机,其特征在于,包括:
    固定结构,被配置为将所述耳机固定在用户耳部附近且不堵塞用户耳道的位置,所述固定结构包括:钩状部和机体部,其中,在所述用户佩戴所述耳机时,所述钩状部挂设在所述用户耳部的第一侧与头部之间,所述机体部接触所述耳部的第二侧;
    第一麦克风阵列,位于所述机体部,被配置为拾取环境噪声;
    处理器,位于所述钩状部或所述机体部,被配置为:
    利用所述第一麦克风阵列对目标空间位置的声场进行估计,所述目标空间位置比所述第一麦克风阵列中任一麦克风更加靠近所述用户耳道,以及
    基于所述目标空间位置的声场估计生成降噪信号;以及
    扬声器,位于所述机体部,被配置为:根据所述降噪信号输出目标信号,所述目标信号通过出声孔传递至所述耳机的外部,用于降低所述环境噪声。
  2. 根据权利要求1所述的耳机,其特征在于,所述机体部包括连接部和保持部,其中,在所述用户佩戴所述耳机时,所述保持部接触所述耳部的第二侧,所述连接部连接所述钩状部和所述保持部。
  3. The earphone of claim 2, wherein, when the user wears the earphone, the connecting portion extends from the first side of the ear to the second side of the ear, the connecting portion cooperating with the hook portion to provide the holding portion with a pressing force against the second side of the ear, and
    the connecting portion cooperating with the holding portion to provide the hook portion with a pressing force against the first side of the ear.
  4. The earphone of claim 3, wherein, in a direction from a first connection point between the hook portion and the connecting portion to a free end of the hook portion, the hook portion bends toward the first side of the ear and forms a first contact point with the first side of the ear, and the holding portion forms a second contact point with the second side of the ear, wherein a distance between the first contact point and the second contact point along an extending direction of the connecting portion in a natural state is smaller than the distance between the first contact point and the second contact point along the extending direction of the connecting portion in a worn state, thereby providing the holding portion with a pressing force against the second side of the ear, and
    providing the hook portion with a pressing force against the first side of the ear.
  5. The earphone of claim 3, wherein, in the direction from the first connection point between the hook portion and the connecting portion to the free end of the hook portion, the hook portion bends toward the head and forms a first contact point and a third contact point with the head, wherein the first contact point is located between the third contact point and the first connection point, so that the hook portion forms a lever structure with the first contact point as a fulcrum; a force directed away from the head, provided by the head at the third contact point, is converted by the lever structure into a force directed toward the head at the first connection point, which in turn provides, via the connecting portion, the holding portion with a pressing force against the second side of the ear.
  6. The earphone of claim 2, wherein the speaker is disposed at the holding portion, and the holding portion is a multi-segment structure for adjusting the relative position of the speaker in the overall structure of the earphone.
  7. The earphone of claim 6, wherein the holding portion includes a first holding segment, a second holding segment, and a third holding segment connected end to end in sequence; an end of the first holding segment facing away from the second holding segment is connected to the connecting portion; the second holding segment is folded back relative to the first holding segment with a spacing therebetween, so that the first holding segment and the second holding segment form a U-shaped structure; and the speaker is disposed at the third holding segment.
  8. The earphone of claim 6, wherein the holding portion includes a first holding segment, a second holding segment, and a third holding segment connected end to end in sequence; an end of the first holding segment facing away from the second holding segment is connected to the connecting portion; the second holding segment is bent relative to the first holding segment; the third holding segment and the first holding segment are arranged side by side with a spacing therebetween; and the speaker is disposed at the third holding segment.
  9. The earphone of claim 2, wherein a side of the holding portion facing the ear is provided with the sound outlet hole, so that the target signal output by the speaker is transmitted to the ear through the sound outlet hole.
  10. The earphone of claim 9, wherein the side of the holding portion facing the ear includes a first region and a second region, the first region being provided with the sound outlet hole, and the second region being farther from the connecting portion than the first region and protruding toward the ear relative to the first region, so as to allow the sound outlet hole to be spaced apart from the ear in the worn state.
  11. The earphone of claim 10, wherein, when the user wears the earphone, a spacing between the sound outlet hole and the user's ear canal is less than 10 mm.
  12. The earphone of claim 2, wherein a pressure relief hole is provided on a side of the holding portion that lies along the vertical axis direction and is close to the top of the user's head, the pressure relief hole being farther from the user's ear canal than the sound outlet hole.
  13. The earphone of claim 12, wherein, when the user wears the earphone, a spacing between the pressure relief hole and the user's ear canal is 5 mm to 15 mm.
  14. The earphone of claim 12, wherein an angle between a line connecting the pressure relief hole and the sound outlet hole and a thickness direction of the holding portion is 0° to 50°.
  15. The earphone of claim 12, wherein the pressure relief hole and the sound outlet hole form an acoustic dipole, and the first microphone array is disposed in a first target region, the first target region being an acoustic null position of the sound field radiated by the dipole.
  16. The earphone of claim 12, wherein the first microphone array is located at the connecting portion.
  17. The earphone of claim 12, wherein a line connecting the first microphone array and the sound outlet hole forms a first angle with a line connecting the sound outlet hole and the pressure relief hole, a line connecting the first microphone array and the pressure relief hole forms a second angle with the line connecting the sound outlet hole and the pressure relief hole, and a difference between the first angle and the second angle is not greater than 30°.
  18. The earphone of claim 12, wherein there is a first distance between the first microphone array and the sound outlet hole and a second distance between the first microphone array and the pressure relief hole, and a difference between the first distance and the second distance is not greater than 6 mm.
  19. The earphone of claim 1, wherein generating the noise reduction signal based on the sound field estimation at the target spatial position includes:
    estimating noise at the target spatial position based on the picked-up ambient noise; and
    generating the noise reduction signal based on the noise at the target spatial position and the sound field estimation at the target spatial position.
  20. The earphone of claim 19, wherein the earphone further includes one or more sensors, located at the hook portion and/or the body portion, configured to acquire motion information of the earphone, and
    the processor is further configured to:
    update the noise at the target spatial position and the sound field estimation at the target spatial position based on the motion information; and
    generate the noise reduction signal based on the updated noise at the target spatial position and the updated sound field estimation at the target spatial position.
  21. The earphone of claim 19, wherein estimating the noise at the target spatial position based on the picked-up ambient noise includes:
    determining one or more spatial noise sources related to the picked-up ambient noise; and
    estimating the noise at the target spatial position based on the spatial noise sources.
  22. The earphone of claim 1, wherein estimating the sound field at the target spatial position using the first microphone array includes:
    constructing a virtual microphone based on the first microphone array, the virtual microphone including a mathematical model or a machine learning model for representing the audio data that would be collected by a microphone if one were placed at the target spatial position; and
    estimating the sound field at the target spatial position based on the virtual microphone.
  23. The earphone of claim 22, wherein generating the noise reduction signal based on the sound field estimation at the target spatial position includes:
    estimating the noise at the target spatial position based on the virtual microphone; and
    generating the noise reduction signal based on the noise at the target spatial position and the sound field estimation at the target spatial position.
  24. The earphone of claim 1, wherein the earphone includes a second microphone, located at the body portion, the second microphone being configured to pick up the ambient noise and the target signal; and
    the processor is configured to update the target signal based on the sound signal picked up by the second microphone.
  25. The earphone of claim 24, wherein the second microphone includes at least one microphone that is closer to the user's ear canal than any microphone in the first microphone array.
  26. The earphone of claim 24, wherein the second microphone is disposed in a second target region, the second target region being a region of the holding portion close to the user's ear canal.
  27. The earphone of claim 26, wherein, when the user wears the earphone, a distance between the second microphone and the user's ear canal is less than 10 mm.
  28. The earphone of claim 26, wherein, on the sagittal plane of the user, a distance between the second microphone and the sound outlet hole along the sagittal axis direction is less than 10 mm.
  29. The earphone of claim 26, wherein, on the sagittal plane of the user, a distance between the second microphone and the sound outlet hole along the vertical axis direction is 2 mm to 5 mm.
  30. The earphone of claim 24, wherein updating the noise reduction signal based on the sound signal picked up by the second microphone includes:
    estimating the sound field at the user's ear canal based on the sound signal picked up by the second microphone; and
    updating the noise reduction signal according to the sound field at the user's ear canal.
  31. The earphone of claim 1, wherein generating the noise reduction signal based on the sound field estimation at the target spatial position includes:
    dividing the picked-up ambient noise into multiple frequency bands, the multiple frequency bands corresponding to different frequency ranges; and
    generating, based on at least one of the multiple frequency bands, the noise reduction signal corresponding to each of the at least one frequency band.
  32. The earphone of claim 31, wherein generating, based on at least one of the multiple frequency bands, the noise reduction signal corresponding to each of the at least one frequency band includes:
    acquiring sound pressure levels of the multiple frequency bands; and
    generating, based on the sound pressure levels of the multiple frequency bands and the frequency ranges of the multiple frequency bands, the noise reduction signal corresponding to only some of the frequency bands.
  33. The earphone of claim 1, wherein the first microphone array or the second microphone includes a bone conduction microphone, the bone conduction microphone being configured to pick up the user's speech, and the processor estimating the noise at the target spatial position based on the picked-up ambient noise includes:
    removing, from the picked-up ambient noise, a component associated with the signal picked up by the bone conduction microphone, so as to update the ambient noise; and
    estimating the noise at the target spatial position according to the updated ambient noise.
  34. The earphone of claim 1, wherein the earphone further includes an adjustment module configured to:
    acquire a user input; and
    the processor is further configured to adjust the noise reduction signal according to the user input.
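By way of illustration and not limitation, the signal chain recited in claim 1 — pick up ambient noise, estimate it as it would appear at a target spatial position near the ear canal, and drive the speaker with a cancelling signal — reduces, in its simplest form, to emitting a phase-inverted copy of the estimated noise. The sample rate, the single-tone noise model, and the function name below are assumptions of this sketch, not features of the claimed earphone:

```python
import numpy as np

fs = 16_000                              # sample rate (Hz), assumed
t = np.arange(0, 0.1, 1 / fs)            # one 100 ms frame
# Ambient noise as estimated at the target spatial position: a 200 Hz
# tone stands in for the array-based sound field estimate of claim 1.
estimated_noise = np.sin(2 * np.pi * 200 * t)

def noise_reduction_signal(noise_estimate: np.ndarray) -> np.ndarray:
    """Anti-noise: equal amplitude, opposite phase."""
    return -noise_estimate

target_signal = noise_reduction_signal(estimated_noise)
# Superposition of the noise and the speaker's output at the target position.
residual = estimated_noise + target_signal
```

In practice the estimate is imperfect and the cancellation partial; the toy's perfect null only shows the intended superposition.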
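The "virtual microphone" of claim 22 may be any model mapping the first microphone array's signals to the sound at the target spatial position. A minimal sketch, assuming a purely linear model fitted by least squares against a reference signal at the target position (the two-microphone geometry and the mixing weights 0.4 and 0.3 are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
src = rng.standard_normal(n)             # broadband noise source

# Signals at the two array microphones (geometry invented for illustration).
mic1 = src
mic2 = np.roll(src, 2)                   # same source, slightly delayed

# During calibration a real microphone sits at the target spatial position;
# in this toy its signal happens to be an exact linear mix of the array signals.
target = 0.4 * mic1 + 0.3 * mic2

# Fit the "virtual microphone": a linear model target ~ w1*mic1 + w2*mic2.
X = np.stack([mic1, mic2], axis=1)
w, *_ = np.linalg.lstsq(X, target, rcond=None)

# After calibration the model predicts the sound at the target position
# from the array alone (the sound field estimation of claim 22).
virtual = X @ w
rms_error = float(np.sqrt(np.mean((virtual - target) ** 2)))
```

A machine learning model, as the claim also allows, would replace the least-squares fit with a trained regressor over the same input/output pair.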
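Claim 30 updates the noise reduction signal from the sound field that the second microphone observes at the ear canal. One conventional way to sketch such a feedback update — shown here with an LMS-style single-gain adaptation standing in for whatever the embodiments actually use, with step size and initial gain assumed — is to nudge the anti-noise gain in proportion to the residual the ear-canal microphone hears:

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
noise = np.sin(2 * np.pi * 100 * t)          # noise arriving at the ear canal
anti_shape = -np.sin(2 * np.pi * 100 * t)    # anti-noise waveform template

g = 0.2                                      # anti-noise gain, initially wrong
mu = 0.05                                    # adaptation step size, assumed
for i in range(len(t)):
    # The second microphone observes the superposition at the ear canal.
    residual = noise[i] + g * anti_shape[i]
    # Gradient step on residual**2 with respect to g (LMS-style update).
    g -= mu * residual * anti_shape[i]
```

The gain converges toward the value that nulls the residual; a full system would adapt filter coefficients rather than a single scalar.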
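Claims 31 and 32 divide the picked-up noise into frequency bands and generate cancellation only for some of them, based on per-band sound pressure level. A sketch under assumed band edges and an assumed level threshold (true SPL would be referenced to 20 µPa; a relative dB measure of spectral magnitude is used here instead):

```python
import numpy as np

fs, n = 8000, 8192
t = np.arange(n) / fs
# Ambient noise: a strong low-frequency hum plus a weak high-frequency tone.
ambient = 1.0 * np.sin(2 * np.pi * 120 * t) + 0.01 * np.sin(2 * np.pi * 3000 * t)

spec = np.fft.rfft(ambient)
freqs = np.fft.rfftfreq(n, 1 / fs)
edges = [(0, 500), (500, 1500), (1500, 4000)]   # frequency bands (Hz), assumed

anti_spec = np.zeros_like(spec)
active = []
for lo, hi in edges:
    band = (freqs >= lo) & (freqs < hi)
    # Per-band level: relative dB of the RMS spectral magnitude.
    level = 20 * np.log10(np.sqrt(np.mean(np.abs(spec[band]) ** 2)) + 1e-12)
    if level > 40:                       # threshold assumed: cancel loud bands only
        anti_spec[band] = -spec[band]    # phase-inverted content for this band
        active.append((lo, hi))

anti = np.fft.irfft(anti_spec, n)
residual = ambient + anti                # quiet bands pass through untouched
```

Only the loud low band is cancelled; the faint high band is left alone, mirroring claim 32's "only some of the frequency bands."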
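Claim 33 removes, from the picked-up ambient noise, the component correlated with the bone conduction microphone's signal, so that the user's own speech is not treated as noise to be cancelled. A least-squares projection is one hedged way to sketch this (real systems would use an adaptive filter over time-aligned frames; the 0.8 voice-pickup gain is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
speech = rng.standard_normal(n)          # user's own voice
ambient = rng.standard_normal(n)         # external noise, uncorrelated with speech

bone_mic = speech                        # bone conduction picks up (mostly) own voice
air_mic = 0.8 * speech + ambient         # ambient pickup also contains the voice

# Least-squares projection: estimate, then remove, the component of the
# air signal that is correlated with the bone conduction signal.
w = np.dot(air_mic, bone_mic) / np.dot(bone_mic, bone_mic)
updated_noise = air_mic - w * bone_mic   # the "updated ambient noise" of claim 33
```

The updated noise is, by construction, orthogonal to the bone-conducted speech, which is then excluded from the noise estimate at the target spatial position.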


