WO2022227514A1 - An earphone - Google Patents
An earphone
- Publication number: WO2022227514A1
- Application: PCT/CN2021/131927
- Authority: WIPO (PCT)
- Prior art keywords: user, noise, microphone, earphone, ear
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/1008—Earpieces of the supra-aural or circum-aural type
- H04R1/1041—Mechanical or electronic switches, or control elements
- H04R1/105—Earpiece supports, e.g. ear hooks
- H04R1/1066—Constructional aspects of the interconnection between earpiece and earpiece support
- H04R1/1075—Mountings of transducers in earphones or headphones
- H04R1/1083—Reduction of ambient noise
- H04R1/1091—Details not provided for in groups H04R1/1008 - H04R1/1083
- H04R1/406—Arrangements for obtaining desired directional characteristic only by combining a number of identical transducers; microphones
- H04R3/005—Circuits for combining the signals of two or more microphones
- H04R3/02—Circuits for preventing acoustic reaction, i.e. acoustic oscillatory feedback
- H04R3/04—Circuits for correcting frequency response
- H04R9/06—Loudspeakers (moving-coil, moving-strip, or moving-wire type)
- H04R2420/07—Applications of wireless loudspeakers or wireless microphones
- H04R2460/01—Hearing devices using active noise cancellation
- H04R2460/09—Non-occlusive ear tips, i.e. leaving the ear canal open
- H04R2460/11—Aspects relating to vents in ear tips of hearing devices to prevent occlusion
- H04R2460/13—Hearing devices using bone conduction transducers
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/17821—Active noise control characterised by the analysis of the input signals only
- G10K11/17823—Reference signals, e.g. ambient acoustic environment
- G10K11/17857—Geometric disposition, e.g. placement of microphones
- G10K11/17873—General system configurations using a reference signal without an error signal, e.g. pure feedforward
- G10K2210/1081—Earphones, e.g. for telephones, ear protectors or headsets
- G10K2210/3023—Estimation of noise, e.g. on error signals
- G10K2210/30231—Sources, e.g. identifying noisy processes or components
- G10K2210/3025—Determination of spectrum characteristics, e.g. FFT
- G10K2210/30351—Identification of the environment for applying appropriate model characteristics
- G10K2210/3038—Neural networks
- G10K2210/3047—Prediction, e.g. of future values of noise
- G10K2210/3056—Variable gain
Definitions
- the present application relates to the field of acoustics, and in particular, to an earphone.
- Active noise cancellation is a technique for canceling ambient noise by using the earphone's speaker to output sound waves that are in anti-phase with the external ambient noise.
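The anti-phase principle can be illustrated numerically: summing a noise waveform with an inverted copy of itself cancels it. This is a toy sketch; a real ANC system must also model the acoustic path between microphone, speaker, and ear, which is ignored here.

```python
import numpy as np

# Simulated ambient noise: a 200 Hz tone sampled at 16 kHz.
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
noise = 0.5 * np.sin(2 * np.pi * 200 * t)

# The ideal anti-noise is the noise shifted 180 degrees in phase.
anti_noise = -noise

# At the ear, the two waves superpose; the residual is (ideally) zero.
residual = noise + anti_noise
print(np.max(np.abs(residual)))  # → 0.0
```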
- Earphones can generally be divided into two categories: in-ear earphones and open earphones.
- In-ear earphones block the user's ear canal during use, and the user is prone to sensations of occlusion, a foreign body, or pain when wearing them for a long time.
- Open earphones leave the user's ears unblocked, which is conducive to long-term wear; however, when external noise is loud, their noise reduction effect is limited, which degrades the user's listening experience.
- An embodiment of the present application provides an earphone, comprising: a fixing structure configured to fix the earphone at a position near the user's ear without blocking the user's ear canal, the fixing structure comprising a hook portion and a body portion, wherein, when the user wears the earphone, the hook portion hangs between a first side of the user's ear and the head, and the body portion contacts a second side of the ear; a first microphone array, located in the body portion, configured to pick up ambient noise; and a processor, located in the hook portion or the body portion, configured to use the first microphone array to perform a sound field estimation at a target spatial position.
- the body portion includes a connecting portion and a holding portion, wherein the holding portion contacts the second side of the ear when the user wears the earphone, and the connecting portion connects the hook portion and the holding portion.
- when the user wears the earphone, the connecting portion extends from the first side of the ear to the second side of the ear; the connecting portion cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear, and cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear.
- in a direction from a first connection point between the hook portion and the connecting portion to a free end of the hook portion, the hook portion bends toward the first side of the ear and forms a first contact point with the first side of the ear, and the holding portion forms a second contact point with the second side of the ear; in the natural state, the distance between the first contact point and the second contact point along the extension direction of the connecting portion is smaller than that distance in the wearing state, whereby the holding portion provides a pressing force against the second side of the ear and the hook portion provides a pressing force against the first side of the ear.
- the hook portion bends toward the head in the direction from the first connection point between the hook portion and the connecting portion to the free end of the hook portion, and forms a first contact point and a third contact point with the head, wherein the first contact point is located between the third contact point and the first connection point, so that the hook portion forms a lever structure with the first contact point as a fulcrum; through the lever structure, the force provided by the head at the third contact point, directed toward the outside of the head, is converted into a force at the first connection point directed toward the head, which in turn, via the connecting portion, provides the holding portion with a pressing force against the second side of the ear.
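The lever action described above reduces to a moment balance about the fulcrum (the first contact point). The lever-arm lengths and force below are hypothetical illustration values, not figures from the patent.

```python
# Moment balance about the fulcrum (first contact point).
# All lengths and forces are hypothetical illustration values.
d_third = 30.0   # mm, lever arm from fulcrum to the third contact point
d_first = 15.0   # mm, lever arm from fulcrum to the first connection point
f_third = 1.0    # N, outward force from the head at the third contact point

# For the hook to be in equilibrium, moments about the fulcrum balance:
#   f_third * d_third = f_first * d_first
f_first = f_third * d_third / d_first  # N, directed toward the head
print(f_first)  # → 2.0
```

A shorter lever arm at the connection point thus amplifies the converted force, which the connecting portion passes on as the pressing force against the ear.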
- the speaker is disposed on the holding portion, and the holding portion is a multi-segment structure, so that the relative position of the speaker within the overall structure of the earphone can be adjusted.
- the holding portion includes a first holding segment, a second holding segment, and a third holding segment connected end to end in sequence; the end of the first holding segment facing away from the second holding segment is connected to the connecting portion, the second holding segment is folded back relative to the first holding segment with a spacing such that the first and second holding segments form a U-shaped structure, and the speaker is arranged on the third holding segment.
- the holding portion includes a first holding segment, a second holding segment, and a third holding segment connected end to end in sequence; the end of the first holding segment facing away from the second holding segment is connected to the connecting portion, the second holding segment is bent relative to the first holding segment, the third holding segment is arranged side by side with the first holding segment with a spacing, and the speaker is arranged on the third holding segment.
- a sound outlet hole is provided on the side of the holding portion facing the ear, so that the target signal output by the speaker is transmitted to the ear through the sound outlet hole.
- the side of the holding portion facing the ear includes a first area and a second area; the first area is provided with the sound outlet hole, and the second area is farther from the connecting portion than the first area and protrudes toward the ear compared with the first area, so that the sound outlet hole is spaced from the ear in the wearing state.
- the distance between the sound outlet and the user's ear canal is less than 10 mm.
- a pressure relief hole is provided on a side of the holding portion along the vertical-axis direction, close to the top of the user's head, and the pressure relief hole is farther from the user's ear canal than the sound outlet hole.
- the distance between the pressure relief hole and the user's ear canal is 5 mm to 15 mm.
- the included angle between the line connecting the pressure relief hole and the sound outlet hole and the thickness direction of the holding portion is 0° to 50°.
- the pressure relief hole and the sound outlet hole form an acoustic dipole, and the first microphone array is disposed in a first target area, the first target area being an acoustic zero (null) position of the sound field radiated by the dipole.
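The dipole null can be checked with a two-monopole model: two point sources driven in anti-phase cancel exactly on the plane equidistant from both, which is where placing the microphone array minimizes pickup of the speaker's own output. The geometry below (10 mm hole spacing, 1 kHz) is a hypothetical illustration, not from the patent.

```python
import numpy as np

k = 2 * np.pi * 1000 / 343.0  # wavenumber at 1 kHz, speed of sound 343 m/s

def dipole_pressure(r1, r2):
    """Complex pressure of two anti-phase monopoles at distances r1, r2."""
    return np.exp(-1j * k * r1) / r1 - np.exp(-1j * k * r2) / r2

# Sound outlet hole and pressure relief hole 10 mm apart (hypothetical).
src1, src2 = np.array([0.0, 0.005]), np.array([0.0, -0.005])

on_null = np.array([0.05, 0.0])   # equidistant from both sources
off_null = np.array([0.0, 0.05])  # on the dipole axis

p_null = abs(dipole_pressure(np.linalg.norm(on_null - src1),
                             np.linalg.norm(on_null - src2)))
p_axis = abs(dipole_pressure(np.linalg.norm(off_null - src1),
                             np.linalg.norm(off_null - src2)))
print(p_null < 1e-12, p_axis > 0.1)  # → True True
```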
- the first microphone array is located at the connecting portion.
- the line between the first microphone array and the sound outlet hole and the line between the sound outlet hole and the pressure relief hole form a first included angle; the line between the first microphone array and the pressure relief hole and the line between the sound outlet hole and the pressure relief hole form a second included angle; the difference between the first included angle and the second included angle is not more than 30°.
- there is a first distance between the first microphone array and the sound outlet and a second distance between the first microphone array and the pressure relief hole, and the difference between the first distance and the second distance is not more than 6 mm.
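The geometric constraints above (angle difference not more than 30°, distance difference not more than 6 mm) can be checked numerically. The sketch below uses hypothetical 2-D coordinates for the microphone array, sound outlet and pressure relief hole; none of these coordinate values come from the patent, they only illustrate the two criteria.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2-D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical positions (in mm) on the holding portion:
mic = (0.0, 10.0)      # first microphone array
outlet = (-4.0, 0.0)   # sound outlet hole
relief = (4.0, 0.0)    # pressure relief hole

d1 = math.dist(mic, outlet)   # first distance
d2 = math.dist(mic, relief)   # second distance

# first included angle: between the mic-outlet line and the outlet-relief line
a1 = angle_between((mic[0] - outlet[0], mic[1] - outlet[1]),
                   (relief[0] - outlet[0], relief[1] - outlet[1]))
# second included angle: between the mic-relief line and the outlet-relief line
a2 = angle_between((mic[0] - relief[0], mic[1] - relief[1]),
                   (outlet[0] - relief[0], outlet[1] - relief[1]))

assert abs(d1 - d2) <= 6.0    # distance-difference criterion
assert abs(a1 - a2) <= 30.0   # angle-difference criterion
```

A placement symmetric about the outlet-relief midline, as here, satisfies both criteria trivially, which is consistent with placing the array near the dipole's null plane.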
- the generating a noise reduction signal based on the sound field estimate of the target spatial location comprises: estimating noise at the target spatial location based on the picked-up ambient noise; and generating the noise reduction signal based on the noise at the target spatial location and the sound field estimate of the target spatial location.
- the headset further includes one or more sensors located on the hook portion and/or the body portion, configured to obtain motion information of the headset, and the processor is further configured to: update the noise at the target spatial position and the sound field estimate of the target spatial position based on the motion information; and generate the noise reduction signal based on the updated noise at the target spatial position and the updated sound field estimate of the target spatial position.
- the estimating noise at the target spatial location based on the picked-up ambient noise comprises: determining one or more spatial noise sources associated with the picked-up ambient noise; and estimating the noise at the target spatial location based on the spatial noise sources.
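One very simple way to realize the second step, estimating the noise at the target location from identified spatial noise sources, is a free-field point-source propagation model (1/r amplitude decay plus a travel delay). The patent does not specify a propagation model, and every position, sample rate and waveform below is invented for illustration.

```python
import numpy as np

fs = 16000       # sample rate, Hz (illustrative)
c = 343.0        # speed of sound, m/s

# Hypothetical noise sources identified from the picked-up ambient noise:
# each has a position (m, relative to the ear) and an emitted waveform.
t = np.arange(0, 0.01, 1 / fs)
sources = [
    {"pos": np.array([1.0, 0.0, 0.0]), "signal": np.sin(2 * np.pi * 500 * t)},
    {"pos": np.array([0.0, 2.0, 0.0]), "signal": np.sin(2 * np.pi * 900 * t)},
]

target = np.array([0.0, 0.0, 0.0])  # target spatial position near the ear canal

def propagate(src, listen_pos):
    """Free-field point-source model: 1/r decay plus a whole-sample
    travel delay (a deliberate simplification of the real acoustics)."""
    r = np.linalg.norm(src["pos"] - listen_pos)
    delay = int(round(r / c * fs))
    out = np.zeros_like(src["signal"])
    out[delay:] = src["signal"][:len(out) - delay] / max(r, 1e-6)
    return out

# Noise at the target position = superposition of all propagated sources.
noise_at_target = sum(propagate(s, target) for s in sources)
```

In practice the source positions and waveforms would themselves be estimated from the first microphone array, e.g. by beamforming, which this sketch does not attempt.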
- using the first microphone array to estimate the sound field of the target spatial position includes: constructing a virtual microphone based on the first microphone array, the virtual microphone comprising a mathematical model or a machine learning model that represents the audio data a microphone would collect if it were placed at the target spatial position; and estimating the sound field of the target spatial position based on the virtual microphone.
- the generating a noise reduction signal based on the sound field estimation of the target spatial position comprises: estimating noise at the target spatial position based on the virtual microphone; and generating the noise reduction signal based on the noise at the target spatial position and the sound field estimate of the target spatial position.
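The "virtual microphone" idea can be sketched with the simplest possible mathematical model: a linear map from the array channels to a reference microphone temporarily placed at the target spatial position (e.g. during a fitting stage on a head simulator), after which the physical reference microphone is removed. The patent allows either a mathematical model or a machine learning model; the least-squares fit and the synthetic data below are only one illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fitting phase: record N-channel array audio together with a reference
# microphone at the target spatial position. Shapes and weights are synthetic.
n_mics, n_samples = 4, 2000
array_audio = rng.standard_normal((n_samples, n_mics))
true_weights = np.array([0.5, -0.2, 0.8, 0.1])      # unknown in practice
target_audio = array_audio @ true_weights           # reference recording

# Virtual microphone = linear map fitted by least squares. A machine
# learning model (e.g. a small neural network) could replace this.
w, *_ = np.linalg.lstsq(array_audio, target_audio, rcond=None)

# Inference: the reference mic is gone; the virtual microphone estimates
# the sound at the target position from the array signals alone.
new_frame = rng.standard_normal((1, n_mics))
estimated = new_frame @ w
```

A linear map cannot capture frequency-dependent propagation; a per-frequency filter or a learned model would be the natural refinement, but the fit-then-predict structure stays the same.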
- the headset includes a second microphone located on the body portion, the second microphone configured to pick up the ambient noise and the target signal; and the processor is configured to update the noise reduction signal based on the sound signal picked up by the second microphone.
- the second microphone includes at least one microphone that is closer to the user's ear canal than any microphone in the first microphone array.
- the second microphone is disposed in a second target area, and the second target area is an area on the holding portion close to the user's ear canal.
- the distance between the second microphone and the user's ear canal is less than 10 mm.
- the distance between the second microphone and the sound outlet along the sagittal axis direction is less than 10 mm.
- the distance between the second microphone and the sound outlet along the vertical axis direction is 2 mm to 5 mm.
- the updating the noise reduction signal based on the sound signal picked up by the second microphone comprises: estimating a sound field at the user's ear canal based on the sound signal picked up by the second microphone; and updating the noise reduction signal according to the sound field at the user's ear canal.
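Updating the noise reduction signal from a second microphone near the ear is, in spirit, what adaptive feed-forward ANC algorithms such as LMS/FxLMS do: the near-ear microphone supplies a residual (error) signal that drives the filter update. The sketch below uses a plain LMS update on synthetic signals and ignores the speaker-to-ear secondary path, so it is an idealization and not the patent's algorithm; the "primary path" coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
reference = rng.standard_normal(n)          # ambient noise from the first array
primary_path = np.array([0.0, 0.6, 0.3])    # illustrative noise path to the ear
noise_at_ear = np.convolve(reference, primary_path)[:n]

# Adaptive FIR filter driven by the second (error) microphone, LMS update.
taps = 8
w = np.zeros(taps)
mu = 0.01
err_power = []
for i in range(taps, n):
    x = reference[i - taps:i][::-1]
    anti_noise = w @ x                      # target signal sent to the speaker
    e = noise_at_ear[i] - anti_noise        # residual picked up near the ear
    w += mu * e * x                         # update the noise reduction filter
    err_power.append(e * e)

# Residual energy should drop as the filter converges.
early = np.mean(err_power[:200])
late = np.mean(err_power[-200:])
```

A real implementation would filter the reference through a secondary-path estimate before the update (FxLMS) to keep the adaptation stable.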
- generating the noise reduction signal based on the sound field estimation of the target spatial location comprises: dividing the picked-up ambient noise into a plurality of frequency bands corresponding to different frequency ranges; and generating, based on at least one of the plurality of frequency bands, the noise reduction signal corresponding to each of the at least one frequency band.
- the generating, based on at least one of the plurality of frequency bands, the noise reduction signal corresponding to each of the at least one frequency band comprises: obtaining sound pressure levels of the plurality of frequency bands; and, based on the sound pressure levels and the frequency ranges of the plurality of frequency bands, generating the noise reduction signal for only some of the frequency bands.
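Band-wise, partial noise reduction as described above might look like the following sketch: split the picked-up noise into frequency bands via an FFT, estimate a crude per-band level, and synthesize anti-noise only for bands that cross a level threshold (and, here, only below 2 kHz, where feed-forward ANC is typically effective). The band edges, threshold, and the level estimate are all illustrative assumptions, not values from the patent.

```python
import numpy as np

fs = 16000
n = 1024
t = np.arange(n) / fs
# Illustrative ambient noise: a strong low-frequency hum plus a weak high tone.
noise = 1.0 * np.sin(2 * np.pi * 125 * t) + 0.05 * np.sin(2 * np.pi * 3000 * t)

spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, 1 / fs)

bands = [(20, 500), (500, 2000), (2000, 8000)]   # illustrative band edges
p_ref = 20e-6                                     # SPL reference pressure

selected = np.zeros_like(spectrum)
for lo, hi in bands:
    mask = (freqs >= lo) & (freqs < hi)
    # Crude band level (treating signal units as pascals for illustration).
    rms = np.sqrt(np.mean(np.abs(spectrum[mask]) ** 2)) / n
    spl = 20 * np.log10(max(rms / p_ref, 1e-12))
    # Only generate anti-noise for loud, low-frequency bands.
    if spl > 40 and hi <= 2000:
        selected[mask] = spectrum[mask]

# Partial-band noise reduction signal: phase-inverted selected bands.
noise_reduction = -np.fft.irfft(selected, n)
```

Adding `noise_reduction` to `noise` removes the 125 Hz hum while leaving the untreated 3 kHz band in place, which is exactly the "only some of the frequency bands" behavior.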
- the first microphone array or the second microphone includes a bone conduction microphone configured to pick up the user's speech, and the processor estimating the noise of the target spatial position based on the picked-up environmental noise includes: removing a component associated with the signal picked up by the bone conduction microphone from the picked-up environmental noise to update the environmental noise; and estimating the noise at the target spatial location according to the updated environmental noise.
- the headset further includes an adjustment module configured to obtain user input, and the processor is further configured to adjust the noise reduction signal according to the user input.
- FIG. 1 is a frame diagram of an exemplary headset shown in accordance with some embodiments of the present application.
- FIG. 2 is a schematic diagram of an exemplary ear according to some embodiments of the present application.
- FIG. 3 is a block diagram of an exemplary earphone according to some embodiments of the present application.
- FIG. 4 is a wearing diagram of an exemplary headset according to some embodiments of the present application.
- FIG. 5 is a block diagram of an exemplary earphone according to some embodiments of the present application.
- FIG. 6 is a wearing diagram of an exemplary headset according to some embodiments of the present application.
- FIG. 7 is a block diagram of an exemplary earphone according to some embodiments of the present application.
- FIG. 8 is a wearing diagram of an exemplary headset according to some embodiments of the present application.
- FIG. 9A is a block diagram of an exemplary earphone according to some embodiments of the present application.
- FIG. 9B is a block diagram of an exemplary earphone according to some embodiments of the present application.
- FIG. 10 is a structural diagram of an ear-facing side of an exemplary earphone according to some embodiments of the present application.
- FIG. 11 is a structural diagram of an exemplary earphone facing away from the ear according to some embodiments of the present application.
- FIG. 12 is a top view of an exemplary headset according to some embodiments of the present application.
- FIG. 13 is a schematic cross-sectional structural diagram of an exemplary earphone according to some embodiments of the present application.
- FIG. 14 is an exemplary noise reduction flow diagram of an earphone according to some embodiments of the present application.
- FIG. 15 is an exemplary flowchart of estimating noise at a spatial location of a target according to some embodiments of the present application.
- FIG. 16 is an exemplary flowchart for estimating the sound field and noise of a target spatial location according to some embodiments of the present application.
- FIG. 17 is an exemplary flowchart of updating a noise reduction signal according to some embodiments of the present application.
- FIG. 18 is an exemplary noise reduction flow diagram of an earphone according to some embodiments of the present application.
- FIG. 19 is an exemplary flowchart for estimating noise at a spatial location of a target, according to some embodiments of the present application.
- the terms "system", "device" and "module" are used to distinguish different components, elements, parts, sections or assemblies at different levels; these words may be replaced by other expressions if they serve the same purpose.
- the earphones may be open-back earphones.
- the open earphone can, through the fixing structure, fix the speaker near the user's ear without blocking the user's ear canal.
- the headset may include a stationary structure, a first microphone array, a processor, and a speaker.
- the securing structure may be configured to secure the earphone near the user's ear without blocking the user's ear canal.
- the first microphone array, processor and speaker may be located at the fixed structure to implement the active noise reduction function of the earphone.
- the fixing structure may include a hook portion and a body portion.
- when the user wears the earphone, the hook portion may be hung between the first side of the user's ear and the head, and the body portion contacts the second side of the ear.
- the body portion may include a connecting portion and a retaining portion; the retaining portion contacts the second side of the ear portion when the earphone is worn by a user, and the connecting portion connects the hook portion and the retaining portion.
- the connecting portion extends from the first side of the ear portion to the second side of the ear portion; the connecting portion cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear portion, and cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear portion, so that the earphone clamps the user's ear and remains stable when worn.
- the first microphone array may be located on the body portion of the headset for picking up ambient noise.
- the processor is located on the hook or body of the earphone and is used to estimate the sound field at the target spatial location.
- the target spatial location may include a spatial location close to the user's ear canal by a certain distance, eg, the target spatial location may be closer to the user's ear canal than any microphone in the first microphone array.
- the microphones in the first microphone array may be distributed at different positions near the user's ear canal, and the processor may estimate the sound field at a position close to the user's ear canal (for example, the target spatial location) according to the ambient noise collected by the microphones in the first microphone array.
- the speaker may be located in the body part (holding part), and output the target signal according to the noise reduction signal.
- the target signal can be transmitted to the outside of the earphone through the sound outlet on the holding part, so as to reduce the environmental noise heard by the user.
- the body portion may include a second microphone.
- the second microphone may be closer to the user's ear canal than the first microphone array, so the sound signal collected by the second microphone better reflects the sound actually heard by the user.
- the processor can update the noise reduction signal according to the sound signal collected by the second microphone, so as to achieve a more ideal noise reduction effect.
- the earphones provided in the embodiments of the present specification can be fixed near the user's ear through the fixing structure without blocking the user's ear canal, which opens the user's ears and improves the stability and comfort of the earphone in wearing.
- the ambient noise at the user's ear canal is reduced, thereby realizing the active noise reduction of the earphone, and improving the user's hearing experience in the process of using the earphone.
- FIG. 1 is a block diagram of an exemplary headset shown in accordance with some embodiments of the present application.
- the headset 100 may include a stationary structure 110 , a first microphone array 120 , a processor 130 and a speaker 140 .
- the first microphone array 120 , the processor 130 and the speaker 140 may be located at the fixed structure 110 .
- the earphone 100 can clamp the user's ear through the fixing structure 110 to fix the earphone 100 near the user's ear without blocking the user's ear canal.
- the first microphone array 120, located at the fixed structure 110 (e.g., the body part), picks up ambient noise and converts it into an electrical signal.
- the processor 130 is coupled (eg, electrically connected) to the first microphone array 120 and the speaker 140 .
- the processor 130 may receive and process the electrical signal transmitted by the first microphone array 120 to generate a noise reduction signal, and transmit the generated noise reduction signal to the speaker 140 .
- the speaker 140 may output the target signal according to the noise reduction signal.
- the target signal can be transmitted to the outside of the earphone 100 through the sound outlet on the fixed structure 110 (e.g., the holding part), and used to reduce or cancel the ambient noise at the position of the user's ear canal (e.g., the target spatial position), so as to realize the active noise reduction of the earphone 100 and improve the user's listening experience while using the headset 100.
- the securing structure 110 may include a hook portion 111 and a body portion 112 .
- the hook portion 111 may be hung between the first side of the user's ear and the head, and the body portion 112 contacts the second side of the ear.
- the first side of the ear may be the back side of the user's ear
- the second side of the user's ear may be the front side of the user's ear.
- the front side of the user's ear refers to the side of the user's ear that includes structures such as the concha, the triangular fossa, the antihelix, and the helix (see FIG. 2 for the structure of the ear).
- the back side of the user's ear refers to the side of the user's ear that is away from the front side, that is, the side opposite to the front side.
- the body portion 112 may include a connecting portion and a retaining portion.
- the holding portion contacts the second side of the ear portion, and the connecting portion connects the hook portion and the holding portion.
- the connecting portion extends from the first side of the ear portion to the second side of the ear portion; the connecting portion cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear portion, and cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear portion, so that the earphone 100 can be clamped near the user's ear by the fixing structure 110, thereby ensuring the stability of the earphone 100 in wearing.
- the part where the hook portion 111 and/or the body portion 112 (the connecting portion and/or the retaining portion) contacts the user's ear may be made of a softer material, a harder material, etc., or a combination thereof.
- a softer material refers to a material having a hardness (eg, Shore hardness) less than a first hardness threshold (eg, 15A, 20A, 30A, 35A, 40A, etc.).
- a softer material may have a Shore hardness of 45-85A, 30-60D.
- Softer materials may include, but are not limited to, Polyurethanes (PU) (e.g., Thermoplastic Polyurethanes (TPU)), Polycarbonate (PC), Polyamides (PA), Acrylonitrile Butadiene Styrene (ABS), Polystyrene (PS), High Impact Polystyrene (HIPS), Polypropylene (PP), Polyethylene Terephthalate (PET), Polyvinyl Chloride (PVC), Polyethylene (PE), Phenol Formaldehyde (PF), Urea-Formaldehyde (UF), Melamine-Formaldehyde (MF), etc., or a combination thereof.
- Harder materials may include, but are not limited to, Polyethersulfone (PES), Polyvinylidene Chloride (PVDC), Polymethyl Methacrylate (PMMA), Polyetheretherketone (PEEK), etc., or a combination thereof, or a mixture thereof with reinforcing agents such as glass fiber and carbon fiber.
- the material of the portion where the hook portion 111 of the fixing structure 110 and/or the body portion 112 is in contact with the user's ear can be selected according to specific conditions.
- the softer material can improve the user's comfort when wearing the earphone 100, and the harder material can increase the strength of the earphone 100; the materials of the components of the earphone 100 can thus be chosen to improve wearing comfort while increasing the strength of the headset 100.
- the first microphone array 120 may be located on the body portion 112 (eg, the connecting portion or the holding portion) of the fixed structure 110 for picking up ambient noise.
- ambient noise refers to a combination of multiple external sounds in the environment in which the user is located.
- the first microphone array 120 may be located near the user's ear canal. Based on the ambient noise obtained in this way, the processor 130 can more accurately calculate the noise actually transmitted to the user's ear canal, which is more conducive to subsequent active noise reduction of the ambient noise heard by the user.
- the ambient noise may include the sound of the user speaking.
- the first microphone array 120 may pick up ambient noise according to the working state of the earphone 100 .
- the working state of the earphone 100 may refer to the usage state used when the user wears the earphone 100 .
- the working state of the headset 100 may include, but is not limited to, a call state, a non-call state (eg, a music playing state), a voice message sending state, and the like.
- when the headset 100 is not in a call state, the sound produced by the user's own speech may be regarded as environmental noise, and the first microphone array 120 may pick up the user's own speech together with other environmental noises.
- when the headset 100 is in a call state, the sound produced by the user's own speech may not be regarded as ambient noise, and the first microphone array 120 may pick up ambient noise other than the user's own speaking sound.
- the first microphone array 120 may pick up noise emitted by a noise source located at a distance (eg, 0.5 meters, 1 meter) away from the first microphone array 120 .
- the first microphone array 120 may include one or more air conduction microphones.
- the air conduction microphone can simultaneously acquire the noise of the external environment and the voice of the user while speaking, and use the acquired noise of the external environment and the voice of the user as the ambient noise.
- the first microphone array 120 may also include one or more bone conduction microphones. The bone conduction microphone can be in direct contact with the user's skin, and the vibration signal generated by the bones or muscles when the user speaks can be directly transmitted to the bone conduction microphone, and then the bone conduction microphone converts the vibration signal into an electrical signal, and transmits the electrical signal to the processor 130 to be processed.
- the bone conduction microphone may not be in direct contact with the human body, and the vibration signal generated by the bones or muscles when the user speaks can be transmitted to the fixed structure 110 of the earphone 100 first, and then transmitted to the bone conduction microphone by the fixed structure 110 .
- the processor 130 may use the sound signal collected by the air conduction microphone as environmental noise and use the environmental noise for noise reduction, and the sound signal collected by the bone conduction microphone may be transmitted to the terminal device as a voice signal , so as to ensure the call quality of the user during the call.
- the processor 130 may control the switch states of the bone conduction microphone and the air conduction microphone based on the working state of the headset 100 .
- the switch state of the bone conduction microphone and the switch state of the air conduction microphone in the first microphone array 120 may be determined according to the working state of the earphone 100 .
- for example, in some working states of the earphone 100, the switch state of the bone conduction microphone may be the standby state while the switch state of the air conduction microphone is the working state; in other working states, the bone conduction microphone and the air conduction microphone may both be in the working state.
- the processor 130 may control the switch states of the microphones (eg, bone conduction microphones, air conduction microphones) in the first microphone array 120 by sending a control signal.
- the first microphone array 120 may include a dynamic microphone, a ribbon microphone, a condenser microphone, an electret microphone, an electromagnetic microphone, a carbon particle microphone, etc., or any combination thereof.
- the arrangement of the first microphone array 120 may include a linear array (e.g., a straight line, a curve), a planar array (e.g., a cross, a circle, a ring, a polygon, a mesh, or other regular and/or irregular shapes), a stereoscopic array (e.g., cylindrical, spherical, hemispherical, polyhedral, etc.), etc., or any combination thereof.
- the processor 130 may be located on the hook portion 111 or the body portion 112 of the fixed structure 110 , and the processor 130 may use the first microphone array 120 to estimate the sound field of the target spatial position.
- the sound field of a target spatial location may refer to the distribution and variation of sound waves at or near the target spatial location (eg, as a function of time, as a function of location).
- the physical quantities describing the sound field may include sound pressure level, sound frequency, sound amplitude, sound phase, sound source vibration velocity, or medium (eg air) density, and the like. In general, these physical quantities can be functions of position and time.
- the target spatial location may refer to a spatial location close to the user's ear canal by a specific distance.
- the specific distance here may be a fixed distance, for example, 2mm, 5mm, 10mm, and the like.
- the target spatial location may be closer to the user's ear canal than any microphone in the first microphone array 120 .
- the target spatial position may be related to the number of microphones in the first microphone array 120 and their distribution positions relative to the user's ear canal.
- the target spatial position can be adjusted by adjusting the number and/or the distribution position of each microphone in the first microphone array 120 relative to the user's ear canal. For example, by increasing the number of microphones in the first microphone array 120, the target spatial position can be made closer to the user's ear canal.
- the target spatial position can also be made closer to the ear canal of the user by reducing the distance between the microphones in the first microphone array 120 .
- the arrangement of the microphones in the first microphone array 120 can also be changed to make the target spatial position closer to the user's ear canal.
- the processor 130 may be further configured to generate a noise reduction signal based on the sound field estimate of the target spatial location.
- the processor 130 may receive the ambient noise acquired by the first microphone array 120 and process it to acquire parameters (e.g., amplitude, phase, etc.) of the ambient noise, and estimate the sound field at the target spatial position based on the parameters of the ambient noise.
- the processor 130 generates a noise reduction signal based on the sound field estimation of the target spatial location.
- the parameters of the noise reduction signal (eg, amplitude, phase, etc.) are related to the ambient noise at the target spatial location.
- the magnitude of the noise reduction signal may be approximately equal to the magnitude of the ambient noise at the target spatial location
- the phase of the noise reduction signal may be approximately opposite to the phase of the ambient noise at the target spatial location.
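The amplitude-matched, phase-inverted relationship described above can be illustrated numerically. The second half of the sketch also shows how a small phase error limits the achievable attenuation (a 5° error leaves roughly -21 dB of residual); the frequency, amplitude and error values are illustrative.

```python
import numpy as np

fs = 48000
t = np.arange(0, 0.005, 1 / fs)   # exactly one period of a 200 Hz tone
# Illustrative estimate of the ambient noise at the target spatial position:
noise = 0.2 * np.sin(2 * np.pi * 200 * t + 0.3)

# Ideal noise reduction signal: equal amplitude, opposite phase.
noise_reduction = -noise
residual = noise + noise_reduction   # cancels exactly at the target position

# With a 5° phase error the cancellation is only partial.
phase_err = np.deg2rad(5)
anti = -0.2 * np.sin(2 * np.pi * 200 * t + 0.3 + phase_err)
residual2 = noise + anti
att = 20 * np.log10(np.std(residual2) / np.std(noise))  # attenuation in dB
```

This sensitivity to phase is one reason the sound field must be estimated at the target spatial position itself rather than at the microphones.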
- the speaker 140 may be located at the holding portion of the fixing structure 110, and when the user wears the earphone 100, the speaker 140 is located near the user's ear.
- the speaker 140 may output the target signal according to the noise reduction signal.
- the target signal can be transmitted to the user's ear through the sound outlet hole of the holding part, so as to reduce or eliminate the environmental noise transmitted to the user's ear canal.
- according to its working principle, the speaker 140 may include one or more of an electrodynamic speaker (e.g., a moving coil speaker), a magnetic speaker, an ion speaker, an electrostatic speaker (or condenser speaker), a piezoelectric speaker, etc.
- the speaker 140 may include an air conduction speaker or a bone conduction speaker according to the transmission mode of the sound output by the speaker.
- the number of speakers 140 may be one or more.
- the speaker can output the target signal to cancel the ambient noise, and at the same time deliver effective sound information (eg, device media audio, call far-end audio) to the user.
- the air conduction speaker can be used to output a target signal to cancel ambient noise.
- the target signal may be a sound wave (ie, the vibration of the air), which may be transmitted through the air to the target spatial location and cancel each other with ambient noise at the target spatial location.
- the sound wave output by the air conduction speaker also includes effective sound information.
- the bone conduction speaker can be used to output the target signal to eliminate ambient noise.
- the target signal may be a vibration signal, which may be transmitted through the bone or tissue to the user's basilar membrane and cancel each other out with ambient noise at the user's basilar membrane.
- the vibration signal output by the bone conduction speaker also includes effective sound information.
- a part of the multiple speakers 140 may be used to output the target signal to eliminate ambient noise, and the other part may be used to deliver effective sound information (e.g., device media audio, call far-end audio) to the user.
- the air conduction speakers can be used to output sound waves to reduce or eliminate ambient noise, and the bone conduction speakers can be used to deliver effective sound information to the user.
- bone conduction speakers can directly transmit mechanical vibrations through the user's body (e.g., bones, skin tissue, etc.) to the user's auditory nerves, and in this process the interference with the air conduction microphones that pick up ambient noise is relatively small.
- the speaker 340 and the first microphone array 120 are both located on the body portion 112 of the earphone 300, so the target signal output by the speaker 340 may also be picked up by the first microphone array 120; however, the target signal is not expected to be picked up, that is, the target signal should not be treated as part of the ambient noise.
- the first microphone array 120 may be disposed in the first target area.
- the first target area may be an area in space where the intensity of the sound emitted by the speaker 340 is low or even minimal.
- the first target area may be the acoustic null of the sound field radiated by the acoustic dipole formed by the earphone 100 (e.g., by the sound outlet and the pressure relief hole), or a position within a threshold distance from the acoustic null.
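Why the first target area sits at the dipole's acoustic null can be shown with two anti-phase monopoles standing in for the sound outlet and the pressure relief hole: on the perpendicular bisector of the pair the two contributions cancel exactly. The spacing, frequency and observation radius below are illustrative values, not dimensions from the patent.

```python
import numpy as np

f = 1000.0                 # tone frequency, Hz
c = 343.0                  # speed of sound, m/s
k = 2 * np.pi * f / c      # wavenumber
d = 0.008                  # outlet-to-relief-hole spacing, 8 mm (illustrative)
r = 0.3                    # observation radius, m

angles = np.radians(np.arange(0, 181, 1))
# Two monopoles in anti-phase at (+/- d/2, 0): the acoustic dipole formed
# by the sound outlet and the pressure relief hole.
p = np.empty(len(angles), dtype=complex)
for i, th in enumerate(angles):
    obs = np.array([r * np.cos(th), r * np.sin(th)])
    r1 = np.linalg.norm(obs - np.array([+d / 2, 0.0]))
    r2 = np.linalg.norm(obs - np.array([-d / 2, 0.0]))
    p[i] = np.exp(-1j * k * r1) / r1 - np.exp(-1j * k * r2) / r2

mag = np.abs(p)
null_angle = np.degrees(angles[np.argmin(mag)])  # acoustic null of the dipole
```

The minimum lands at 90°, i.e. on the bisector plane of the two openings, which is where placing the first microphone array minimizes pickup of the speaker's own output.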
- the fixing structure 110 of the earphone 100 can be replaced with a housing structure having a shape suitable for human ears (eg, C-shape, semicircle, etc.), so that the earphone 100 can be hung near the user's ear.
- a component in headset 100 may be split into multiple sub-components, or multiple components may be combined into a single component.
- FIG. 2 is a schematic diagram of an exemplary ear shown in accordance with some embodiments of the present application.
- the ear 200 may include an external auditory canal 201 , a concha cavity 202 , a concha 203 , a triangular fossa 204 , an antihelix 205 , a concha 206 , a helix 207 , an earlobe 208 and a helix 209 .
- the wearing and stabilization of an earphone (e.g., earphone 100) may rely on parts of the ear: the external auditory canal 201, the concha cavity 202, the concha 203, the triangular fossa 204 and other parts have a certain depth and volume in three-dimensional space, which can be used to meet the wearing requirements of the earphone.
- when wearing an open earphone (e.g., earphone 100), parts such as the user's earlobe 208 may also be used.
- the user's external auditory canal 201 can be "liberated", and the impact of the earphone on the user's ear health can be reduced.
- the earphone will not block the user's external ear canal 201, and the user can receive both the sound from the earphone and the sound from the environment (for example, honking, car bells, surrounding human voices, traffic command sounds) etc.), thereby reducing the probability of traffic accidents.
- the whole or part of the structure of the earphone may be located on the front side of the helix 209 (eg, the area J enclosed by the dotted line in FIG. 2 ).
- the whole or part of the structure of the earphone may contact the upper part of the external auditory canal 201 (for example, the area where one or more parts such as the helix 209, the concha 203, the triangular fossa 204, the antihelix 205, the concha 206 and the helix 207 are located).
- the whole or part of the structure of the earphone may be located in one or more parts of the ear (for example, the concha cavity 202, the concha 203, the triangular fossa 204, etc.), e.g., the area M enclosed by dashed lines in FIG. 2.
- ear 200 is for illustrative purposes only and is not intended to limit the scope of the present application.
- various changes and modifications can be made based on the description of the present application.
- the structure, shape, size, thickness, etc. of one or more parts of the ear 200 may be different for different users.
- a part of the structure of the earphone may shield part or all of the external auditory canal 201 .
- FIG. 3 is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
- FIG. 4 is a wearing diagram of an exemplary headset according to some embodiments of the present application.
- the earphone 300 may include a fixing structure 310 , a first microphone array 320 , a processor 330 and a speaker 340 .
- the first microphone array 320 , the processor 330 and the speaker 340 are located at the fixed structure 310 .
- the fixing structure 310 can be used to hang the earphone 300 near the user's ear without blocking the user's ear canal.
- the securing structure 310 may include a hook portion 311 and a body portion 312 .
- the hook portion 311 may comprise any shape suitable for being worn by a user, eg, a C shape, a hook shape, and the like.
- when the user wears the earphone 300, the hook portion 311 may be hung between the first side of the user's ear and the head.
- the body part 312 may include a connecting part 3121 and a holding part 3122 , wherein the connecting part 3121 is used for connecting the hook part 311 and the holding part 3122 .
- the holding part 3122 contacts the second side of the ear part, the connecting part 3121 extends from the first side of the ear part to the second side of the ear part, and the two ends of the connecting part 3121 are respectively connected with the hook portion 311 and the holding portion 3122.
- the connecting portion 3121 cooperates with the hook portion 311 to provide the holding portion 3122 with a pressing force against the second side of the ear, and cooperates with the holding portion 3122 to provide the hook portion 311 with a pressing force against the first side of the ear.
- the connecting portion 3121 connects the hook portion 311 and the holding portion 3122 such that the fixing structure 310 is curved in three-dimensional space; it can also be understood that, in three-dimensional space, the hook portion 311, the connecting portion 3121, and the holding portion 3122 are not coplanar. With this arrangement, when the earphone 300 is in the wearing state, as shown in the figures, the hook portion 311 can be hung between the first side of the user's ear 100 and the head, and the holding portion 3122 contacts the second side of the user's ear 100, so that the holding portion 3122 and the hook portion 311 cooperate to clamp the ear.
- the connecting portion 3121 may extend from near the head to the outside of the head (ie, from the first side of the ear 100 to the second side), and then cooperate with the hook portion 311 to provide the holding portion 3122 with a pressing force against the second side of the ear 100.
- the fixing structure 310 can clamp the user's ear 100 to realize the wearing of the earphone 300 .
- the holding portion 3122 can press against the ear under the action of the pressing force, for example, against the area where the concha, the triangular fossa, the antihelix, and other parts are located, so that when the earphone 300 is in the wearing state, the external auditory canal of the ear is not covered.
- the projection of the holding portion 3122 on the user's ear may fall within the range of the helix of the ear; further, the holding portion 3122 may be located on the side of the external auditory canal close to the top of the user's head, in contact with the helix and/or the antihelix.
- the holding portion 3122 can thus be prevented from covering the external auditory canal, leaving the user's ears unobstructed.
- the contact area between the holding portion 3122 and the ear portion can also be increased, thereby improving the wearing comfort of the earphone 300 .
- the speaker 340 located at the holding part 3122 can be closer to the user's ear canal, improving the user's listening experience when using the headset 300 .
- in order to improve the stability and comfort when the user wears the earphone 300, the earphone 300 can also elastically clamp the ear.
- the hook portion 311 of the earphone 300 may include an elastic portion (not shown) connected with the connection portion 3121 .
- the elastic portion may have a certain elastic deformation capability, so that the hook portion 311 can be deformed under the action of an external force, and then displaced relative to the holding portion 3122 to allow the hook portion 311 and the holding portion 3122 to cooperate to elastically clamp the ear portion.
- when wearing, the user can first force the hook portion 311 away from the holding portion 3122, so that the ear can extend between the holding portion 3122 and the hook portion 311; the hook portion 311 is then released, allowing the earphone 300 to elastically grip the ear.
- the user can further adjust the position of the earphone 300 on the ear according to the actual wearing situation.
- the hook portion 311 may be rotatable relative to the connecting portion 3121, or the holding portion 3122 may be rotatable relative to the connecting portion 3121, or a part of the connecting portion 3121 may be rotatable relative to another part, so that the relative positional relationship of the hook portion 311, the connecting portion 3121, and the holding portion 3122 in three-dimensional space can be adjusted. This allows the earphone 300 to be adapted to different users, that is, it increases the range of users to whom the earphone 300 is applicable.
- by setting the relative positional relationship of the hook portion 311, the connecting portion 3121, and the holding portion 3122 in three-dimensional space to be adjustable, the positions of the first microphone array 320 and the speaker 340 relative to the user's ear (eg, the external auditory canal) can also be adjusted, thereby improving the active noise reduction effect of the earphone 300.
- the connecting portion 3121 can be made of a deformable material such as soft steel wire; the user can bend the connecting portion 3121 to rotate one part relative to another, thereby adjusting the relative positions of the hook portion 311, the connecting portion 3121, and the holding portion 3122 in three-dimensional space to meet their wearing needs.
- the connecting portion 3121 may also be provided with a rotating shaft mechanism 31211, through which the user adjusts the relative positions of the hook portion 311, the connecting portion 3121, and the holding portion 3122 in three-dimensional space to meet their wearing requirements.
- the earphone 300 can use the first microphone array 320 and the processor 330 to estimate the sound field at the user's ear canal (eg, the target spatial position), and output the target signal through the speaker 340 to reduce the ambient noise at the user's ear canal, so as to achieve active noise reduction for the earphone 300.
- the first microphone array 320 may be located on the body portion 312 of the fixed structure 310 , so that when the user wears the headset 300 , the first microphone array 320 may be located near the user's ear canal.
- the first microphone array 320 can pick up the environmental noise near the user's ear canal, and the processor 330 can further estimate the environmental noise at the target spatial position according to the environmental noise near the user's ear canal, for example, the environmental noise at the user's ear canal.
- to prevent the target signal output by the speaker 340 from also being picked up by the first microphone array 320, the first microphone array 320 may be located in a region of space where the intensity of the sound emitted by the speaker 340 is low or even the smallest, for example, at the acoustic zero point of the radiated sound field of the acoustic dipole formed by the earphone 300 (eg, by the sound outlet and the pressure relief hole).
- the processor 330 may be located on the hook portion 311 or the body portion 312 of the fixation structure 310 .
- the processor 330 is electrically connected to the first microphone array 320 .
- the processor 330 may estimate the sound field of the target spatial position based on the ambient noise picked up by the first microphone array 320, and generate a noise reduction signal based on the sound field estimation of the target spatial position.
- for the specific content of how the processor 330 estimates the sound field at the target spatial position using the first microphone array 320, reference may be made to FIGS. 14-16 of this specification and the related descriptions thereof.
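The active noise reduction pipeline described above — pick up ambient noise near the ear canal with the first microphone array, estimate the noise at the target spatial position, and drive the speaker with an anti-phase signal — can be sketched as follows. This is a minimal illustration under simplifying assumptions (a single noise source, known source-to-microphone distances, and a delay-compensated average as the estimator); the function names are hypothetical and do not come from the patent.

```python
import numpy as np

def estimate_noise_at_target(mic_signals, mic_dists, target_dist, fs, c=343.0):
    """Estimate ambient noise at a target point (e.g. the ear canal)
    from a small microphone array, by compensating each microphone's
    propagation delay relative to the target and averaging.

    mic_signals: (n_mics, n_samples) array of picked-up ambient noise
    mic_dists:   distance (m) from the noise source to each microphone
    target_dist: distance (m) from the noise source to the target point
    fs:          sampling rate (Hz); c: speed of sound (m/s)
    """
    n_mics, _ = mic_signals.shape
    aligned = np.zeros_like(mic_signals)
    for i in range(n_mics):
        # Extra travel time from microphone i's position to the target,
        # expressed in whole samples.
        delay = int(round((target_dist - mic_dists[i]) / c * fs))
        aligned[i] = np.roll(mic_signals[i], delay)
    return aligned.mean(axis=0)

def noise_reduction_signal(estimated_noise):
    # The target signal is the estimated noise with inverted phase,
    # so that it cancels the ambient noise at the target point.
    return -estimated_noise
```

In this toy setup, summing the noise-reduction signal with the true ambient noise at the target point yields (approximately) silence, which is the goal of the active noise reduction described for the earphone 300.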
- the processor 330 may also be used to control the sound production of the speaker 340 .
- the processor 330 can control the sound of the speaker 340 according to the instruction input by the user.
- the processor 330 may generate instructions to control the speaker 340 based on information of one or more components of the headset 300 .
- the processor 330 may control other components of the headset 300 (eg, the battery).
- the processor 330 may be disposed on any part of the fixed structure 310 .
- the processor 330 may be provided in the holding portion 3122 .
- the wiring distance between the processor 330 and other components disposed on the holding portion 3122 (eg, the speaker 340, the key switch, etc.) can be shortened, so as to reduce signal interference between the wirings and the possibility of a short circuit between them.
- the speaker 340 may be located in the holding portion 3122 of the body portion 312, such that when the user wears the headset 300, the speaker 340 may be located in the vicinity of the user's ear canal.
- the speaker 340 may output a target signal based on the noise reduction signal generated by the processor 330 .
- the target signal may be transmitted to the outside of the earphone 300 through a sound outlet (not shown) on the holding part 3122 for reducing ambient noise at the user's ear canal.
- the sound hole on the holding part 3122 may be located on the side of the holding part 3122 facing the user's ear, so that the sound hole can be close enough to the user's ear canal, and the sound emitted by the sound hole can be better heard by the user.
- the headset 300 may also include components such as a battery 350 .
- the battery 350 may provide power for other components of the headset 300 (eg, the first microphone array 320, the speaker 340, etc.).
- any two of the first microphone array 320, the processor 330, the speaker 340, and the battery 350 may communicate in a variety of ways, eg, wired connection, wireless connection, etc., or a combination thereof.
- wired connections may include metallic cables, optical cables, or hybrid metallic and optical cables, among others. The examples described above are only used for convenience of illustration, and the medium of the wired connection may also be other types, for example, other transmission carriers of electrical signals or optical signals.
- Wireless connections may include radio communications, free space optical communications, acoustic communications, electromagnetic induction, and the like.
- the battery 350 may be disposed at an end of the hook portion 311 away from the connecting portion 3121 and located between the backside of the user's ear and the head when the earphone 300 is in a wearing state. In this setting mode, the capacity of the battery 350 can be increased, and the battery life of the earphone 300 can be improved. At the same time, the weight of the earphone 300 can also be balanced so as to overcome the self-weight of the holding part 3122 , its internal processor 330 , the speaker 340 and other structures to improve the wearing stability and comfort of the earphone 300 . In some embodiments, the battery 350 may also transmit its own state information to the processor 330 and receive instructions from the processor 330 to perform corresponding operations. The status information of the battery 350 may include on/off status, remaining power, remaining power usage time, charging time, etc., or a combination thereof.
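The battery status information enumerated above can be modeled as a simple data structure. The class and field names below are hypothetical illustrations of how the battery 350 might report its state to the processor 330, not an interface defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class BatteryStatus:
    """Status information the battery 350 may report to the processor 330."""
    powered_on: bool          # on/off status
    remaining_charge: float   # remaining power, as a fraction (0.0 to 1.0)
    remaining_minutes: float  # estimated remaining usage time, in minutes
    charging_minutes: float   # estimated time to full charge, in minutes

    def needs_charging(self, threshold: float = 0.2) -> bool:
        # A processor-side check of this kind might trigger a
        # low-battery prompt or power-saving behavior.
        return self.remaining_charge < threshold
```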
- One or more coordinate systems are established in this specification in order to facilitate the description of the interrelationship of various parts of the headset (eg, the headset 300 ) and the relationship between the headset and the user.
- similar to the medical field, three basic planes of the human body can be defined: the sagittal plane (Sagittal Plane), the coronal plane (Coronal Plane), and the transverse plane (Horizontal Plane), as well as three basic axes: the sagittal axis (Sagittal Axis), the coronal axis (Coronal Axis), and the vertical axis (Vertical Axis). Referring to the coordinate axes in FIG. 2 and the subsequent figures, the sagittal plane refers to a cut plane perpendicular to the ground along the front-back direction of the body, which divides the human body into left and right parts.
- in the embodiments of this specification, the sagittal plane may refer to the YZ plane, that is, the X-axis is perpendicular to the user's sagittal plane;
- the coronal plane refers to a cut plane perpendicular to the ground along the left-right direction of the body, which divides the human body into front and rear parts; in the embodiments of this specification, the coronal plane may refer to the XZ plane, that is, the Y-axis is perpendicular to the user's coronal plane.
- the transverse plane refers to a cut plane parallel to the ground made along the up-down direction of the body, which divides the human body into upper and lower parts; in the embodiments of this specification, the transverse plane may refer to the XY plane, that is, the Z-axis is perpendicular to the user's transverse plane.
- the sagittal axis refers to an axis that vertically passes through the coronal plane along the anterior-posterior direction of the body.
- in the embodiments of this specification, the sagittal axis may refer to the Y-axis; the coronal axis refers to an axis that perpendicularly passes through the sagittal plane along the left-right direction of the body and may refer to the X-axis; the vertical axis refers to an axis that perpendicularly passes through the transverse plane along the up-down direction of the body and may refer to the Z-axis.
- FIG. 5 is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
- FIG. 6 is a wearing diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
- the hook portion 311 may be close to the holding portion 3122, so that when the earphone 300 is in the wearing state, as shown in FIG. 6, the free end of the hook portion 311 away from the connecting portion 3121 acts on the first side (rear side) of the user's ear 100.
- the connecting portion 3121 is connected with the hook-shaped portion 311 , and the connecting portion 3121 and the hook-shaped portion 311 form a first connection point C.
- the hook portion 311 is bent toward the rear side of the ear 100 and forms a first contact point B with the rear side of the ear 100.
- the holding portion 3122 forms a second contact point F with the second side (front side) of the ear portion 100 .
- in the natural state, the distance between the first contact point B and the second contact point F of the earphone 300 along the extension direction of the connecting portion 3121 is smaller than that in the wearing state; that is, this distance in the natural state is smaller than the thickness of the user's ear 100, so that in the wearing state the earphone 300 can clamp the user's ear 100 like a "clip".
- the hook portion 311 may also extend in a direction away from the connecting portion 3121, that is, the overall length of the hook portion 311 is extended, so that when the earphone 300 is in the wearing state, the hook portion 311 can also form a third contact point A with the rear side of the ear 100; the first contact point B is located between the first connection point C and the third contact point A, and is close to the first connection point C.
- in the natural state, the distance between the projections of the first contact point B and the third contact point A on a reference plane (such as the YZ plane) perpendicular to the extension direction of the connecting portion 3121 may be smaller than that in the wearing state, so that the free end of the hook portion 311 is pressed against the back side of the user's ear 100 and the third contact point A is located in the area of the ear 100 close to the earlobe; in this way, the hook portion 311 can clamp the user's ear 100 in the vertical direction (Z-axis direction) to overcome the self-weight of the holding portion 3122.
- this can also increase the contact area between the hook portion 311 and the user's ear 100 while clamping the ear in the vertical direction, that is, increase the frictional force between the hook portion 311 and the user's ear 100, thereby improving the wearing stability of the earphone 300.
- a connecting portion 3121 is provided between the hook portion 311 and the holding portion 3122 of the earphone 300, so that when the earphone 300 is in the wearing state, the connecting portion 3121 cooperates with the hook portion 311 to provide the holding portion 3122 with a pressing force against the ear; the earphone 300 can thus be firmly attached to the user's ear in the wearing state, improving the stability of the earphone 300 when worn and the reliability of its sound production.
- FIG. 7 is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
- FIG. 8 is a wearing diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
- the earphone 300 shown in FIGS. 7-8 is substantially the same as the earphone 300 shown in FIGS. 5-6 , the difference being that the bending direction of the hook portion 311 is different.
- in the direction from the first connection point C between the hook portion 311 and the connecting portion 3121 to the free end of the hook portion 311 (the end away from the connecting portion 3121), the hook portion 311 is bent toward the user's head and forms a first contact point B and a third contact point A with the head.
- the first contact point B is located between the third contact point A and the first connection point C.
- the hook portion 311 can form a lever structure with the first contact point B as a fulcrum.
- the free end of the hook portion 311 is pressed against the user's head, and the user's head provides a force directed toward the outside of the head at the third contact point A; through the lever structure, this force is converted into a head-directed force at the first connection point C, which provides the holding portion 3122, via the connecting portion 3121, with a pressing force against the first side of the ear 100.
- the magnitude of the force directed toward the outside of the head provided by the user's head at the third contact point A is positively correlated with the size of the included angle between the free end of the hook portion 311 and the YZ plane when the earphone 300 is in the non-wearing state. Specifically, the larger this angle in the non-wearing state, the more tightly the free end of the hook portion 311 can press against the user's head in the wearing state, and the larger the force the user's head can correspondingly provide at the third contact point A toward the outside of the head.
- the angle formed between the free end of the hook portion 311 and the YZ plane when the earphone 300 is in the non-wearing state can be greater than that when the earphone 300 is in the wearing state.
- when the free end of the hook portion 311 is pressed against the user's head, in addition to making the user's head provide a force directed toward the outside of the head at the third contact point A, the hook portion 311 will also form another pressing force against at least the first side of the ear 100; this can cooperate with the pressing force formed by the holding portion 3122 against the second side of the ear 100 to press the user's ear 100 in a "front and rear pinch", improving the stability of the earphone 300 when worn.
- differences in the physiological structures of different users' heads and ears will affect the actual wearing of the earphone 300 to a certain extent, and the contact points between the earphone 300 and the user's head or ear (for example, the positions of the first contact point B, the second contact point F, the third contact point A, etc.) may change accordingly.
- when the speaker 340 is located in the holding portion 3122, the actual wearing of the earphone 300 will be affected to a certain extent by the differences in the physiological structures of different users' heads and ears; therefore, when the earphone 300 is worn by different users, the relative position of the speaker 340 to the user's ear will change. In some embodiments, the position of the speaker 340 in the overall structure of the earphone 300 can be adjusted by setting the structure of the holding portion 3122, thereby adjusting the distance of the speaker 340 relative to the user's ear canal.
- FIG. 9A is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
- FIG. 9B is a block diagram of an exemplary earphone shown in accordance with some embodiments of the present application.
- the holding part 3122 can be designed as a multi-segment structure to adjust the relative position of the speaker 340 on the overall structure of the earphone 300 .
- making the holding portion 3122 a multi-segment structure allows the speaker 340 to be as close to the ear canal as possible when the earphone 300 is in the wearing state, while not covering the external auditory canal, so as to improve the user's listening experience when using the earphone 300.
- the retaining portion 3122 may include a first retaining segment 3122-1, a second retaining segment 3122-2, and a third retaining segment 3122-3 connected end to end in sequence.
- one end of the first holding section 3122-1 away from the second holding section 3122-2 is connected to the connecting portion 3121, and the second holding section 3122-2 is folded back relative to the first holding section 3122-1, so that there is a space between the second holding section 3122-2 and the first holding section 3122-1.
- a U-shaped structure may be formed between the second holding section 3122-2 and the first holding section 3122-1.
- the third holding section 3122-3 is connected to an end of the second holding section 3122-2 facing away from the first holding section 3122-1, and the third holding section 3122-3 can be used for arranging structural components such as the speaker 340.
- by setting the folded-back length of the second holding section 3122-2 relative to the first holding section 3122-1 (the length of the second holding section 3122-2 along the Y-axis direction), etc., the position of the third holding section 3122-3 in the overall structure of the earphone 300 can be adjusted, so as to adjust the position or distance of the speaker 340 in the third holding section 3122-3 relative to the user's ear canal.
- the distance between the second holding section 3122-2 and the first holding section 3122-1, and the folded-back length of the second holding section 3122-2 relative to the first holding section 3122-1, can be set correspondingly according to the ear features (eg, shape, size, etc.) of different users, and are not specifically limited here.
- the retaining portion 3122 may include a first retaining segment 3122-1, a second retaining segment 3122-2, and a third retaining segment 3122-3 connected end to end in sequence.
- one end of the first holding section 3122-1 facing away from the second holding section 3122-2 is connected to the connecting portion 3121, and the second holding section 3122-2 is bent relative to the first holding section 3122-1, so that there is a gap between the third holding section 3122-3 and the first holding section 3122-1.
- the third holding section 3122-3 may be used to set structural members such as the speaker 340.
- by setting the bending length of the second holding section 3122-2 relative to the first holding section 3122-1 (the length of the second holding section 3122-2 along the Z-axis direction), etc., the position of the third holding section 3122-3 in the overall structure of the earphone 300 can be adjusted, so as to adjust the position or distance of the speaker 340 relative to the user's ear canal.
- the gap between the third holding section 3122-3 and the first holding section 3122-1, and the bending length of the second holding section 3122-2 relative to the first holding section 3122-1, may be set correspondingly according to the ear features (eg, shape, size, etc.) of different users, and are not specifically limited here.
- FIG. 10 is a structural diagram of an ear-facing side of an exemplary earphone according to some embodiments of the present application.
- the side of the holding part 3122 facing the ear may be provided with a sound outlet 301 , and the target signal output by the speaker 340 may be transmitted to the user's ear through the sound outlet 301 .
- the side of the retaining portion 3122 facing the ear portion may include a first region 3122A and a second region 3122B, and the second region 3122B is farther away from the connecting portion 3121 than the first region 3122A, that is, the second region 3122B may be located at the free end of the holding portion 3122 away from the connecting portion 3121 .
- the first region 3122A may be provided with the sound outlet 301, and the second region 3122B is convex toward the ear compared with the first region 3122A, so that the second region 3122B contacts the ear and the sound outlet 301 is spaced from the ear in the wearing state.
- the free end of the holding portion 3122 may be configured as a convex hull structure; on the side of the holding portion 3122 close to the user's ear, the convex hull structure protrudes outward (ie, toward the user's ear) relative to that side surface. Since the speaker 340 produces sound (eg, the target signal) that is transmitted to the ear through the sound outlet 301, the convex hull structure can prevent the ear from blocking the sound outlet 301, which would weaken the output sound or even prevent it from being output.
- the protrusion height of the convex hull structure may be represented by the maximum protrusion height of the second region 3122B relative to the first region 3122A.
- the maximum raised height of the second region 3122B relative to the first region 3122A may be greater than or equal to 1 mm.
- the maximum raised height of the second region 3122B relative to the first region 3122A may be greater than or equal to 0.8 mm.
- the maximum raised height of the second region 3122B relative to the first region 3122A may be greater than or equal to 0.5 mm.
- in some embodiments, by setting the structure of the holding portion 3122, when the user wears the earphone 300, the distance between the sound outlet 301 and the user's ear canal is less than 10 mm. In some embodiments, by setting the structure of the holding portion 3122, when the user wears the earphone 300, the distance between the sound outlet 301 and the user's ear canal is less than 8 mm. In some embodiments, this distance is less than 7 mm. In some embodiments, this distance is less than 6 mm.
- the region raised toward the ear compared with the first region 3122A may also be located in other areas of the holding portion 3122, such as the area between the sound outlet 301 and the connecting portion 3121.
- the orthographic projection of the sound outlet 301 on the ear along the thickness direction of the holding portion 3122 may at least partially fall within the concha cavity and/or the cymba concha.
- the holding portion 3122 may be located on the side of the ear canal opening close to the top of the user's head and contact the antihelix, with at least part of it falling within the cymba concha.
- FIG. 11 is a structural diagram of a side of an exemplary earphone facing away from the ear according to some embodiments of the present application.
- FIG. 12 is a top view of an exemplary earphone shown in accordance with some embodiments of the present application.
- a pressure relief hole 302 may be provided on the side of the holding portion 3122 along the vertical axis (Z axis) and close to the top of the user's head.
- the opening direction of the pressure relief hole 302 may be toward the top of the user's head, and there may be a specific angle between the opening direction of the pressure relief hole 302 and the vertical axis (Z-axis), so that the pressure relief hole 302 is farther away from the user's ear canal and it is therefore difficult for the user to hear the sound output through the pressure relief hole 302 and transmitted to the user's ear.
- the included angle between the opening direction of the pressure relief hole 302 and the vertical axis (Z axis) may be 0° to 10°. In some embodiments, the included angle between the opening direction of the pressure relief hole 302 and the vertical axis (Z axis) may be 0° to 8°. In some embodiments, the included angle between the opening direction of the pressure relief hole 302 and the vertical axis (Z axis) may be 0° to 5°.
- by setting the structure of the holding portion 3122, the distance between the pressure relief hole 302 and the user's ear canal can be kept within an appropriate range when the user wears the earphone 300. In some embodiments, when the user wears the earphone 300, the distance between the pressure relief hole 302 and the user's ear canal may be 5 mm to 20 mm. In some embodiments, this distance may be 5 mm to 18 mm. In some embodiments, this distance may be 5 mm to 15 mm. In some embodiments, this distance may be 6 mm to 14 mm. In some embodiments, this distance may be 8 mm to 10 mm.
- FIG. 13 is a schematic cross-sectional structural diagram of an exemplary earphone according to some embodiments of the present application.
- FIG. 13 shows the acoustic structure formed by the holding part (for example, holding part 3122 ) of the earphone (for example, earphone 300 ), including: sound outlet 301 , pressure relief hole 302 , sound adjustment hole 303 , front cavity 304 and rear cavity 305.
- the holding portion 3122 may respectively form a front cavity 304 and a rear cavity 305 on opposite sides of the speaker 340 .
- the front cavity 304 communicates with the outside of the earphone 300 through the sound outlet 301, and outputs sound (eg, target signal, audio signal, etc.) to the ear.
- the rear cavity 305 communicates with the outside of the earphone 300 through a pressure relief hole 302 , and the pressure relief hole 302 is farther away from the user's ear canal than the sound outlet hole 301 .
- the pressure relief hole 302 can allow air to freely enter and exit the rear cavity 305, so that changes in air pressure in the front cavity 304 are blocked by the rear cavity 305 as little as possible, thereby improving the quality of the sound output to the ear through the sound outlet 301.
- the included angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction (X-axis direction) of the holding portion 3122 may be 0° to 50°. In some embodiments, the included angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction of the holding portion 3122 may be 5° to 45°. In some embodiments, the included angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction of the holding portion 3122 may be 10° to 40°. In some embodiments, the included angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction of the holding portion 3122 may be 15° to 35°.
- the angle between the connection line between the pressure relief hole 302 and the sound outlet hole 301 and the thickness direction of the holding portion 3122 may refer to the angle between the line connecting the center of the pressure relief hole 302 and the center of the sound outlet hole 301 and the thickness direction of the holding portion 3122 .
- the sound outlet hole 301 and the pressure relief hole 302 can be regarded as two sound sources that radiate sound outward, and the radiated sounds have the same amplitude and opposite phases.
- the two sound sources can approximately form an acoustic dipole (or a near-dipole), so the sound radiated outward has obvious directivity, forming a figure-"8"-shaped sound radiation area.
- the radiated sound is largest in the direction of the line connecting the pressure relief hole 302 and the sound outlet hole 301 , is obviously reduced in other directions, and is smallest at the perpendicular bisector of that connecting line.
- the acoustic dipole formed by the pressure relief hole 302 and the sound outlet hole 301 can reduce the sound leakage of the speaker 340 .
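- The figure-"8" directivity of such an anti-phase source pair can be illustrated numerically. The sketch below is not part of the patent; the hole spacing, frequency, and all other values are assumptions. It computes the far-field pressure of two opposite-phase point sources and confirms that radiation is largest along the connecting line and vanishes at its perpendicular bisector:

```python
import numpy as np

# Hypothetical sketch: far-field pressure of two anti-phase point sources
# (standing in for the sound outlet and the pressure relief hole).
c = 343.0          # speed of sound, m/s
f = 1000.0         # frequency, Hz
k = 2 * np.pi * f / c
d = 0.01           # assumed 10 mm hole spacing

theta = np.linspace(0, np.pi, 181)   # angle from the line joining the holes
# The path-length difference produces a phase offset; with opposite source
# phases, |p| is proportional to |sin(k*d*cos(theta)/2)| -- a figure-"8".
p = np.abs(np.exp(1j * k * d * np.cos(theta) / 2)
           - np.exp(-1j * k * d * np.cos(theta) / 2))

i_axis = 0                                     # along the connecting line
i_mid = np.argmin(np.abs(theta - np.pi / 2))   # perpendicular bisector
print(p[i_axis] > p[i_mid])       # radiation is largest on-axis
print(np.isclose(p[i_mid], 0.0))  # acoustic zero at the mid-perpendicular
```

Placing a microphone near the acoustic zero (the mid-perpendicular direction) is what motivates the first target area discussed below.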
- the holding portion 3122 may further be provided with a sound adjustment hole 303 communicating with the rear cavity 305 . The sound adjustment hole 303 may be used to destroy the high-pressure area of the sound field in the rear cavity 305 , so that the wavelength of the standing wave in the rear cavity 305 is shortened and the resonance frequency of the sound output to the outside of the earphone 300 through the pressure relief hole 302 is as high as possible, eg, greater than 4 kHz, thereby reducing the sound leakage of the speaker 340 .
- the sound adjustment hole 303 and the pressure relief hole 302 may be located on opposite sides of the speaker 340 , for example, arranged opposite to each other in the Z-axis direction, so as to destroy the high pressure region of the sound field in the rear cavity 305 to the greatest extent.
- the sound adjustment hole 303 may be farther away from the sound outlet hole 301 than the pressure relief hole 302 , so as to increase the distance between the sound adjustment hole 303 and the sound outlet hole 301 as much as possible, thereby reducing the influence of the adjusted sound on the sound output from the sound outlet hole 301 .
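- The effect of shortening the standing wave can be illustrated with the half-wave resonance formula for a simple tube-like cavity, f = c / (2L). This is a simplified sketch, not the patent's acoustic model; the cavity lengths are assumed values:

```python
# Illustrative sketch (assumed values): modeling the rear cavity as a tube,
# the lowest standing-wave resonance is f = c / (2 * L).  Shortening the
# effective acoustic length L (which is what breaking up the high-pressure
# region does) pushes the resonance frequency higher.
c = 343.0  # speed of sound, m/s

def resonance_hz(length_m: float) -> float:
    """Lowest half-wave resonance of a tube of the given length."""
    return c / (2.0 * length_m)

print(round(resonance_hz(0.043)))  # ~4 kHz for an assumed 43 mm cavity
print(round(resonance_hz(0.020)))  # shorter effective length -> higher f
```

A shorter effective cavity length thus yields a resonance above the 4 kHz target mentioned above.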
- the target signal output by the speaker 340 through the sound outlet 301 and/or the pressure relief hole 302 will also be picked up by the first microphone array 320 , and the target signal will affect the processor 330's estimation of the sound field at the target spatial position; that is, the target signal output by the speaker 340 is not expected to be picked up. In this case, in order to reduce the influence of the target signal output by the speaker 340 on the first microphone array 320 , the first microphone array 320 may be set in a first target area where the sound output by the speaker 340 is as small as possible.
- the first target area may be a position of or near the acoustic zero point of the radiated sound field of the acoustic dipole formed by the pressure relief hole 302 and the sound outlet hole 301 .
- the first target area may be the area G shown in FIG. 10 .
- the area G is located in front of the sound outlet 301 and/or the pressure relief hole 302 (the front here refers to the direction the user faces), that is, the area G is closer to the user's eyes.
- the region G may be a partial region on the connecting portion 3121 of the fixing structure 310 . That is, the first microphone array 320 may be located at the connection part 3121 .
- the first microphone array 320 may be located at a position where the connecting part 3121 is close to the holding part 3122 .
- the area G may also be located behind the sound outlet 301 and/or the pressure relief hole 302 ("behind" here refers to the direction opposite to the direction the user faces).
- the region G may be located on the end of the holding portion 3122 away from the connecting portion 3121 .
- the relative position between the first microphone array 320 and the sound outlet hole 301 and the pressure relief hole 302 may be described with respect to the location of any microphone in the first microphone array 320 .
- the line connecting the first microphone array 320 and the sound outlet hole 301 and the line connecting the sound outlet hole 301 and the pressure relief hole 302 form a first included angle, and the line connecting the first microphone array 320 and the pressure relief hole 302 and the line connecting the sound outlet hole 301 and the pressure relief hole 302 form a second included angle.
- the difference between the first included angle and the second included angle may not be greater than 30°.
- the difference between the first included angle and the second included angle may be no greater than 25°.
- the difference between the first included angle and the second included angle may be no greater than 20°.
- the difference between the first included angle and the second included angle may not be greater than 15°.
- the difference between the first included angle and the second included angle may not be greater than 10°.
- the difference between the first distance and the second distance may not be greater than 6 mm. In some embodiments, the difference between the first distance and the second distance may be no greater than 5 millimeters. In some embodiments, the difference between the first distance and the second distance may be no greater than 4 millimeters. In some embodiments, the difference between the first distance and the second distance may be no greater than 3 millimeters.
- the positional relationship between the first microphone array 320 and the sound outlet hole 301 and the pressure relief hole 302 described herein may refer to the positional relationship between the center of any microphone in the first microphone array 320 and the centers of the sound outlet hole 301 and the pressure relief hole 302 .
- the connection line between the first microphone array 320 and the sound outlet hole 301 and the connection line between the sound outlet hole 301 and the pressure relief hole 302 forming a first included angle may refer to the angle formed by the line connecting the center of any microphone in the first microphone array 320 and the center of the sound outlet hole 301 and the line connecting the center of the sound outlet hole 301 and the center of the pressure relief hole 302 .
- the first distance between the first microphone array 320 and the sound outlet hole 301 may refer to the distance between any microphone in the first microphone array 320 and the center of the sound outlet hole 301 .
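- The included-angle and distance constraints above can be checked numerically for a candidate microphone position. The coordinates below are purely hypothetical, chosen for illustration:

```python
import math

# Hypothetical coordinate sketch (units: mm): check that a candidate
# microphone position keeps the first/second included angles and the
# first/second distances close, as the embodiments above require.
def angle_between(v1, v2):
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

outlet = (0.0, 0.0)   # sound outlet hole 301 (assumed coordinates)
relief = (10.0, 0.0)  # pressure relief hole 302
mic = (5.0, 8.0)      # candidate microphone of the first microphone array

axis = (relief[0] - outlet[0], relief[1] - outlet[1])
first_angle = angle_between((mic[0] - outlet[0], mic[1] - outlet[1]), axis)
second_angle = angle_between((mic[0] - relief[0], mic[1] - relief[1]),
                             (-axis[0], -axis[1]))
d1 = math.dist(mic, outlet)   # first distance
d2 = math.dist(mic, relief)   # second distance

# A point near the perpendicular bisector satisfies both constraints.
print(abs(first_angle - second_angle) <= 30.0)
print(abs(d1 - d2) <= 6.0)
```

A symmetric position on the perpendicular bisector makes both differences zero, matching the acoustic-zero placement described above.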
- when the first microphone array 320 is located at the acoustic zero of the acoustic dipole formed by the sound outlet 301 and the pressure relief hole 302 , the first microphone array 320 is minimally affected by the target signal output by the speaker 340 and can thus more accurately pick up the ambient noise near the user's ear canal. Further, the processor 330 may more accurately estimate the ambient noise at the user's ear canal based on the ambient noise picked up by the first microphone array 320 and generate a noise reduction signal, thereby better implementing the active noise reduction of the earphone 300 . For a specific description of implementing the active noise reduction of the earphone 300 by using the first microphone array 320, reference may be made to FIG. 14 to FIG. 16 and their related descriptions.
- FIG. 14 is an exemplary flowchart of noise reduction of an earphone according to some embodiments of the present application.
- process 1400 may be performed by headset 300 .
- process 1400 may include:
- In step 1410, ambient noise is picked up. In some embodiments, this step may be performed by the first microphone array 320 .
- ambient noise may refer to a combination of various external sounds (eg, traffic noise, industrial noise, building construction noise, social noise) in the user's environment.
- the first microphone array 320 may be located on the body portion 312 of the earphone 300 near the user's ear canal for picking up ambient noise near the user's ear canal. Further, the first microphone array 320 can convert the picked-up environmental noise signal into an electrical signal and transmit it to the processor 330 for processing.
- In step 1420, the noise at the target spatial location is estimated based on the picked-up ambient noise. In some embodiments, this step may be performed by processor 330 .
- the processor 330 may perform signal separation on the picked-up ambient noise.
- the ambient noise picked up by the first microphone array 320 may include various sounds.
- the processor 330 may perform signal analysis on the ambient noise picked up by the first microphone array 320 to separate various sounds.
- the processor 330 can adaptively adjust the parameters of the filter according to the statistical distribution characteristics and structural characteristics of various sounds in different dimensions such as space, time domain, and frequency domain, estimate the parameter information of each sound signal in the environmental noise, and complete the signal separation process according to the parameter information of each sound signal.
- the statistical distribution characteristics of noise may include probability distribution density, power spectral density, autocorrelation function, probability density function, variance, mathematical expectation, and the like.
- the structured features of noise may include noise distribution, noise intensity, global noise intensity, noise rate, etc., or any combination thereof.
- the global noise intensity may refer to an average noise intensity or a weighted average noise intensity.
- the noise rate may refer to the degree of dispersion of the noise distribution.
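- As a rough illustration of the statistical features listed above, the following sketch (using an assumed synthetic noise signal) computes the mathematical expectation, variance, autocorrelation, a periodogram power-spectral-density estimate, and an RMS-based global noise intensity:

```python
import numpy as np

# Assumed synthetic signal: a 100 Hz component buried in Gaussian noise.
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
noise = 0.5 * np.sin(2 * np.pi * 100 * t) + rng.normal(0, 0.2, fs)

mean = noise.mean()                       # mathematical expectation
var = noise.var()                         # variance
autocorr = np.correlate(noise, noise, mode="full")[fs - 1:] / fs
psd = np.abs(np.fft.rfft(noise)) ** 2 / (fs * fs)  # crude periodogram
global_intensity = np.sqrt(np.mean(noise ** 2))    # RMS as avg. intensity

peak_bin = np.argmax(psd[1:]) + 1
print(peak_bin * fs / len(noise))   # dominant component near 100 Hz
```

Features like these are what a separation filter could use to distinguish the structured 100 Hz component from the broadband background.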
- the ambient noise picked up by the first microphone array 320 may include a first signal, a second signal, and a third signal.
- the processor 330 obtains the differences among the first signal, the second signal, and the third signal in space (eg, where the signals are located), the time domain (eg, delay), and the frequency domain (eg, amplitude, phase), and separates the first signal, the second signal, and the third signal according to the differences in these three dimensions, obtaining relatively pure first, second, and third signals.
- the processor 330 may update the environmental noise according to the parameter information (eg, frequency information, phase information, amplitude information) of the separated signal.
- the processor 330 may determine that the first signal is the user's call sound according to the parameter information of the first signal, and remove the first signal from the ambient noise to update the ambient noise.
- the removed first signal may be transmitted to the far end of the call.
- the target spatial location is a location determined based on the first microphone array 320 at or near the user's ear canal.
- the target spatial location may refer to a spatial location within a certain distance (eg, 2 mm, 3 mm, 5 mm, etc.) of the user's ear canal.
- the target spatial location is closer to the user's ear canal than any microphone in the first microphone array 320 .
- the target spatial position is related to the number of microphones in the first microphone array 320 and their distribution positions relative to the user's ear canal. By adjusting the number of microphones in the first microphone array 320 and/or their distribution positions relative to the user's ear canal, the target spatial position can be adjusted.
- estimating the noise at the target spatial location based on the picked-up environmental noise may further include determining one or more spatial noise sources related to the picked-up environmental noise, and estimating the noise at the target spatial location based on the spatial noise sources.
- the ambient noise picked up by the first microphone array 320 may come from different azimuths and different types of spatial noise sources.
- the parameter information (eg, frequency information, phase information, and amplitude information) corresponding to each spatial noise source is different.
- the processor 330 may perform signal separation and extraction on the noise at the target spatial location according to the statistical distribution and structural features of different types of noise in different dimensions (eg, spatial domain, time domain, frequency domain, etc.), so as to obtain different types of noise (eg, different frequencies, different phases, etc.), and estimate the parameter information (eg, amplitude information, phase information, etc.) corresponding to each noise.
- the processor 330 may further determine the overall parameter information of the noise at the target spatial position according to the parameter information corresponding to different types of noise at the target spatial position. More information on estimating noise at a target spatial location based on one or more spatial noise sources can be found elsewhere in this specification, eg, FIG. 15 and its corresponding description.
- estimating noise at the target spatial location based on the picked-up ambient noise may further include constructing a virtual microphone based on the first microphone array 320 and estimating noise at the target spatial location based on the virtual microphone.
- for estimating noise at a target spatial location based on a virtual microphone, reference may be made to other places in this specification, such as FIG. 16 and its corresponding description.
- In step 1430, a noise reduction signal is generated based on the noise at the target spatial location. In some embodiments, this step may be performed by processor 330 .
- the processor 330 may generate a noise reduction signal based on the parameter information (eg, amplitude information, phase information, etc.) of the noise at the target spatial location obtained in step 1420 .
- the phase difference between the phase of the noise reduction signal and the phase of the noise at the target spatial location may be less than or equal to a preset phase threshold.
- the preset phase threshold may be in the range of 90-180 degrees.
- the preset phase threshold can be adjusted within this range according to user needs. For example, when the user does not want to be disturbed by the sound of the surrounding environment, the preset phase threshold may be a larger value, such as 180 degrees, that is, the phase of the noise reduction signal is opposite to the phase of the noise at the target spatial location.
- the preset phase threshold may be a small value, such as 90 degrees. It should be noted that the more ambient sounds the user wishes to receive, the closer the preset phase threshold may be to 90 degrees, and the less ambient sounds the user wishes to receive, the closer the preset phase threshold may be to 180 degrees.
- when the phase of the noise reduction signal has a certain phase relationship (eg, opposite phase) with the noise at the target spatial position, the difference between the amplitude of the noise at the target spatial position and the amplitude of the noise reduction signal may be less than or equal to a preset amplitude threshold.
- the preset amplitude threshold may be a small value, such as 0 dB, that is, the amplitude of the noise reduction signal is equal to the amplitude of the noise at the target spatial position.
- the preset amplitude threshold may be a relatively large value, for example, approximately equal to the amplitude of the noise at the target spatial position.
- the more the user wishes to receive the sound of the surrounding environment, the closer the preset amplitude threshold can be to the amplitude of the noise at the target spatial position, and the less the user wishes to receive the sound of the surrounding environment, the closer the preset amplitude threshold can be to 0 dB.
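- The role of the preset phase threshold can be sketched as follows. A single sinusoid stands in for the estimated noise at the target spatial position; the tone frequency and amplitudes are assumptions:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# Assumed stand-in for the estimated noise at the target spatial position.
noise = 0.3 * np.sin(2 * np.pi * 440 * t)

def anti_noise(amplitude, freq, phase_offset_deg):
    # Noise-reduction signal whose phase differs from the noise by the
    # preset phase threshold.
    return amplitude * np.sin(2 * np.pi * freq * t
                              + np.radians(phase_offset_deg))

rms = lambda x: np.sqrt(np.mean(x ** 2))

residual_180 = noise + anti_noise(0.3, 440, 180)  # opposite phase
residual_150 = noise + anti_noise(0.3, 440, 150)  # lets some sound through

print(rms(residual_180) < 1e-9)        # near-total cancellation
print(rms(residual_150) < rms(noise))  # attenuated but not eliminated
```

With a 180-degree threshold the residual vanishes; at smaller offsets more ambient sound reaches the user, mirroring the trade-off described above.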
- the speaker 340 may output the target signal based on the noise reduction signal generated by the processor 330 .
- the speaker 340 can convert a noise reduction signal (eg, an electrical signal) into a target signal (ie, a vibration signal) based on its vibration component, and the target signal is transmitted to the user's ear canal through the sound outlet 301 on the earphone 300 , where the target signal and the ambient noise cancel each other out.
- the speaker 340 may output target signals corresponding to the plurality of spatial noise sources based on the noise reduction signal.
- the speaker 340 may output a first target signal with approximately opposite phase and approximately equal amplitude to the noise of the first spatial noise source to cancel the noise of the first spatial noise source, and a second target signal with approximately opposite phase and approximately equal amplitude to the noise of the second spatial noise source to cancel the noise of the second spatial noise source.
- when the loudspeaker 340 is an air conduction loudspeaker, the position where the target signal and the ambient noise cancel may be the target spatial position.
- since the distance between the target spatial position and the user's ear canal is small, the noise at the target spatial position can be approximately regarded as the noise at the user's ear canal. Therefore, when the noise reduction signal and the noise at the target spatial position cancel each other out, the ambient noise transmitted to the user's ear canal is approximately eliminated, realizing the active noise reduction of the earphone 300 .
- when the loudspeaker 340 is a bone conduction loudspeaker, the position where the target signal and the ambient noise cancel may be the basilar membrane; the target signal and the ambient noise cancel at the user's basilar membrane, thereby realizing active noise reduction of the earphone 300 .
- the earphone 300 may further include one or more sensors, which may be located anywhere on the earphone 300 , for example, the hook portion 311 and/or the connecting portion 3121 and/or the holding portion 3122 .
- One or more sensors may be electrically connected to other components of headset 300 (eg, processor 330).
- one or more sensors may be used to obtain physical location and/or motion information of the headset 300 .
- the one or more sensors may include an Inertial Measurement Unit (IMU), a Global Positioning System (GPS), a radar, and the like.
- the motion information may include motion trajectory, motion direction, motion speed, motion acceleration, motion angular velocity, motion-related time information (eg, motion start time, end time), etc., or any combination thereof.
- the IMU may include a Micro Electro Mechanical System (MEMS).
- the microelectromechanical system may include multi-axis accelerometers, gyroscopes, magnetometers, etc., or any combination thereof.
- the IMU may be used to detect the physical location and/or motion information of the headset 300 to enable control of the headset 300 based on the physical location and/or motion information.
- the processor 330 may update the noise at the target spatial location and the sound field estimate at the target spatial location based on the motion information (eg, motion trajectory, motion direction, motion speed, motion acceleration, motion angular velocity, motion-related time information) of the earphone 300 acquired by one or more sensors of the earphone 300 . Further, the processor 330 may generate a noise reduction signal based on the updated noise at the target spatial location and the sound field estimate at the target spatial location.
- One or more sensors can record the motion information of the earphone 300, so the processor 330 can quickly update the noise reduction signal. This improves the noise tracking performance of the earphone 300, allowing the noise reduction signal to eliminate environmental noise more accurately and further improving the noise reduction effect and the user's listening experience.
- FIG. 15 is an exemplary flowchart for estimating noise at a spatial location of a target, according to some embodiments of the present application. As shown in Figure 15, process 1500 may include:
- In step 1510, one or more spatial noise sources related to the ambient noise picked up by the first microphone array 320 are determined. In some embodiments, this step may be performed by processor 330 .
- determining a spatial noise source refers to determining information related to the spatial noise source, such as the location of the spatial noise source (including the orientation of the spatial noise source, the distance between the spatial noise source and the target spatial location, etc.), the phase of the spatial noise source, the amplitude of the spatial noise source, and so on.
- a spatial noise source related to ambient noise refers to a noise source whose sound waves can be delivered to the user's ear canal (eg, a target spatial location) or near the user's ear canal.
- the spatial noise sources may be noise sources in different directions (eg, front, rear, etc.) of the user's body. For example, there is crowd noise in front of the user's body and vehicle whistle noise to the left of the user's body.
- the spatial noise sources include crowd noise sources in front of the user's body and vehicle whistle noise sources to the left of the user's body.
- the first microphone array 320 can pick up spatial noises in all directions of the user's body, convert the spatial noises into electrical signals, and transmit them to the processor 330.
- the processor 330 can analyze the electrical signals corresponding to the spatial noises to obtain parameter information (eg, frequency information, amplitude information, phase information, etc.) of the picked-up spatial noise in various directions.
- the processor 330 determines the information of the spatial noise sources in various directions according to the parameter information of the spatial noise in various directions, for example, the orientation of the spatial noise source, the distance of the spatial noise source, the phase of the spatial noise source, and the amplitude of the spatial noise source.
- the processor 330 may determine the source of the spatial noise through a noise localization algorithm based on the spatial noise picked up by the first microphone array 320 .
- the noise localization algorithm may include one or more of a beamforming algorithm, a super-resolution spatial spectrum estimation algorithm, a time difference of arrival algorithm (also referred to as a delay estimation algorithm), and the like.
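- The time-difference-of-arrival (delay estimation) idea can be sketched with two simulated microphone signals: cross-correlation recovers the relative delay, from which a source direction could be inferred. The sampling rate, signal, and delay are assumed values:

```python
import numpy as np

# Sketch of delay estimation: the same noise reaches two microphones with
# a relative delay; cross-correlation recovers that delay.
rng = np.random.default_rng(1)
fs = 16000
src = rng.normal(0, 1, 2048)   # assumed broadband noise source
true_delay = 7                 # samples of inter-microphone delay

mic1 = src
mic2 = np.concatenate([np.zeros(true_delay), src[:-true_delay]])

# Full cross-correlation; the peak lag equals the propagation delay.
corr = np.correlate(mic2, mic1, mode="full")
est_delay = np.argmax(corr) - (len(mic1) - 1)
print(est_delay)   # recovers the 7-sample delay
```

Combined with the microphone spacing and the speed of sound, such a delay translates into an angle of arrival for the spatial noise source.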
- the processor 330 may divide the picked-up environmental noise into multiple frequency bands according to a specific frequency bandwidth (for example, every 500 Hz as a frequency band), where each frequency band may correspond to a different frequency range, and determine the spatial noise source corresponding to each of at least one frequency band.
- the processor 330 may perform signal analysis on the frequency bands divided by the environmental noise, obtain parameter information of the environmental noise corresponding to each frequency band, and determine the spatial noise source corresponding to each frequency band according to the parameter information.
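- The band-wise processing described above can be sketched as follows; the 500 Hz bandwidth follows the example in the text, while the signal content is an assumption for illustration:

```python
import numpy as np

# Sketch: divide picked-up noise into fixed-width frequency bands so a
# spatial noise source can be determined per band.
fs = 8000
t = np.arange(fs) / fs
# Assumed content: a strong 300 Hz tone plus a weaker 1700 Hz tone.
noise = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 1700 * t)

spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), 1 / fs)
band_width = 500.0

band_energy = {}
for lo in np.arange(0, fs / 2, band_width):
    mask = (freqs >= lo) & (freqs < lo + band_width)
    band_energy[(lo, lo + band_width)] = np.sum(np.abs(spectrum[mask]) ** 2)

top_band = max(band_energy, key=band_energy.get)
print(top_band)   # the 0-500 Hz band holds the dominant 300 Hz component
```

Each band's parameter information could then be analyzed separately to locate the noise source dominating that band.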
- In step 1520, the noise at the target spatial location is estimated based on the spatial noise sources. In some embodiments, this step may be performed by processor 330 . As described herein, estimating the noise at the target spatial position refers to estimating parameter information of the noise at the target spatial position, such as frequency information, amplitude information, phase information, and the like.
- the processor 330 may estimate, based on the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the spatial noise sources located in various directions of the user's body obtained in step 1510, the parameter information of the noise transmitted by each spatial noise source to the target spatial position, so as to estimate the noise at the target spatial position.
- the processor 330 may estimate, according to the position information, frequency information, phase information, or amplitude information of the spatial noise source in a certain azimuth (eg, the second azimuth), the frequency information, phase information, or amplitude information of that source's noise when transmitted to the target spatial position. Further, the processor 330 may estimate the noise at the target spatial position based on the frequency information, phase information, or amplitude information of the spatial noise sources in the first azimuth and the second azimuth when transmitted to the target spatial position.
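- A minimal sketch of propagating localized sources to the target spatial position, assuming spherical (1/r) spreading and a simple phase delay; the source positions, strengths, and frequency are hypothetical:

```python
import numpy as np

# Sketch: given localized spatial noise sources (position, amplitude,
# phase), estimate the noise phasor at the target spatial position.
c = 343.0
f = 500.0
k = 2 * np.pi * f / c
target = np.array([0.0, 0.0])   # target spatial position

# (position_xy_m, source_amplitude, source_phase_rad) -- assumed values
sources = [
    (np.array([2.0, 0.0]), 1.0, 0.0),    # eg, crowd noise in front
    (np.array([0.0, -3.0]), 0.8, 0.5),   # eg, vehicle horn to one side
]

total = 0j
for pos, amp, phase in sources:
    r = np.linalg.norm(pos - target)
    # Spherical spreading (amp/r) plus propagation phase delay (k*r).
    total += (amp / r) * np.exp(1j * (phase - k * r))

print(abs(total))       # estimated noise amplitude at the target position
print(np.angle(total))  # and its phase
```

The summed phasor gives the amplitude and phase information from which a noise reduction signal could be generated.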
- the processor 330 may estimate noise information for the target spatial location using virtual microphone techniques or other methods.
- the processor 330 may extract the parameter information of the noise of the spatial noise source from the frequency response curve of the spatial noise source picked up by the microphone array through a feature extraction method.
- the method for extracting the parameter information of the noise of the spatial noise source may include, but is not limited to, Principal Components Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Singular Value Decomposition (SVD), and so on.
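- As an illustration of one of the listed methods, the sketch below performs PCA via singular value decomposition on a synthetic matrix of frequency-response observations; all data are assumptions:

```python
import numpy as np

# Sketch of PCA-style feature extraction on frequency-response
# observations, using numpy's SVD.  Data are synthetic assumptions.
rng = np.random.default_rng(2)
n_obs, n_freqs = 50, 128
base = np.sin(np.linspace(0, np.pi, n_freqs))        # shared spectral shape
spectra = np.outer(rng.normal(1, 0.1, n_obs), base)  # assumed observations
spectra += rng.normal(0, 0.01, (n_obs, n_freqs))     # small measurement noise

centered = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)

print(explained[0] > 0.9)    # one component dominates this synthetic data
features = centered @ Vt[0]  # projection onto the first principal component
```

The low-dimensional projection retains the dominant spectral structure, which is the kind of compact parameter information the processor could work with.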
- process 1500 is only for example and illustration, and does not limit the scope of application of the present application.
- process 1500 may further include steps of locating the spatial noise source, extracting noise parameter information of the spatial noise source, and the like. Such corrections and changes are still within the scope of this application.
- FIG. 16 is an exemplary flowchart for estimating the sound field and noise of a target spatial location according to some embodiments of the present application. As shown in Figure 16, process 1600 may include:
- In step 1610, a virtual microphone is constructed based on the first microphone array 320. In some embodiments, this step may be performed by processor 330 .
- a virtual microphone may be used to represent or simulate the audio data that a microphone would collect if it were placed at the target spatial location. That is, the audio data obtained by the virtual microphone can approximate, or be equivalent to, the audio data that a physical microphone would collect if it were placed at the target spatial position.
- the virtual microphone may include a mathematical model.
- the mathematical model can embody the relationship between the noise or sound field estimate at the target spatial location, the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the ambient noise picked up by the microphone array (eg, the first microphone array 320 ), and the parameters of the microphone array.
- the parameters of the microphone array may include one or more of the arrangement of the microphone array, the spacing between the microphones, the number and position of the microphones in the microphone array, and the like.
- the mathematical model can be obtained by calculation based on the initial mathematical model and parameters of the microphone array and parameter information (eg, frequency information, amplitude information, phase information, etc.) of the sound (eg, ambient noise) picked up by the microphone array.
- the initial mathematical model may include parameters and model parameters corresponding to parameters of the microphone array and parameter information of ambient noise picked up by the microphone array.
- the parameters of the microphone array, the parameter information of the sound picked up by the microphone array, and the initial values of the model parameters are substituted into the initial mathematical model to obtain a predicted noise or sound field at the target spatial position.
- This predicted noise or sound field is then compared with the data (noise and sound field estimates) obtained by physical microphones placed at the target spatial location to make adjustments to the model parameters of the mathematical model.
- the mathematical model is obtained after multiple rounds of adjustment using a large amount of data (for example, parameters of the microphone array and parameter information of ambient noise picked up by the microphone array).
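- One way such a mathematical model could be calibrated is a least-squares fit between the array signals and a reference microphone temporarily placed at the target position. This is a sketch with assumed synthetic signals, not the patent's actual procedure:

```python
import numpy as np

# Sketch: during calibration a physical microphone sits at the target
# position; linear weights mapping the array signals to its signal are
# fit by least squares.  All signals here are synthetic assumptions.
rng = np.random.default_rng(3)
n_samples, n_mics = 4000, 4
array_sig = rng.normal(0, 1, (n_samples, n_mics))

true_w = np.array([0.5, 0.3, -0.2, 0.1])   # unknown acoustic mixing
target_sig = array_sig @ true_w + rng.normal(0, 0.01, n_samples)

w, *_ = np.linalg.lstsq(array_sig, target_sig, rcond=None)
virtual_sig = array_sig @ w                 # virtual-microphone output

err = np.sqrt(np.mean((virtual_sig - target_sig) ** 2))
print(err < 0.02)   # model reproduces the target-position signal
print(np.allclose(w, true_w, atol=0.01))
```

Once the weights are fit, the physical reference microphone can be removed and the model alone predicts the target-position signal.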
- the virtual microphone may include a machine learning model.
- the machine learning model may be obtained through training based on parameters of the microphone array and parameter information (eg, frequency information, amplitude information, phase information, etc.) of the sound (eg, ambient noise) picked up by the microphone array.
- the machine learning model is obtained by training an initial machine learning model (eg, a neural network model) using the parameters of the microphone array and the parameter information of the sound picked up by the microphone array as training samples.
- the parameters of the microphone array and the parameter information of the sound picked up by the microphone array can be input into the initial machine learning model, and the prediction results (for example, the noise and sound field estimation of the target spatial position) can be obtained.
- This prediction is then compared with data (noise and sound field estimates) obtained from physical microphones set up at the target spatial location to adjust the parameters of the initial machine learning model.
- the parameters of the initial machine learning model are optimized until the prediction results of the initial machine learning model are the same as, or approximately the same as, the data obtained by the physical microphone set at the target spatial location, at which point the trained machine learning model is obtained.
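- The training loop described above can be sketched with a deliberately tiny linear model trained by gradient descent against a simulated physical reference microphone; all signals and the model form are assumptions:

```python
import numpy as np

# Sketch of the training loop: array signals in, predicted target-position
# signal out, compared against a physical reference microphone until the
# error stops improving.  All data are synthetic assumptions.
rng = np.random.default_rng(4)
n_samples, n_mics = 2000, 4
x = rng.normal(0, 1, (n_samples, n_mics))     # microphone array input
ref = x @ np.array([0.4, -0.1, 0.3, 0.2])     # physical reference signal

w = np.zeros(n_mics)   # initial model parameters
lr = 0.1
for _ in range(200):
    pred = x @ w
    grad = 2 * x.T @ (pred - ref) / n_samples  # mean-squared-error gradient
    w -= lr * grad

final_mse = np.mean((x @ w - ref) ** 2)
print(final_mse < 1e-6)   # predictions match the reference microphone
```

A practical system would substitute a richer model (eg, a neural network) for the linear weights, but the compare-and-adjust loop is the same.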
- Virtual microphone technology makes it possible to avoid placing a physical microphone at a location where placement is difficult (eg, the target spatial location). For example, in order to keep the user's ears open without blocking the user's ear canal, a physical microphone cannot be set at the position of the user's ear hole (eg, the target spatial position). In this case, the microphone array can be set at a position close to the user's ear without blocking the ear canal, and a virtual microphone at the position of the user's ear hole can then be constructed through the microphone array.
- the virtual microphone may utilize physical microphones (eg, first microphone array 320 ) at a first location to predict sound data (eg, amplitude, phase, sound pressure, sound field, etc.) at a second location (eg, a target spatial location).
- the accuracy of the sound data of the second position (which may also be referred to as a specific position, eg, a target spatial position) predicted by the virtual microphone may depend on the distance between the virtual microphone and the physical microphones (the first microphone array 320), the type of virtual microphone (eg, mathematical model virtual microphone, machine learning virtual microphone), etc. For example, the closer the virtual microphone is to the physical microphones, the more accurate the sound data of the second position predicted by the virtual microphone.
- the sound data of the second position predicted by the machine learning virtual microphone is more accurate than that of the mathematical model virtual microphone.
- the position corresponding to the virtual microphone may be near the first microphone array 320 , or may be far away from the first microphone array 320 .
- step 1620, the noise and sound field of the target spatial location are estimated based on the virtual microphone. In some embodiments, this step may be performed by processor 330 .
- the processor 330 may, in real time, input the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the ambient noise picked up by the first microphone array, together with the parameters of the first microphone array (eg, the arrangement of the first microphone array, the spacing between individual microphones, and the number of microphones in the first microphone array), into the mathematical model as model parameters to estimate the noise and sound field at the target spatial location.
- the processor 330 may, in real time, input the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the ambient noise picked up by the first microphone array and the parameters of the array (eg, the arrangement of the first microphone array, the spacing between individual microphones, and the number of microphones in the first microphone array) into the machine learning model, and the noise and sound field at the target spatial location are estimated based on the output of the machine learning model.
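The free-field extrapolation idea behind a mathematical-model virtual microphone can be sketched as follows. This is a hypothetical illustration, not the model of this application: a single monopole noise source in free field is assumed, and the pressure measured at one physical microphone is re-scaled (1/r amplitude law) and re-phased (k·r) to predict the pressure at the target position near the ear. Positions, frequency, and amplitudes are invented.

```python
import numpy as np

c = 343.0                     # speed of sound, m/s
f = 500.0                     # frequency of the noise tone, Hz
k = 2 * np.pi * f / c         # wavenumber

source = np.array([0.50, 0.0])   # estimated noise-source position (m)
mic = np.array([0.05, 0.0])      # one physical array microphone
target = np.array([0.02, 0.0])   # virtual-microphone position at the ear

def pressure(pos, amp=1.0):
    """Complex pressure of a free-field monopole at position `pos`."""
    r = np.linalg.norm(source - pos)
    return amp / r * np.exp(-1j * k * r)

# The array measures the pressure at the physical mic; the model rescales
# the amplitude and shifts the phase to the target position.
p_mic = pressure(mic)
r_mic = np.linalg.norm(source - mic)
r_tgt = np.linalg.norm(source - target)
p_est = p_mic * (r_mic / r_tgt) * np.exp(-1j * k * (r_tgt - r_mic))

print(abs(p_est), abs(pressure(target)))
```

Under the stated free-field assumption the extrapolated value matches the model pressure at the target exactly; a real earphone would additionally account for scattering by the ear and head.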
- process 1600 is provided for illustration and description only, and does not limit the scope of application of the present application.
- for example, step 1620 may be divided into two steps that estimate the noise and the sound field of the target spatial location, respectively. Such modifications and changes remain within the scope of this application.
- the speaker 340 outputs a target signal based on the noise reduction signal. After the target signal and the ambient noise cancel each other, some sound signals near the user's ear canal may remain uncancelled; these may be residual ambient noise and/or residual target signal, so a certain amount of noise still exists at the user's ear canal.
- the earphone 100 shown in FIG. 1 and the earphone 300 shown in FIGS. 3 to 12 may further include a second microphone 360 .
- the second microphone 360 may be located on the body portion 312 (eg, the holding portion 3122).
- the second microphone 360 may be configured to pick up ambient noise and target signals.
- the number of the second microphones 360 may be one or more.
- the second microphone can be used to pick up the ambient noise and the target signal at the user's ear canal, so as to monitor the sound field at the user's ear canal after the target signal and the ambient noise are cancelled.
- when the number of the second microphones 360 is more than one, the multiple second microphones can be used to pick up the ambient noise and the target signal at the user's ear canal, and the relevant parameter information of the sound signals picked up by the multiple microphones can be used to estimate the noise at the user's ear canal by means such as averaging or weighting algorithms.
- when the number of the second microphones 360 is more than one, some of the microphones can be used to pick up the ambient noise and the target signal at the user's ear canal, while the remaining microphones serve as part of the first microphone array 320; in this case, the microphones of the first microphone array 320 and the second microphones 360 overlap or intersect.
- the second microphone 360 may be disposed in a second target area, and the second target area may be an area on the holding portion 3122 close to the user's ear canal.
- the second target area may be area H in FIG. 10 .
- the area H may be a partial area of the holding part 3122 close to the user's ear canal. That is, the second microphone 360 may be located at the holding part 3122.
- the region H may be a partial region in the first region 3122A on the side of the holding portion 3122 facing the user's ear.
- the second microphone 360 can be located near the user's ear canal and closer to it than the first microphone array 320, thereby ensuring that the sound signals it picks up (eg, residual ambient noise, residual target signal, etc.) are closer to the sound actually heard by the user; the processor 330 can then further update the noise reduction signal according to the sound signal picked up by the second microphone 360, so as to achieve a more ideal noise reduction effect.
- the position of the second microphone 360 on the holding part 3122 can be adjusted so that the distance between the second microphone 360 and the user's ear canal falls within a suitable range.
- the distance between the second microphone 360 and the user's ear canal may be less than 10 mm.
- the distance between the second microphone 360 and the user's ear canal may be less than 9 mm.
- the distance between the second microphone 360 and the user's ear canal may be less than 8 mm.
- the distance between the second microphone 360 and the user's ear canal may be less than 7 mm.
- since the second microphone 360 needs to pick up the target signal output by the speaker 340 through the sound outlet 301 and the residual target signal after cancellation with the ambient noise, the distance between the second microphone 360 and the sound outlet 301 should be set reasonably.
- the distance between the second microphone 360 and the sound exit hole 301 in the direction of the sagittal axis (Y axis) may be less than 10 mm.
- the distance between the second microphone 360 and the sound exit hole 301 in the direction of the sagittal axis (Y axis) may be less than 9 mm. In some embodiments, on the sagittal plane (YZ plane) of the user, the distance between the second microphone 360 and the sound exit hole 301 along the sagittal axis (Y axis) direction may be less than 8 mm. In some embodiments, on the sagittal plane (YZ plane) of the user, the distance between the second microphone 360 and the sound exit hole 301 in the direction of the sagittal axis (Y axis) may be less than 7 mm.
- the distance between the second microphone 360 and the sound exit hole 301 along the vertical axis (Z axis) direction may be 3 mm to 6 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the sound exit hole 301 along the vertical axis (Z axis) direction may be 2.5 mm to 5.5 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the sound exit hole 301 along the vertical axis (Z axis) direction may be 3 mm to 5 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the sound exit hole 301 along the vertical axis (Z axis) direction may be 3.5 mm to 4.5 mm.
- the distance between the second microphone 360 and the first microphone array 320 along the vertical axis (Z axis) direction may be 2 mm to 8 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 in the direction of the vertical axis (Z axis) may be 3 mm to 7 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 along the vertical axis (Z axis) direction may be 4 mm to 6 mm.
- the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) direction may be 2 mm to 20 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) may be 4 mm to 18 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) may be 5 mm to 15 mm.
- the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) may be 6 mm to 12 mm. In some embodiments, on the sagittal plane of the user, the distance between the second microphone 360 and the first microphone array 320 along the sagittal axis (Y-axis) direction may be 8 mm to 10 mm.
- the distance between the second microphone 360 and the first microphone array 320 in the direction of the coronal axis (X axis) may be less than 3 mm in the transverse plane (XY plane) of the user. In some embodiments, the distance between the second microphone 360 and the first microphone array 320 in the direction of the coronal axis (X axis) may be less than 2.5 millimeters in the cross-section (XY plane) of the user. In some embodiments, the distance between the second microphone 360 and the first microphone array 320 in the direction of the coronal axis (X axis) may be less than 2 millimeters in the cross-section (XY plane) of the user. It can be understood that the distance between the second microphone 360 and the first microphone array 320 may be the distance between the second microphone 360 and any microphone in the first microphone array 320 .
- the second microphone 360 is configured to pick up the ambient noise and the target signal. Further, the processor 330 can update the noise reduction signal based on the sound signal picked up by the second microphone 360, thereby further improving the active noise reduction effect of the earphone 300.
- Figure 17 is an exemplary flow diagram of updating a noise reduction signal according to some embodiments of the present application. As shown in Figure 17, process 1700 may include:
- step 1710 based on the sound signal picked up by the second microphone 360, the sound field at the user's ear canal is estimated.
- this step may be performed by processor 330 .
- the sound signal picked up by the second microphone 360 includes ambient noise and the target signal output by the speaker 340.
- these uncancelled sound signals may be residual ambient noise and/or residual target signal, so there is still a certain amount of noise in the user's ear canal after the ambient noise and the target signal are cancelled.
- the processor 330 may process the sound signal (eg, ambient noise, target signal) picked up by the second microphone 360 to obtain parameter information of the sound field at the user's ear canal, such as frequency information, amplitude information, and phase information, thereby achieving an estimation of the sound field at the user's ear canal.
- step 1720 the noise reduction signal is updated according to the sound field at the user's ear canal.
- step 1720 may be performed by processor 330 .
- the processor 330 may adjust the parameter information (eg, frequency information, amplitude information and/or phase information) of the noise reduction signal according to the parameter information of the sound field at the user's ear canal obtained in step 1710, so that the amplitude and frequency information of the updated noise reduction signal more closely match those of the environmental noise at the user's ear canal, and the phase of the updated noise reduction signal is more exactly opposite to that of the environmental noise at the user's ear canal, allowing the updated noise reduction signal to cancel the ambient noise more accurately.
- the microphone that picks up the sound field at the user's ear canal is not limited to the second microphone 360, and may also include other microphones, such as a third microphone, a fourth microphone, etc.; the relevant parameter information of the sound field picked up by the multiple microphones can be used to estimate the sound field at the user's ear canal by means such as averaging or weighting algorithms.
- the second microphone 360 may include a microphone that is closer to the user's ear canal than any microphone in the first microphone array 320 .
- the sound signal picked up by the first microphone array 320 is ambient noise
- the sound signal picked up by the second microphone 360 is the ambient noise and the target signal.
- the processor 330 may estimate the sound field at the user's ear canal according to the sound signal picked up by the second microphone 360 to update the noise reduction signal. The second microphone 360 needs to monitor the sound field at the user's ear canal after the noise reduction signal and the ambient noise are canceled.
- since the second microphone 360 includes a microphone closer to the user's ear canal than any microphone in the first microphone array 320, the sound field it picks up can more accurately characterize the sound actually heard by the user.
- estimating the sound signal heard by the user from the sound field picked up by the second microphone 360 and updating the noise reduction signal accordingly can further improve the noise reduction effect and the user's listening experience.
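The feedback update of process 1700 can be sketched with a toy single-tone loop. This is an illustrative sketch under simplifying assumptions, not the actual algorithm of processor 330: the noise at the ear canal is represented as one complex phasor, and the noise-reduction signal is iteratively nudged toward the exact anti-phase signal using the residual picked up by the second (error) microphone. The tone, step size, and iteration count are invented.

```python
import cmath

# Ambient noise at the ear canal, as a complex phasor (amplitude 1, phase 0.3 rad).
noise = cmath.rect(1.0, 0.3)
anti = 0j        # noise-reduction signal (target-signal phasor), starts silent
mu = 0.5         # feedback step size

residuals = []
for _ in range(30):
    residual = noise + anti   # what the second microphone picks up after cancellation
    anti -= mu * residual     # push the anti-signal toward -noise
    residuals.append(abs(residual))

print(f"initial residual {residuals[0]:.3f} -> final {residuals[-1]:.2e}")
```

Each pass multiplies the residual by (1 - mu), so the loop converges geometrically to the anti-phase signal; a real earphone would run such an update per frequency band and account for the speaker-to-ear transfer function.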
- the earphone 300 may not include the above-mentioned first microphone array, but only use the second microphone 360 to perform active noise reduction.
- the processor 330 may regard the ambient noise picked up by the second microphone 360 as the noise at the user's ear canal, and generate a feedback signal based on this to adjust the noise reduction signal, so as to cancel or reduce the ambient noise at the user's ear canal.
- the processor 330 can update the noise reduction signal according to the sound signal at the user's ear canal after the target signal and the ambient noise are cancelled, so as to further improve the active noise reduction effect of the earphone 300 .
- Figure 18 is an exemplary noise reduction flow diagram for a headset according to some embodiments of the present application. As shown in Figure 18, process 1800 may include:
- step 1810, the picked-up environmental noise is divided into multiple frequency bands, and the multiple frequency bands correspond to different frequency ranges.
- this step may be performed by processor 330 .
- the ambient noise picked up by the microphone array (eg, the first microphone array 320 ) contains different frequency components.
- the processor 330 may divide the environmental noise frequency band into a plurality of frequency bands, and each frequency band corresponds to a different frequency range.
- the frequency range corresponding to each frequency band here may be a preset frequency range, for example, 20-100Hz, 100Hz-1000Hz, 3000Hz-6000Hz, 9000Hz-20000Hz, and so on.
- step 1820 based on at least one of the plurality of frequency bands, a noise reduction signal corresponding to each of the at least one frequency band is generated.
- this step may be performed by processor 330 .
- the processor 330 may analyze the frequency bands divided by the environmental noise to obtain parameter information (eg, frequency information, amplitude information, phase information, etc.) of the environmental noise corresponding to each frequency band.
- the processor 330 generates a noise reduction signal corresponding to each of the at least one frequency band according to the parameter information. For example, for the frequency band of 20Hz-100Hz, the processor 330 may generate a noise reduction signal corresponding to that band based on the parameter information (eg, frequency information, amplitude information, phase information, etc.) of the environmental noise in the 20Hz-100Hz band.
- the speaker 340 outputs the target signal based on the noise reduction signal in the frequency band of 20Hz-100Hz.
- the speaker 340 may output a target signal that is approximately opposite in phase and approximately equal in amplitude to the noise in the frequency band 20Hz-100Hz to cancel the noise in this frequency band.
- generating a noise reduction signal corresponding to each of the at least one frequency band based on at least one of the plurality of frequency bands may include obtaining sound pressure levels corresponding to the plurality of frequency bands, and based on the plurality of frequency bands Corresponding sound pressure levels and frequency ranges corresponding to multiple frequency bands generate noise reduction signals corresponding to only part of the frequency bands.
- the sound pressure levels of ambient noise in different frequency bands picked up by the microphone array may be different.
- the processor 330 analyzes the frequency bands divided by the environmental noise, and can obtain the sound pressure level corresponding to each frequency band.
- considering the structural characteristics of the open earphone (eg, the earphone 300) and the change of the transfer function caused by differences in users' ear structures, which lead to different wearing positions of the earphone, the earphone 300 may perform active noise reduction on only some of the frequency bands of the ambient noise.
- the processor 330 generates noise reduction signals corresponding to only part of the frequency bands based on the sound pressure levels and frequency ranges of the plurality of frequency bands. For example, when the low-frequency component (eg, 20Hz-100Hz) of the ambient noise is loud (eg, the sound pressure level is greater than 60dB), the open earphone may not be able to output a target signal large enough to cancel the low-frequency noise.
- the processor 330 may generate only the noise reduction signal corresponding to the higher frequency partial frequency band (eg, 100Hz-1000Hz, 3000Hz-6000Hz) in the ambient noise frequency band.
- the processor 330 may only generate a noise reduction signal corresponding to a lower frequency part of the frequency band (eg, 20Hz-100Hz) in the ambient noise frequency band.
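The band-splitting logic of process 1800 can be sketched as follows. This is a hypothetical illustration only; the band-selection rule here simply skips near-empty bands, whereas the application describes other rules (eg, skipping loud low-frequency bands the open earphone cannot drive). The picked-up noise is split into preset bands via an FFT, a crude per-band level is computed, and an anti-phase signal is generated only for the selected bands. Signals, bands, and the threshold are invented.

```python
import numpy as np

fs = 8000
t = np.arange(2048) / fs
# Two bin-centered tones stand in for the picked-up ambient noise.
noise = 0.8 * np.sin(2 * np.pi * 62.5 * t) + 0.3 * np.sin(2 * np.pi * 500 * t)

spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

bands = [(20, 100), (100, 1000), (1000, 4000)]  # preset frequency ranges, Hz
anti = np.zeros_like(spectrum)
cancelled = []
for lo, hi in bands:
    sel = (freqs >= lo) & (freqs < hi)
    level = np.sqrt(np.mean(np.abs(spectrum[sel]) ** 2))  # crude band level
    if level > 1.0:          # only treat bands with appreciable energy
        anti[sel] = -spectrum[sel]
        cancelled.append((lo, hi))

residual = noise + np.fft.irfft(anti, n=t.size)
print(cancelled, float(np.max(np.abs(residual))))
```

Here only the two bands actually containing tones are inverted, and summing the anti-phase signal with the noise leaves a negligible residual.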
- FIG. 19 is an exemplary flowchart for estimating noise at a spatial location of a target, according to some embodiments of the present application. As shown in Figure 19, process 1900 may include:
- step 1910 components associated with the signal picked up by the bone conduction microphone are removed from the picked up ambient noise in order to update the ambient noise.
- this step may be performed by processor 330 .
- when the user wears the headset and speaks, the user's own speaking voice is also picked up by the microphone array (eg, the first microphone array 320), that is, the user's own speaking voice is also regarded as part of the ambient noise.
- the target signal output by the speaker (eg, the speaker 340) would then also reduce or cancel the user's own voice; however, the user's own voice needs to be preserved in some scenarios, for example, when the user makes a voice call or sends a voice message.
- the headset (eg, the headset 300) may include a bone conduction microphone.
- the bone conduction microphone may pick up the user's voice by picking up vibration signals generated by the facial bones or muscles when the user speaks, and transmit the voice signal to the processor 330.
- the processor 330 acquires parameter information from the sound signal picked up by the bone conduction microphone, and removes sound signal components associated with the sound signal picked up by the bone conduction microphone from the ambient noise picked up by the microphone array.
- the processor 330 updates the ambient noise according to the remaining parameter information of the ambient noise.
- the updated environmental noise no longer includes the sound signal of the user's own speech, that is, the user can hear the sound signal of the user's own speech when the user makes a voice call.
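The removal of own-voice components in step 1910 can be sketched as a least-squares subtraction. This is an illustrative sketch, not the actual method of processor 330: the component of the air-conduction array signal correlated with the bone-conduction microphone signal is estimated by projection and subtracted, leaving the updated ambient noise. The signals and the 0.7 coupling gain are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
voice = np.sin(2 * np.pi * np.arange(n) * 0.01)  # user's own speech (proxy tone)
external = rng.standard_normal(n)                # true ambient noise
array_sig = external + 0.7 * voice               # what the air-conduction array picks up
bone_sig = voice                                 # what the bone-conduction mic picks up

# Least-squares estimate of the voice contribution in the array signal,
# then subtract it to obtain the updated ambient noise.
g = np.dot(array_sig, bone_sig) / np.dot(bone_sig, bone_sig)
updated_noise = array_sig - g * bone_sig

corr_before = abs(np.corrcoef(array_sig, voice)[0, 1])
corr_after = abs(np.corrcoef(updated_noise, voice)[0, 1])
print(f"gain={g:.3f}, corr before={corr_before:.2f}, after={corr_after:.2e}")
```

After the subtraction the updated noise is essentially uncorrelated with the user's voice, so the noise reduction signal built from it no longer cancels the user's own speech; a real device would use an adaptive, frequency-dependent filter rather than a single scalar gain.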
- step 1920 the noise of the target spatial location is estimated according to the updated ambient noise.
- this step may be performed by processor 330 .
- Step 1920 may be performed in a similar manner to step 1420, and the related description is not repeated here.
- process 1900 is only for illustration and description, and does not limit the scope of application of the present application.
- those skilled in the art may make various modifications and changes to process 1900 under the guidance of the present application.
- for example, the components associated with the signal picked up by the bone conduction microphone may also be preprocessed, and the signal picked up by the bone conduction microphone may be transmitted to the terminal device as an audio signal. Such modifications and changes remain within the scope of this application.
- the noise reduction signal may also be updated based on manual user input. For example, in some embodiments, due to differences in ear structures or in the wearing state of the earphone 300 among different users, the active noise reduction effect of the earphone 300 may vary, resulting in an unsatisfactory listening experience. In this case, the user can manually adjust the parameter information (eg, frequency information, phase information, or amplitude information) of the noise reduction signal according to their own hearing, so as to match the wearing positions of the earphone 300 for different users and improve the active noise reduction performance of the earphone 300.
- for a special user whose hearing ability differs from that of an ordinary user, the noise reduction signal generated by the earphone 300 itself may not match the special user's hearing ability, resulting in a poor listening experience for the special user.
- the special user can manually adjust the frequency information, phase information or amplitude information of the noise reduction signal according to his own hearing effect, so as to update the noise reduction signal to improve the hearing experience of the special user.
- the way for the user to manually adjust the noise reduction signal may be manual adjustment through the keys on the earphone 300 .
- any position of the fixing structure 310 of the earphone 300 may be provided with a key for user adjustment, so as to adjust the active noise reduction effect of the earphone 300, thereby improving the user's listening experience with the earphone 300.
- the way for the user to manually adjust the noise reduction signal may also be manual input adjustment through a terminal device.
- the earphone 300, or an electronic product communicatively connected to the earphone 300 (eg, a mobile phone, a tablet computer, or a computer), can display the sound field at the user's ear canal and feed back to the user the suggested frequency range and amplitude information of the noise reduction signal.
- the user can manually input the parameter information of the proposed noise reduction signal, and then fine-tune the parameter information according to their own listening experience.
- aspects of this application may be illustrated and described in several patentable categories or situations, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software.
- the above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component" or "system".
- aspects of the present application may be embodied as a computer product comprising computer readable program code embodied in one or more computer readable media.
- a computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on baseband or as part of a carrier wave.
- the propagating signal may take a variety of manifestations, including electromagnetic, optical, etc., or a suitable combination.
- a computer storage medium can be any computer-readable medium, other than a computer-readable storage medium, that can communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code on a computer storage medium may be transmitted over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
- the computer program code required for the operation of the various parts of this application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages.
- the program code may run entirely on the user's computer, or as a stand-alone software package on the user's computer, or partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (eg, through the Internet), or in a cloud computing environment, or used as a service, eg, software as a service (SaaS).
Claims (34)
- An earphone, comprising: a fixing structure configured to fix the earphone at a position near the user's ear without blocking the user's ear canal, the fixing structure including a hook portion and a body portion, wherein, when the user wears the earphone, the hook portion is hung between a first side of the user's ear and the head, and the body portion contacts a second side of the ear; a first microphone array, located on the body portion, configured to pick up environmental noise; a processor, located on the hook portion or the body portion, configured to: estimate a sound field at a target spatial position using the first microphone array, the target spatial position being closer to the user's ear canal than any microphone in the first microphone array, and generate a noise reduction signal based on the sound field estimate of the target spatial position; and a speaker, located on the body portion, configured to output a target signal according to the noise reduction signal, the target signal being transmitted to the outside of the earphone through a sound outlet hole to reduce the environmental noise.
- The earphone of claim 1, wherein the body portion includes a connecting portion and a holding portion, wherein, when the user wears the earphone, the holding portion contacts the second side of the ear, and the connecting portion connects the hook portion and the holding portion.
- The earphone of claim 2, wherein, when the user wears the earphone, the connecting portion extends from the first side of the ear to the second side of the ear, the connecting portion cooperates with the hook portion to provide the holding portion with a pressing force against the second side of the ear, and the connecting portion cooperates with the holding portion to provide the hook portion with a pressing force against the first side of the ear.
- The earphone of claim 3, wherein, in a direction from a first connection point between the hook portion and the connecting portion to a free end of the hook portion, the hook portion bends toward the first side of the ear and forms a first contact point with the first side of the ear, and the holding portion forms a second contact point with the second side of the ear, wherein the distance between the first contact point and the second contact point along the extending direction of the connecting portion in a natural state is smaller than the distance between the first contact point and the second contact point along the extending direction of the connecting portion in a worn state, thereby providing the holding portion with a pressing force against the second side of the ear and providing the hook portion with a pressing force against the first side of the ear.
- The earphone of claim 3, wherein, in the direction from the first connection point between the hook portion and the connecting portion to the free end of the hook portion, the hook portion bends toward the head and forms a first contact point and a third contact point with the head, wherein the first contact point is located between the third contact point and the first connection point, so that the hook portion forms a lever structure with the first contact point as a fulcrum, and a force directed toward the outside of the head, provided by the head at the third contact point, is converted by the lever structure into a force directed toward the head at the first connection point, which in turn provides, via the connecting portion, the holding portion with a pressing force against the second side of the ear.
- The earphone of claim 2, wherein the speaker is disposed on the holding portion, and the holding portion is a multi-segment structure to adjust the relative position of the speaker in the overall structure of the earphone.
- The earphone of claim 6, wherein the holding portion includes a first holding segment, a second holding segment, and a third holding segment connected end to end in sequence, an end of the first holding segment facing away from the second holding segment is connected to the connecting portion, the second holding segment is folded back relative to the first holding segment with a spacing therebetween so that the first holding segment and the second holding segment form a U-shaped structure, and the speaker is disposed on the third holding segment.
- The earphone of claim 6, wherein the holding portion includes a first holding segment, a second holding segment, and a third holding segment connected end to end in sequence, an end of the first holding segment facing away from the second holding segment is connected to the connecting portion, the second holding segment is bent relative to the first holding segment, the third holding segment and the first holding segment are arranged side by side with a spacing therebetween, and the speaker is disposed on the third holding segment.
- The earphone of claim 2, wherein the side of the holding portion facing the ear is provided with the sound outlet hole, so that the target signal output by the speaker is transmitted to the ear through the sound outlet hole.
- The earphone of claim 9, wherein the side of the holding portion facing the ear includes a first region and a second region, the first region is provided with the sound outlet hole, and the second region is farther away from the connecting portion than the first region and protrudes toward the ear relative to the first region, so as to allow the sound outlet hole to be spaced apart from the ear in the worn state.
- The earphone of claim 10, wherein, when the user wears the earphone, the spacing between the sound outlet hole and the user's ear canal is less than 10 mm.
- The earphone of claim 2, wherein a pressure relief hole is provided on a side of the holding portion along the vertical axis direction that is close to the top of the user's head, the pressure relief hole being farther away from the user's ear canal than the sound outlet hole.
- The earphone of claim 12, wherein, when the user wears the earphone, the spacing between the pressure relief hole and the user's ear canal is 5 mm to 15 mm.
- The earphone of claim 12, wherein the angle between the line connecting the pressure relief hole and the sound outlet hole and the thickness direction of the holding portion is 0° to 50°.
- The earphone of claim 12, wherein the pressure relief hole and the sound outlet hole form an acoustic dipole, the first microphone array is disposed in a first target region, and the first target region is an acoustic null position of the sound field radiated by the dipole.
- The earphone of claim 12, wherein the first microphone array is located on the connecting portion.
- The earphone of claim 12, wherein the line connecting the first microphone array and the sound outlet hole forms a first angle with the line connecting the sound outlet hole and the pressure relief hole, the line connecting the first microphone array and the pressure relief hole forms a second angle with the line connecting the sound outlet hole and the pressure relief hole, and the difference between the first angle and the second angle is not greater than 30°.
- The earphone of claim 12, wherein there is a first distance between the first microphone array and the sound outlet hole, there is a second distance between the first microphone array and the pressure relief hole, and the difference between the first distance and the second distance is not greater than 6 mm.
- The earphone of claim 1, wherein generating the noise reduction signal based on the sound field estimate of the target spatial position includes: estimating the noise at the target spatial position based on the picked-up environmental noise; and generating the noise reduction signal based on the noise at the target spatial position and the sound field estimate of the target spatial position.
- The earphone of claim 19, wherein the earphone further includes one or more sensors, located on the hook portion and/or the body portion, configured to acquire motion information of the earphone, and the processor is further configured to: update the noise at the target spatial position and the sound field estimate of the target spatial position based on the motion information; and generate the noise reduction signal based on the updated noise at the target spatial position and the updated sound field estimate of the target spatial position.
- The earphone of claim 19, wherein estimating the noise at the target spatial position based on the picked-up environmental noise includes: determining one or more spatial noise sources related to the picked-up environmental noise; and estimating the noise at the target spatial position based on the spatial noise sources.
- The earphone of claim 1, wherein estimating the sound field at the target spatial position using the first microphone array includes: constructing a virtual microphone based on the first microphone array, the virtual microphone including a mathematical model or a machine learning model for representing the audio data that would be collected by a microphone if a microphone were placed at the target spatial position; and estimating the sound field at the target spatial position based on the virtual microphone.
- The earphone of claim 22, wherein generating the noise reduction signal based on the sound field estimate of the target spatial position includes: estimating the noise at the target spatial position based on the virtual microphone; and generating the noise reduction signal based on the noise at the target spatial position and the sound field estimate of the target spatial position.
- The earphone of claim 1, wherein the earphone includes a second microphone, located on the body portion, the second microphone being configured to pick up the environmental noise and the target signal; and the processor is configured to update the target signal based on the sound signal picked up by the second microphone.
- The earphone of claim 24, wherein the second microphone includes at least one microphone that is closer to the user's ear canal than any microphone in the first microphone array.
- The earphone of claim 24, wherein the second microphone is disposed in a second target region, and the second target region is a region on the holding portion close to the user's ear canal.
- The earphone of claim 26, wherein, when the user wears the earphone, the distance between the second microphone and the user's ear canal is less than 10 mm.
- The earphone of claim 26, wherein, on the sagittal plane of the user, the distance between the second microphone and the sound outlet hole along the sagittal axis direction is less than 10 mm.
- The earphone of claim 26, wherein, on the sagittal plane of the user, the distance between the second microphone and the sound outlet hole along the vertical axis direction is 2 mm to 5 mm.
- The earphone of claim 24, wherein updating the noise reduction signal based on the sound signal picked up by the second microphone includes: estimating the sound field at the user's ear canal based on the sound signal picked up by the second microphone; and updating the noise reduction signal according to the sound field at the user's ear canal.
- The earphone of claim 1, wherein generating the noise reduction signal based on the sound field estimate of the target spatial position includes: dividing the picked-up environmental noise into multiple frequency bands, the multiple frequency bands corresponding to different frequency ranges; and generating, based on at least one of the multiple frequency bands, the noise reduction signal corresponding to each of the at least one frequency band.
- The earphone of claim 31, wherein generating, based on at least one of the multiple frequency bands, the noise reduction signal corresponding to each of the at least one frequency band includes: acquiring sound pressure levels of the multiple frequency bands; and generating, based on the sound pressure levels of the multiple frequency bands and the frequency ranges of the multiple frequency bands, the noise reduction signal corresponding to only some of the frequency bands.
- The earphone of claim 1, wherein the first microphone array or the second microphone includes a bone conduction microphone configured to pick up the user's speaking voice, and wherein the processor estimating the noise at the target spatial position based on the picked-up environmental noise includes: removing components associated with the signal picked up by the bone conduction microphone from the picked-up environmental noise to update the environmental noise; and estimating the noise at the target spatial position according to the updated environmental noise.
- The earphone of claim 1, wherein the earphone further includes an adjustment module configured to acquire user input, and the processor is further configured to adjust the noise reduction signal according to the user input.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022580472A JP2023532489A (ja) | 2021-04-25 | 2021-11-19 | イヤホン |
KR1020227044224A KR20230013070A (ko) | 2021-04-25 | 2021-11-19 | 이어폰 |
BR112022023372A BR112022023372A2 (pt) | 2021-04-25 | 2021-11-19 | Fones de ouvido |
EP21938133.2A EP4131997A4 (en) | 2021-04-25 | 2021-11-19 | EARPHONE |
TW111111172A TW202243486A (zh) | 2021-04-25 | 2022-03-24 | 一種耳機 |
US18/047,639 US20230063283A1 (en) | 2021-04-25 | 2022-10-18 | Earphones |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
- PCT/CN2021/089670 WO2022226696A1 (zh) | 2021-04-25 | 2021-04-25 | An open earphone |
CNPCT/CN2021/089670 | 2021-04-25 | ||
CNPCT/CN2021/091652 | 2021-04-30 | ||
- PCT/CN2021/091652 WO2022227056A1 (zh) | 2021-04-25 | 2021-04-30 | Acoustic device |
CNPCT/CN2021/109154 | 2021-07-29 | ||
- PCT/CN2021/109154 WO2022022618A1 (zh) | 2020-07-29 | 2021-07-29 | An earphone |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/047,639 Continuation US20230063283A1 (en) | 2021-04-25 | 2022-10-18 | Earphones |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022227514A1 (zh) | 2022-11-03 |
Family
ID=81456417
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/CN2021/131927 WO2022227514A1 (zh) | 2021-04-25 | 2021-11-19 | An earphone |
Country Status (8)
Country | Link |
---|---|
US (4) | US11328702B1 (zh) |
EP (1) | EP4131997A4 (zh) |
JP (1) | JP2023532489A (zh) |
KR (1) | KR20230013070A (zh) |
CN (2) | CN116918350A (zh) |
BR (1) | BR112022023372A2 (zh) |
TW (2) | TW202243486A (zh) |
WO (1) | WO2022227514A1 (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11935513B2 (en) | 2019-10-27 | 2024-03-19 | Silentium Ltd. | Apparatus, system, and method of Active Acoustic Control (AAC) |
EP4210350A4 (en) * | 2021-11-19 | 2023-12-13 | Shenzhen Shokz Co., Ltd. | OPEN ACOUSTIC DEVICE |
US20230232239A1 (en) * | 2022-01-14 | 2023-07-20 | Qualcomm Incorporated | Methods for Reconfigurable Intelligent Surface (RIS) Aided Cooperative Directional Security |
- KR102569637B1 (ko) * | 2022-03-24 | 2023-08-25 | Olive Union Inc. | Digital hearing device having a microphone in the ear band |
WO2024003756A1 (en) * | 2022-06-28 | 2024-01-04 | Silentium Ltd. | Apparatus, system, and method of neural-network (nn) based active acoustic control (aac) |
US11956584B1 (en) * | 2022-10-28 | 2024-04-09 | Shenzhen Shokz Co., Ltd. | Earphones |
- CN117956362A (zh) * | 2022-10-28 | 2024-04-30 | Shenzhen Shokz Co., Ltd. | An open earphone |
- WO2024088223A1 (zh) * | 2022-10-28 | 2024-05-02 | Shenzhen Shokz Co., Ltd. | An earphone |
- WO2024087487A1 (zh) * | 2022-10-28 | 2024-05-02 | Shenzhen Shokz Co., Ltd. | An earphone |
US11877111B1 (en) | 2022-10-28 | 2024-01-16 | Shenzhen Shokz Co., Ltd. | Earphones |
- CN220254654U (zh) * | 2022-10-28 | 2023-12-26 | Shenzhen Shokz Co., Ltd. | An open earphone |
- CN116614738B (zh) * | 2023-07-21 | 2023-12-08 | Jiangxi Hongsheng Technology Co., Ltd. | A bone conduction microphone and bone conduction microphone assembly |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN109565626A (zh) * | 2016-07-29 | 2019-04-02 | Bose Corporation | Acoustically open earphone with active noise reduction |
- CN110430517A (zh) * | 2019-04-15 | 2019-11-08 | Merry Electronics (Shenzhen) Co., Ltd. | Hearing assistance device |
- CN111954142A (zh) * | 2020-08-29 | 2020-11-17 | Shenzhen Shokz Co., Ltd. | A hearing assistance device |
US20210067857A1 (en) * | 2019-08-28 | 2021-03-04 | Bose Corporation | Open Audio Device |
Family Cites Families (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7430300B2 (en) * | 2002-11-18 | 2008-09-30 | Digisenz Llc | Sound production systems and methods for providing sound inside a headgear unit |
GB2434708B (en) * | 2006-01-26 | 2008-02-27 | Sonaptic Ltd | Ambient noise reduction arrangements |
US8649526B2 (en) | 2010-09-03 | 2014-02-11 | Nxp B.V. | Noise reduction circuit and method therefor |
- AR084091A1 (es) * | 2010-12-03 | 2013-04-17 | Fraunhofer Ges Forschung | Sound acquisition by extracting geometric information from direction-of-arrival estimates |
TW201228415A (en) | 2010-12-23 | 2012-07-01 | Merry Electronics Co Ltd | Headset for communication with recording function |
- CN102306496B (zh) * | 2011-09-05 | 2014-07-09 | Goertek Inc. | Multi-microphone-array noise cancellation method, apparatus, and system |
- CN102348151B (zh) | 2011-09-10 | 2015-07-29 | Goertek Inc. | Noise cancellation system and method, intelligent control method and apparatus, and communication device |
US10231065B2 (en) * | 2012-12-28 | 2019-03-12 | Gn Hearing A/S | Spectacle hearing device system |
US10063958B2 (en) * | 2014-11-07 | 2018-08-28 | Microsoft Technology Licensing, Llc | Earpiece attachment devices |
EP3373602A1 (en) * | 2017-03-09 | 2018-09-12 | Oticon A/s | A method of localizing a sound source, a hearing device, and a hearing system |
- CN108668188A (zh) | 2017-03-30 | 2018-10-16 | Tianjin Samsung Communication Technology Research Co., Ltd. | Method for active noise reduction of an earphone, performed in an electronic terminal, and the electronic terminal |
- CN107346664A (zh) | 2017-06-22 | 2017-11-14 | Changzhou Campus of Hohai University | Binaural speech separation method based on critical bands |
- CN107452375A (zh) | 2017-07-17 | 2017-12-08 | Hunan Haiyi E-Commerce Co., Ltd. | Bluetooth earphone |
US10706868B2 (en) * | 2017-09-06 | 2020-07-07 | Realwear, Inc. | Multi-mode noise cancellation for voice detection |
- JP6972814B2 (ja) * | 2017-09-13 | 2021-11-24 | Sony Group Corporation | Earphone device, headphone device, and method |
US10650798B2 (en) * | 2018-03-27 | 2020-05-12 | Sony Corporation | Electronic device, method and computer program for active noise control inside a vehicle |
EP3687193B1 (en) * | 2018-05-24 | 2024-03-06 | Sony Group Corporation | Information processing device and information processing method |
- TWI690218B (zh) * | 2018-06-15 | 2020-04-01 | Realtek Semiconductor Corp. | Earphone |
- KR102406572B1 (ko) | 2018-07-17 | 2022-06-08 | Samsung Electronics Co., Ltd. | Audio device for processing an audio signal and audio signal processing method |
BR112021021746A2 (pt) * | 2019-04-30 | 2021-12-28 | Shenzhen Voxtech Co Ltd | Aparelho de saída acústica |
US11197083B2 (en) * | 2019-08-07 | 2021-12-07 | Bose Corporation | Active noise reduction in open ear directional acoustic devices |
US10951970B1 (en) * | 2019-09-11 | 2021-03-16 | Bose Corporation | Open audio device |
US11478211B2 (en) * | 2019-12-03 | 2022-10-25 | Shanghai United Imaging Healthcare Co., Ltd. | System and method for noise reduction |
- CN111935589B (zh) | 2020-09-28 | 2021-02-12 | Shenzhen Goodix Technology Co., Ltd. | Active noise reduction method and apparatus, electronic device, and chip |
-
2021
- 2021-04-30 CN CN202180094203.XA patent/CN116918350A/zh active Pending
- 2021-10-21 US US17/451,659 patent/US11328702B1/en active Active
- 2021-11-19 BR BR112022023372A patent/BR112022023372A2/pt unknown
- 2021-11-19 WO PCT/CN2021/131927 patent/WO2022227514A1/zh unknown
- 2021-11-19 EP EP21938133.2A patent/EP4131997A4/en active Pending
- 2021-11-19 CN CN202111408328.3A patent/CN115243137A/zh active Pending
- 2021-11-19 JP JP2022580472A patent/JP2023532489A/ja active Pending
- 2021-11-19 KR KR1020227044224A patent/KR20230013070A/ko not_active Application Discontinuation
-
2022
- 2022-03-24 TW TW111111172A patent/TW202243486A/zh unknown
- 2022-04-01 US US17/657,743 patent/US11715451B2/en active Active
- 2022-04-22 TW TW111115388A patent/TW202242855A/zh unknown
- 2022-10-18 US US18/047,639 patent/US20230063283A1/en active Pending
-
2023
- 2023-06-11 US US18/332,746 patent/US20230317048A1/en active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP4131997A4 |
Also Published As
Publication number | Publication date |
---|---|
EP4131997A4 (en) | 2023-12-06 |
US20230063283A1 (en) | 2023-03-02 |
US20230317048A1 (en) | 2023-10-05 |
JP2023532489A (ja) | 2023-07-28 |
US20220343887A1 (en) | 2022-10-27 |
TW202243486A (zh) | 2022-11-01 |
TW202242855A (zh) | 2022-11-01 |
US11328702B1 (en) | 2022-05-10 |
KR20230013070A (ko) | 2023-01-26 |
BR112022023372A2 (pt) | 2024-02-06 |
CN116918350A (zh) | 2023-10-20 |
EP4131997A1 (en) | 2023-02-08 |
CN115243137A (zh) | 2022-10-25 |
US11715451B2 (en) | 2023-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022227514A1 (zh) | An earphone | |
CN108600907B (zh) | Method for localizing a sound source, hearing device, and hearing system | |
CN105530580B (zh) | Hearing system | |
EP3285500B1 (en) | A binaural hearing system configured to localize a sound source | |
US20140270321A1 (en) | Non-occluded personal audio and communication system | |
CN108156567B (zh) | Wireless hearing device | |
CN113329312A (zh) | Hearing aid determining turn-taking | |
CN112911477A (zh) | Hearing system comprising a personalized beamformer | |
WO2023087565A1 (zh) | An open acoustic device | |
WO2022227056A1 (zh) | Acoustic device | |
WO2023087572A1 (zh) | Acoustic device and method for determining a transfer function thereof | |
WO2023164954A1 (zh) | A hearing assistance device | |
WO2022226792A1 (zh) | Acoustic input/output device | |
RU2807021C1 (ru) | Earphones | |
US11689845B2 (en) | Open acoustic device | |
CN115250395A (zh) | Acoustic input/output device | |
CN115250392A (zh) | Acoustic input/output device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021938133 Country of ref document: EP Effective date: 20221101 |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112022023372 Country of ref document: BR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21938133 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20227044224 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2022580472 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 112022023372 Country of ref document: BR Kind code of ref document: A2 Effective date: 20221117 |