CN115967883A - Earphone, user equipment and method for processing signal - Google Patents

Publication number
CN115967883A
CN115967883A (application CN202111328416.2A)
Authority
CN
China
Prior art keywords
signal
sound
ambient sound
noise reduction
sound signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111328416.2A
Other languages
Chinese (zh)
Inventor
李芳庆
黄景昌
关智博
李培硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to PCT/CN2022/118312 (published as WO2023061130A1)
Publication of CN115967883A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 Details of transducers, loudspeakers or microphones
    • H04R 1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Headphones And Earphones (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Abstract

The application provides an earphone, a user equipment, and a method for processing signals. The earphone includes: an audio input module configured to receive an audio signal to be played; an ambient sound collection module configured to collect ambient sound signals around the earphone; an active noise reduction module configured to generate a noise reduction signal according to the ambient sound signal output by the ambient sound collection module; an ambient sound processing module configured to obtain, from the ambient sound signal output by the ambient sound collection module, a first ambient sound signal in a designated pickup direction, the first ambient sound signal being a multi-channel signal; and an audio playing module configured to play a mixed signal of the audio signal, the noise reduction signal, and the first ambient sound signal. The earphone provided by the embodiments of the application does not apply omnidirectional noise reduction; instead, it retains the ambient sound signal, with its spatial sound effect, in the designated pickup direction. This avoids the safety and other problems caused by omnidirectional noise reduction, and makes the ambient sound played by the earphone sound more realistic.

Description

Earphone, user equipment and method for processing signal
Technical Field
The present application relates to the field of headset technology, and more particularly, to a headset, a user equipment, and a method of processing a signal.
Background
With the development of earphone technology, earphones with an active noise reduction function are increasingly widely used. The traditional active noise reduction approach is omnidirectional: it cancels ambient noise in all directions around the earphone wearer.
The advantage of omnidirectional noise reduction is that the audio signal played by the earphone is protected from interference by ambient noise. In some situations, however, omnidirectional noise reduction can cause problems, for example in terms of safety: when the earphone wearer walks outdoors, shielding the sound of vehicles approaching from behind makes traffic accidents more likely.
Disclosure of Invention
The present application provides an earphone, a user equipment and a method for processing signals, so as to solve the above problems.
In a first aspect, a first earphone is provided, comprising: an audio input module configured to receive an audio signal to be played; an ambient sound collection module, comprising a first microphone and a second microphone, configured to collect an ambient sound signal around the first earphone; an active noise reduction module configured to generate a noise reduction signal from the ambient sound signal; an ambient sound processing module configured to obtain, from the ambient sound signal, a first ambient sound signal in a designated pickup direction; and an audio playing module configured to play a mixed signal of the audio signal, the noise reduction signal, and the first ambient sound signal; wherein the first microphone and the second microphone are arranged along a first direction, the first direction forming a first acute angle with the length direction of the earphone handle of the first earphone.
In a second aspect, a wireless headset is provided, comprising: a first audio receiver, comprising a first ambient sound collection module and a first noise reduction module, wherein the first ambient sound collection module comprises a first microphone and a second microphone configured to collect an ambient sound signal around the first audio receiver, and the first noise reduction module is configured to generate a first noise reduction signal from the ambient sound signal around the first audio receiver to reduce noise in the audio signal received by the first audio receiver; and a second audio receiver, comprising a second ambient sound collection module and a second noise reduction module, wherein the second ambient sound collection module comprises a third microphone and a fourth microphone configured to collect an ambient sound signal around the second audio receiver, and the second noise reduction module is configured to generate a second noise reduction signal from the ambient sound signal around the second audio receiver to reduce noise in the audio signal received by the second audio receiver; wherein the first and second microphones are arranged along a first direction forming a first acute angle with the length direction of the handle of the first audio receiver, and the third and fourth microphones are arranged along a second direction forming a second acute angle with the width direction of the handle of the second audio receiver.
In a third aspect, a headset is provided, comprising: a first audio receiver, comprising a first ambient sound collection module and a first noise reduction module, wherein the first ambient sound collection module comprises a first microphone and a second microphone configured to collect an ambient sound signal around the first audio receiver, and the first noise reduction module is configured to generate a first noise reduction signal from the ambient sound signal around the first audio receiver to reduce noise in the audio signal received by the first audio receiver; and a second audio receiver, comprising a second ambient sound collection module and a second noise reduction module, wherein the second ambient sound collection module comprises a third microphone and a fourth microphone configured to collect an ambient sound signal around the second audio receiver, and the second noise reduction module is configured to generate a second noise reduction signal from the ambient sound signal around the second audio receiver to reduce noise in the audio signal received by the second audio receiver; wherein the first and second microphones are arranged along a first direction and the third and fourth microphones are arranged along a second direction, the first direction forming a first acute angle with the central axis of the headset and the second direction forming a second acute angle with the front direction of the headset.
The earphone provided by the embodiments of the application adopts a directional noise reduction mode instead of an omnidirectional one. That is, the earphone retains the ambient sound signal in the designated pickup direction, thereby avoiding the safety and other problems caused by omnidirectional noise reduction. For example, the designated pickup direction may be set to the rear, so that the sound of vehicles behind the earphone wearer is retained and the probability of a traffic accident is reduced. Furthermore, the retained ambient sound signal has a spatial sound effect, so that the wearer can localize the position of the sound source and the ambient sound played by the earphone sounds more realistic.
Drawings
Fig. 1 is a schematic structural diagram of a headphone with an active noise reduction function.
Fig. 2 is a schematic structural diagram of an earphone according to an embodiment of the present application.
FIG. 3 is an exemplary diagram of one possible implementation of the ambient sound processing module of FIG. 2.
Fig. 4 is an exemplary diagram of the working principle of the beamforming filter.
Fig. 5 is an exemplary diagram of one possible implementation of the directional sound pickup module in fig. 3.
FIG. 6 is an exemplary diagram of one possible implementation of the sound effects processing module of FIG. 3.
Fig. 7 is an exemplary diagram of another possible implementation of the ambient sound processing module of fig. 2.
Fig. 8 is an exemplary diagram of one possible implementation of the audio playing module in fig. 2.
Fig. 9 is a schematic flowchart of a customized manner of personalized HRTF filters according to an embodiment of the present application.
FIG. 10 is a diagram illustrating another possible implementation of the sound effects processing module of FIG. 3.
Fig. 11 is a schematic flow chart of a method for processing a signal according to an embodiment of the present application.
Fig. 12A illustrates the location of a microphone array provided by an embodiment of the present application.
Fig. 12B illustrates a microphone array in a left earpiece provided by an embodiment of the present application.
Fig. 12C illustrates a microphone array in a right earpiece provided by an embodiment of the present application.
Fig. 12D illustrates a headset provided by an embodiment of the present application.
Detailed Description
With the progress of technology, the performance of noise-reducing earphones on the market keeps improving. At present, earphone noise reduction technologies fall mainly into two categories: passive noise reduction and active noise cancellation (ANC).
Passive noise reduction has long been available. It mainly relies on the earphone enclosing the ear to form a closed space that blocks ambient noise. Alternatively, the earphone may use sound-insulating materials such as silicone earplugs to block outside noise and thereby achieve passive noise reduction.
Active noise reduction uses an active noise reduction module to generate a noise reduction signal corresponding to the ambient noise around the earphone, so as to reduce or even cancel that noise. The noise reduction signal may be, for example, an acoustic signal with the same amplitude as the external noise but opposite phase. In the past, for reasons of size, power consumption, and cost, headsets with active noise reduction were typically found only in certain industrial fields. In recent years, with technological progress and rising consumption levels, active noise reduction has been applied more and more widely in consumer electronics, and the penetration of earphones with active noise reduction among consumers keeps rising.
Currently, active noise reduction technologies applied in earphones mainly fall into three modes: feedforward noise reduction, feedback noise reduction, and hybrid noise reduction. The acoustic structure and signal processing of the earphone differ among these modes, so each mode has its own characteristics in terms of noise reduction depth, noise reduction bandwidth, and so on.
A noise reduction module that employs the feedforward mode may be referred to as a feedforward noise reduction module. As shown in fig. 1 (a), the earphone may receive an externally input audio signal, which, after being processed by the active noise reduction circuit, can be played through a loudspeaker for the earphone wearer. The feedforward noise reduction module may include the feedforward microphone and the active noise reduction circuit shown in fig. 1 (a). The feedforward microphone detects the noise signal in the environment around the earphone, and the active noise reduction circuit outputs a signal with the same frequency response as the ambient noise but opposite phase (an anti-phase signal) to realize active noise reduction. At the eardrum of the earphone wearer, the anti-phase signal cancels the noise signal, reducing the noise level heard by the human ear. Because of transmission delay, the noise signal detected at the feedforward microphone differs from the noise signal arriving at the eardrum, and the active noise reduction circuit needs to compensate for this difference.
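As an illustration only and not part of the patent, the following minimal Python sketch captures the feedforward idea described above: the microphone signal is passed through a compensation filter standing in for the mic-to-eardrum path difference, then inverted before playback. The filter coefficients comp_fir are hypothetical placeholders.

```python
import numpy as np
from scipy.signal import lfilter

def feedforward_anti_noise(mic_noise, comp_fir):
    """Generate an anti-phase signal from the feedforward microphone input.

    mic_noise : 1-D array of noise samples picked up by the feedforward mic.
    comp_fir  : FIR coefficients modelling the mic-to-eardrum path difference
                (hypothetical values; in practice obtained by measurement/tuning).
    """
    # Compensate for the difference between the mic position and the eardrum.
    estimated_noise_at_ear = lfilter(comp_fir, [1.0], mic_noise)
    # Invert the phase so that it cancels the noise at the eardrum.
    return -estimated_noise_at_ear

# Example: a toy compensation filter (pure placeholder) applied to white noise.
rng = np.random.default_rng(0)
noise = rng.standard_normal(48000)          # 1 s of noise at 48 kHz
comp_fir = np.array([0.9, 0.05, 0.02])      # assumed 3-tap compensation filter
anti_noise = feedforward_anti_noise(noise, comp_fir)
```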
A noise reduction module that employs a feedback noise reduction mode may be referred to as a feedback noise reduction module. Referring to fig. 1 (b), the main difference from the feedforward noise reduction module shown in fig. 1 (a) is that the feedback noise reduction module replaces the feedforward microphone with the feedback microphone. The feedback noise reduction module primarily uses a feedback microphone to detect the noise signal in the eardrum region and then forms a feedback path to minimize the noise level in that region.
A noise reduction module that employs a hybrid noise reduction mode may be referred to as a hybrid noise reduction module. Referring to fig. 1 (c), the hybrid noise reduction module includes both a feedforward microphone and a feedback microphone, and the noise reduction signal emitted by the earphone loudspeaker is determined by the two microphones together. The feedforward part of the hybrid module attenuates the high-frequency components of the ambient noise, while the feedback part reduces the low-frequency components. Working together, they make the noise reduction module considerably more flexible and the overall noise reduction highly effective.
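Purely as an illustrative sketch (the patent does not specify any particular crossover), the hybrid behaviour described above can be pictured as combining a high-frequency feedforward contribution with a low-frequency feedback contribution; the crossover frequency below is an assumed value.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def hybrid_anti_noise(ff_anti, fb_anti, fs=48000, crossover_hz=800):
    """Combine feedforward and feedback anti-noise contributions.

    ff_anti : anti-noise derived from the feedforward microphone (handles highs).
    fb_anti : anti-noise derived from the feedback microphone (handles lows).
    crossover_hz is an assumed split frequency, not a value from the patent.
    """
    hp = butter(4, crossover_hz, btype="highpass", fs=fs, output="sos")
    lp = butter(4, crossover_hz, btype="lowpass", fs=fs, output="sos")
    return sosfilt(hp, ff_anti) + sosfilt(lp, fb_anti)
```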
Active noise reduction earphones currently on the market implement omnidirectional noise reduction; that is, noise signals from all directions around the earphone are suppressed by its noise reduction module. The advantage of this mode is that the audio played by the earphone is almost free from interference by noise. In some situations, however, omnidirectional noise reduction degrades the user experience and may even create safety hazards.
For example, when the earphone wearer wishes to listen to music while talking with a friend on his or her left, the friend's speech is cancelled by the noise reduction module, so the wearer can hardly hear it without taking the earphone off. In this case, retaining the ambient sound signal on the left side of the earphone would improve the wearer's experience.
As another example, when the earphone wearer listens to music in public places, especially on the road, the sound of vehicles approaching from behind is cancelled by the noise reduction module, so the wearer cannot perceive potential risks nearby and traffic accidents become more likely. In such a case it is desirable to suppress the ambient sound within the wearer's field of view (in front of the wearer) while retaining the ambient sound outside it (especially behind the wearer), thereby improving safety when travelling with the earphone.
In order to solve the above problem, the structure of the earphone provided by the embodiment of the present application is described in detail below with reference to fig. 2.
As shown in fig. 2, the headset 2 may include an audio input module 21, an ambient sound collection module 22, an active noise reduction module 23, an ambient sound processing module 24, and an audio playback module 25.
The audio input module 21 (or audio input circuit) can be used to receive an audio signal to be played. The audio input module 21 may comprise, for example, one or more audio signal interfaces, through which one or more types of audio signals may be input into the headset 2. The audio signal received by the audio input module 21 may be, for example, a music signal or a speech signal.
The ambient sound capture module 22 may be configured to capture an ambient sound signal around the headset 2. The ambient sound collection module 22 may include, for example, a plurality of microphones, which may include, for example, a feed-forward microphone, a feedback microphone, a talk microphone, and other auxiliary microphones, among others. The multiple microphones may be located at different locations of the headset. For example, the plurality of microphones may be distributed in two earphone units of the left and right earphones. The plurality of microphones may form a Microphone Array (MA).
The ambient sound signal output by the ambient sound capture module 22 may be an omnidirectional ambient sound signal. That is, the ambient sound signal output by the ambient sound capturing module 22 may include sound signals in various directions around the headset 2. Since the environment sound signals collected by the feedforward microphone and the feedback microphone are generally considered as noise signals corresponding to the audio signals played by the earphone, the environment sound signals collected by the feedforward microphone and the feedback microphone may also be referred to as environment noise signals in some embodiments.
The active noise reduction module 23 may be configured to generate a noise reduction signal according to the ambient sound signal output by the ambient sound collection module 22. An input of the active noise reduction module 23 may be electrically connected to an output of the ambient sound collection module 22 to receive an ambient noise signal from the output of the ambient sound collection module 22. An output end of the active noise reduction module 23 may output a noise reduction signal corresponding to the ambient noise signal.
The noise reduction signal corresponding to the ambient noise signal may be used to reduce or cancel the ambient noise signal. For example, the noise reduction signal may be an inverse of the ambient noise signal, or a signal with the same frequency response as the ambient noise signal but opposite phase. In some embodiments, the noise reduction signal output by the active noise reduction module 23 may not yet be the final noise reduction signal and may still require further processing before it is used for noise reduction.
The active noise reduction module 23 may be any one of the feedforward, feedback, and hybrid noise reduction modules shown in fig. 1. The feedforward noise reduction module covers a wide noise reduction range, but its noise reduction effect is difficult to fine-tune. The feedback noise reduction module covers a relatively small range but allows fine adjustment of low-frequency signals. The hybrid module combines the strengths of both, but its power consumption and cost may be higher. An appropriate active noise reduction mode can therefore be chosen for the active noise reduction module 23 according to the characteristics of each mode and the actual requirements of users.
The ambient sound processing module 24 may be configured to obtain a first ambient sound signal with a designated sound pickup direction from the ambient sound signal output by the ambient sound pickup module 22. An input of the ambient sound processing module 24 may be electrically connected to an output of the ambient sound collection module 22 to receive the ambient sound signal output by the ambient sound collection module 22 from the ambient sound collection module 22. Furthermore, the output of the ambient sound processing module 24 is operable to output a first ambient sound signal.
In some embodiments, the first audio receiver (e.g., the left earphone) includes a first ambient sound processing module configured to obtain an ambient sound signal in the designated pickup direction (e.g., directly behind the wearer) from the ambient sound around the first audio receiver; similarly, the second audio receiver (e.g., the right earphone) includes a second ambient sound processing module configured to obtain an ambient sound signal in the designated pickup direction from the ambient sound around the second audio receiver.
In some embodiments, the ambient sound collection module 22 in the first earphone collects not only the ambient sound signal around the first earphone (e.g., the left earphone) but also the ambient sound signal around the second earphone (e.g., the right earphone). Similarly, the ambient sound collection module in the second earphone collects not only the ambient sound signal around the second earphone but also that around the first earphone. In this way, the ambient sound from the designated pickup direction can be fully retained, which both strengthens the active noise reduction effect and allows the earphone wearer to notice in time the approach or movement of a person or object behind them.
The first environment sound signal is a signal with a spatial sound effect. Spatial sound effects may also be referred to as stereo sound effects. Thus, the first ambient sound signal may also be referred to as a stereo signal or a multi-channel signal. The number of the channel signals included in the first ambient sound signal is not specifically limited in the embodiments of the present application. For example, the first ambience sound signal may be a binaural signal. As another example, the first ambience sound signal may be a five-channel signal with surround sound.
The "designation" in designating the sound pickup direction may be understood as a preset. The sound pickup direction may be specified when the headphone is shipped. Alternatively, the sound pick-up direction may be specified and/or adjusted by the wearer of the headset according to actual needs. For example, in listening to music, if the headphone wearer wishes to be able to hear sounds from behind in order to avoid traffic hazards, the headphone wearer may set the designated sound pickup direction to "behind". As another example, during listening to music, if the headset wearer wishes to talk to a friend on the left side of the person, the headset wearer may set the designated pickup direction to "left".
The specified sound pickup direction may refer to a range of directions. For example, if the sound pickup direction is designated as the rear direction, the designated sound pickup direction may refer to a range of 60 degrees to the left and right directly behind the headphone wearer, or a range of 90 degrees to the left and right directly behind the headphone wearer.
As mentioned above, the first ambient sound signal output by the ambient sound processing module 24 is a signal with spatial sound effect. The embodiment of the present application does not specifically limit the generation manner of the first ambient sound signal. As an example, the ambient sound processing module 24 may perform filtering processing on the left channel signal collected by the ambient sound collection module 22 to extract an ambient sound signal a specifying a sound pickup direction from the left channel signal. Further, the ambient sound processing module 24 may further perform filtering processing on the right channel signal collected by the ambient sound collecting module 22 to extract an ambient sound signal b specifying a sound pickup direction from the right channel signal. The environment sound signal a and the environment sound signal b can form a dual-channel signal with a space sound effect and a designated sound pickup direction.
As another example, the ambient sound processing module 24 may perform directional enhancement on the ambient sound signal output by the ambient sound collection module 22 based on a beamforming filter, so as to obtain a picked-up ambient sound signal. Since the beamforming filter weights and combines the signals of multiple channels into one channel, the picked-up ambient sound signal is a mono signal. The ambient sound processing module 24 may then perform sound effect rendering on the picked-up ambient sound signal to obtain the first ambient sound signal with a spatial sound effect. Such an implementation is described in detail below in conjunction with fig. 3 and is not elaborated here.
The audio playing module 25 may be configured to play a mixed signal of the audio signal, the noise reduction signal, and the first ambient sound signal. The audio playing module 25 may comprise, for example, a speaker. In some embodiments, the audio playing module 25 may further include a mixing module. For a detailed description of the implementation of the audio playing module 25, reference may be made to fig. 8.
The earphone with the active noise reduction function provided by the embodiments of the application does not apply omnidirectional noise reduction to the ambient sound signal, but retains the sound signal in a designated pickup direction. Retaining the sound signal in the designated pickup direction avoids the aforementioned problems associated with omnidirectional noise reduction. For example, in an outdoor scene, the designated pickup direction can be set to the direction behind the earphone wearer so that the wearer can hear vehicles approaching from behind, improving safety. As another example, in a scene of listening to music while chatting, the designated pickup direction can be set to the direction of the conversation partner, so that the earphone wearer can listen to music while chatting with friends. In addition, in the embodiments of the application, the ambient sound signal played by the earphone has a spatial sound effect, so the wearer can localize the direction of the sound and the ambient sound is heard more realistically.
One possible implementation of the ambient sound processing module 24 is given below in conjunction with fig. 3.
Referring to fig. 3, the ambient sound processing module 24 may include a directional sound pickup module 241, which may perform directional pickup based on a beamforming filter. The beamforming filter uses the beamforming principle to directionally enhance multiple spatial signal paths and finally forms one directionally enhanced signal of high quality. The embodiments of the application may use a beamforming filter to process the ambient sound signal output by the ambient sound collection module 22, so as to obtain the ambient sound signal in the designated pickup direction. For convenience of description, the signal output by the directional sound pickup module 241 is hereinafter referred to as the picked-up ambient sound signal.
Referring to fig. 4, the signal shown in fig. 4 is the ambient sound signal output by the ambient sound collection module 22, which may contain components at various frequencies (e.g., the 100 Hz, 500 Hz, 1000 Hz, and 5000 Hz shown in fig. 4). "Front" in fig. 4 denotes the direction in front of the wearer's body and "rear" the direction behind it. The beamforming filter can apply a smaller gain to the front-direction component at each frequency and a larger gain to the rear-direction component, so that ambient sound behind the earphone wearer is effectively enhanced while ambient sound in front of the wearer is suppressed, forming a directional pickup effect.
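As an informal illustration of this direction-dependent weighting (not taken from the patent), the sketch below applies a simple frequency-domain delay-and-sum beamformer: the microphone channels are phase-aligned toward a chosen look direction and averaged, which boosts sound from that direction and attenuates others. The two-microphone geometry, spacing, and sampling rate are assumed for the example.

```python
import numpy as np

def delay_and_sum(frames, mic_pos, look_dir, fs=48000, c=343.0):
    """Frequency-domain delay-and-sum beamformer.

    frames   : (num_mics, num_samples) time-domain microphone signals.
    mic_pos  : (num_mics, 3) microphone coordinates in metres (assumed geometry).
    look_dir : unit vector pointing from the array toward the designated pickup direction.
    Returns a single-channel (mono) signal enhanced toward look_dir.
    """
    num_mics, n = frames.shape
    spectra = np.fft.rfft(frames, axis=1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Time advance of each microphone relative to the array origin for a
    # plane wave arriving from look_dir.
    advances = mic_pos @ look_dir / c
    # Compensate the per-microphone advance, then average the aligned channels.
    steering = np.exp(-2j * np.pi * np.outer(advances, freqs))
    aligned = spectra * steering
    return np.fft.irfft(aligned.mean(axis=0), n=n)

# Example with an assumed two-microphone pair 2 cm apart, steered backwards (+X).
mics = np.array([[-0.01, 0.0, 0.0], [0.01, 0.0, 0.0]])
frames = np.random.default_rng(1).standard_normal((2, 4096))
mono_pickup = delay_and_sum(frames, mics, np.array([1.0, 0.0, 0.0]))
```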
As mentioned above, the ambient sound collection module 22 may collect the ambient sound signal around the earphone using a microphone array. As shown in fig. 5, the microphone array may include a left-ear microphone 221 and a right-ear microphone 222. Since the left-ear microphone 221 and the right-ear microphone 222 are located in the left and right earphone units respectively, the ambient sound signals they collect differ. Therefore, in some embodiments, the signals collected by the left-ear microphone 221 and the right-ear microphone 222 may first be synchronized (2411) before beamforming (2412) is performed in the directional sound pickup module 241. For example, the signals collected by the left-ear microphone 221 and the right-ear microphone 222 may be exchanged in a wired or wireless manner, and the synchronization may then be performed on each side. According to principles of auditory psychology, the time delay between the signals collected by the left-ear microphone 221 and the right-ear microphone 222 needs to be kept within 20 ms, otherwise the subsequent mixing effect is affected.
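As a rough, non-authoritative sketch of that synchronization check (a real product would more likely rely on timestamps from the wireless link), the inter-ear delay can be estimated by cross-correlation and compared against the 20 ms budget mentioned above:

```python
import numpy as np

def inter_ear_delay_ms(left, right, fs=48000):
    """Estimate the delay between the left- and right-ear microphone signals
    by cross-correlation (illustrative only)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return 1000.0 * lag / fs

def within_sync_budget(left, right, fs=48000, budget_ms=20.0):
    # The 20 ms budget corresponds to the limit mentioned in the text.
    return abs(inter_ear_delay_ms(left, right, fs)) <= budget_ms
```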
Referring back to fig. 3, the directional sound pickup module 241 converts the ambient sound signal output by the ambient sound collection module 22 into the picked-up ambient sound signal based on the beamforming filter. Because the beamforming filter combines multiple signal paths into one, the picked-up ambient sound signal is a single-channel signal. To improve the listening effect of the ambient sound, the sound effect processing module 242 may be used to perform sound effect processing (or sound effect rendering) on the picked-up ambient sound signal and convert it into the first ambient sound signal with a spatial sound effect, thereby improving the listening experience of the earphone wearer.
In some embodiments, as shown in FIG. 6, the sound-effect processing module 242 may be implemented based on Head Related Transfer Function (HRTF) filters 2421.
The HRTF filter 2421 may be understood as a sound effect rendering algorithm. Specifically, a person has only two ears but can localize sound from three-dimensional space, thanks to the human auditory system's ability to analyze sound signals. The HRTF filter 2421 is a digitized representation of this analysis capability. That is, the transmission of a sound source signal from any point in space to the human ear (in front of the eardrum) can be described by an HRTF filter 2421: applying the HRTF filter 2421 to the sound source signal yields the sound signal in front of the eardrums of the two ears. The HRTF filter 2421 can be regarded as a black box. Once the HRTF filter 2421 describing the transfer from a sound source at a certain spatial orientation to the sound in front of the eardrums has been obtained in some way, the sound source signal of that orientation can be restored based on the HRTF filter 2421, thereby producing a spatial sound effect.
The HRTF filter 2421 may be fitted through a mathematical function. Alternatively, the HRTF filter 2421 may also be measured experimentally. As an example, a library of HRTF filters collected from laboratory tests may be stored in the headphones for recall by the headphones during actual operation.
The following illustrates a process of implementing a spatial sound effect based on HRTF filters. Referring to fig. 6, the azimuth information of the sound source signal may be acquired first. Then, an HRTF filter (or parameters of an HRTF filter) corresponding to the orientation of the sound source signal is selected from the HRTF filters 2421. Next, the convolution module 2422 may be utilized to perform convolution operation on the sound pickup environment sound signal output by the directional sound pickup module 241 (see fig. 3) and the HRTF filter corresponding to the direction of the sound source signal, so as to obtain the first environment sound signal with spatial sound effect.
Illustratively, assume h_l(n) and h_r(n) are the impulse responses of the left-ear and right-ear HRTF filters for a specific direction (or angle), and x(n) is the picked-up ambient sound signal. The output signals after HRTF filtering are
y_l(n) = x(n) * h_l(n),  y_r(n) = x(n) * h_r(n),
where * denotes convolution. In the above formulas, y_l(n) and y_r(n) are the HRTF impulse-response outputs for the left and right ears, respectively; once y_l(n) and y_r(n) are obtained, a two-channel sound signal with a spatial sound effect has effectively been obtained.
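For illustration only (the HRIRs below are random placeholders rather than entries from a real HRTF database), the binaural rendering described by these formulas can be sketched as:

```python
import numpy as np

def render_binaural(x, h_left, h_right):
    """Convolve a mono picked-up ambient signal with left/right HRIRs
    (the time-domain counterparts of the HRTF filters) to obtain a
    two-channel signal with a spatial sound effect."""
    y_left = np.convolve(x, h_left)
    y_right = np.convolve(x, h_right)
    return np.stack([y_left, y_right])

# Placeholder HRIRs; a real system would look them up by source azimuth/elevation.
rng = np.random.default_rng(2)
h_l, h_r = rng.standard_normal(256), rng.standard_normal(256)
mono_pickup = rng.standard_normal(4800)
binaural = render_binaural(mono_pickup, h_l, h_r)   # shape (2, 5055)
```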
Before the sound effect processing module 242 performs its processing, the azimuth information of the sound source signal (which may be direction information or angle information of the sound source) may be determined; the picked-up ambient sound signal output by the directional sound pickup module 241 is then rendered into the first ambient sound signal, so that the first ambient sound signal carries a spatial sound effect from the azimuth of the sound source. One possible way of determining the azimuth information of the sound source signal is given below with reference to fig. 7.
As shown in fig. 7, in some embodiments, the ambient sound processing module 24 may further include a sound source orientation estimation module 243, which may be configured to acquire the azimuth information of the sound source signal in the designated pickup direction. As mentioned above, the designated pickup direction may be a range of directions, so the sound source orientation estimation module 243 may obtain the precise orientation of each sound source signal within that range.
In some embodiments, the sound source orientation estimation module 243 may estimate the Direction of Arrival (DOA) of the sound source signal. Taking the ambient sound collection module 22 implemented as a microphone array as an example, the signals collected by microphones at different positions in the array can be used to obtain, based on the DOA principle, the azimuth information of the sound source signal in the designated pickup direction.
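As a rough illustration of one common DOA technique (GCC-PHAT between a microphone pair; the patent does not prescribe a specific algorithm), the inter-microphone time difference can be converted into an arrival angle as follows. The microphone spacing is an assumed value, not one from the patent.

```python
import numpy as np

def gcc_phat_tdoa(sig_a, sig_b, fs=48000):
    """Time difference of arrival between two microphones via GCC-PHAT."""
    n = len(sig_a) + len(sig_b)
    spec = np.fft.rfft(sig_a, n) * np.conj(np.fft.rfft(sig_b, n))
    spec /= np.abs(spec) + 1e-12          # phase transform weighting
    cc = np.fft.irfft(spec, n)
    cc = np.concatenate((cc[-(n // 2):], cc[: n // 2 + 1]))
    return (np.argmax(cc) - n // 2) / fs

def doa_angle_deg(tdoa, mic_distance=0.16, c=343.0):
    """Convert a TDOA into a broadside arrival angle for a two-mic pair.
    mic_distance (metres) is an assumed head-width spacing."""
    cos_theta = np.clip(tdoa * c / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```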
In some embodiments, as shown in fig. 8, audio playback module 25 may include a mixing module 251 and speakers 252. The mixing module 251 may be configured to mix the audio signal, the noise reduction signal, and the first environment sound signal, so as to generate a path of signal to drive the speaker 252 to sound.
In some embodiments, since the low-frequency part of the audio signal is attenuated during mixing, as shown in fig. 8, an EQ (equalizer) adjustment module 253 may be added to the transmission path of the audio signal to compensate for the low-frequency loss caused by mixing.
In some embodiments, as shown in fig. 8, before mixing, the noise reduction signal output by the noise reduction module may be phase-inverted by the inversion module 254 so that it can cancel the ambient noise.
In some embodiments, to allow the earphone wearer to flexibly balance the listening experience of the audio signal (e.g., a music signal) against the sensitivity to rear sound sources, as shown in fig. 8, a gain adjustment module 255 may be provided in the audio playing module 25 to adjust the strength of the sound source signal in the designated pickup direction. Taking the rear direction as the designated pickup direction as an example, the strength of the rear sound signal (such as the sound of a vehicle) can be adjusted through the gain adjustment module 255, thereby controlling the safety level.
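The following is an informal sketch of the playback mixing path of fig. 8, with the EQ filter, boost amount, and crossover frequency all assumed rather than specified by the patent; all inputs are taken to be equal-length mono arrays for simplicity.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def mix_for_playback(audio, anti_noise, ambient_spatial, fs=48000,
                     bass_boost_hz=200, ambient_gain=1.0):
    """Sketch of the playback mixing path (parameter values are assumptions).

    audio           : audio signal to be played.
    anti_noise      : noise reduction signal, already phase-inverted.
    ambient_spatial : first ambient sound signal (gain-adjustable by the wearer).
    """
    # EQ compensation: gently boost the low band lost in mixing (assumed filter).
    sos = butter(2, bass_boost_hz, btype="lowpass", fs=fs, output="sos")
    audio_eq = audio + 0.3 * sosfilt(sos, audio)
    # Gain adjustment of the retained ambient sound (e.g., the "safety level").
    ambient = ambient_gain * ambient_spatial
    mix = audio_eq + anti_noise + ambient
    # Prevent clipping before driving the speaker.
    return np.clip(mix, -1.0, 1.0)
```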
As mentioned previously, the sound effect processing module 242 may be implemented based on HRTF filters. Since a generic HRTF does not match each user's individual anatomy, the effect of a generic HRTF filter bank is usually mediocre. Therefore, in some embodiments, personalized HRTF filters can be customized for the earphone wearer. The personalized customization of HRTF filters is illustrated in detail below with reference to fig. 9 and 10.
Referring to fig. 9 and 10, an ear image of the earphone wearer may first be acquired. Feature detection (landmark detection) is then performed on the ear image to extract the wearer's ear features. Next, HRTF filters that (roughly) match the ear features can be selected from an HRTF filter database. The parameters of the selected HRTF filter can then be corrected or fine-tuned based on the ear dimensions and a head-and-torso (HAT) model of the earphone wearer, resulting in a personalized HRTF filter. The processes described in fig. 9 and 10 may be performed when the earphone is not in use. For example, after purchasing the earphone, the wearer may first carry out the above process with the application software associated with the earphone to customize the personalized HRTF filters. Once the personalized HRTF filter has been obtained, the earphone can use it to achieve accurate sound effect rendering when playing audio signals.
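Purely as a schematic sketch of the matching and fine-tuning steps (the feature format, database layout, and adjustment rule below are assumptions, not the patent's method):

```python
import numpy as np

def select_hrtf(ear_features, hrtf_db):
    """Pick the HRTF entry whose stored ear features are closest to the
    measured ones (nearest neighbour in feature space).

    ear_features : 1-D numpy vector extracted from the ear image (assumed format).
    hrtf_db      : list of dicts like {"features": vec, "hrir_l": ..., "hrir_r": ...}.
    """
    distances = [np.linalg.norm(ear_features - entry["features"]) for entry in hrtf_db]
    best = hrtf_db[int(np.argmin(distances))]
    return best["hrir_l"], best["hrir_r"]

def fine_tune_delay(hrir, ear_size_mm, reference_mm=65.0, fs=48000, c=343.0):
    """Toy correction: shift the HRIR by the extra propagation delay implied by
    a larger or smaller ear dimension (a stand-in for HAT-model tuning)."""
    extra_samples = int(round((ear_size_mm - reference_mm) * 1e-3 / c * fs))
    return np.roll(hrir, extra_samples)
```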
The embodiment of the present application further provides a user equipment, which includes a wireless communication unit and an earphone, where the earphone may be any one of the earphones described above.
The apparatus embodiment of the present application is described in detail above with reference to fig. 1 to 10, and the method embodiment of the present application is described in detail below with reference to fig. 11. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the apparatus embodiments in sections which are not described in detail.
Fig. 11 is a schematic flowchart of a method for processing a signal according to an embodiment of the present application. The method of fig. 11 may be applied to the aforementioned headphones. The method of fig. 11 may include steps S1110 to S1150.
In step S1110, an audio signal to be played is received.
In step S1120, an ambient sound signal around the headset is collected.
In step S1130, a noise reduction signal is generated from the ambient sound signal.
In step S1140, a first ambient sound signal specifying a sound pickup direction is acquired from the ambient sound signal. The first environment sound signal is a multi-channel signal with a spatial sound effect.
In step S1150, a mixed sound signal of the audio signal, the noise reduction signal, and the first ambient sound signal is played.
Optionally, step S1140 may include: obtaining a picked-up ambient sound signal in the designated pickup direction from the ambient sound signal, the picked-up ambient sound signal being a mono signal; and performing sound effect processing on the picked-up ambient sound signal to generate the first ambient sound signal with a spatial sound effect.
Optionally, the method of fig. 11 may further include: acquiring azimuth information of a sound source signal in the designated pickup direction. In that case, step S1140 may include: performing sound effect processing on the picked-up ambient sound signal according to the azimuth information of the sound source signal to obtain the first ambient sound signal, so that the first ambient sound signal has a spatial sound effect from the azimuth of the sound source signal.
Optionally, performing sound effect processing on the picked-up ambient sound signal according to the azimuth information of the sound source signal to obtain the first ambient sound signal may include: determining an HRTF filter corresponding to the azimuth of the sound source signal according to the azimuth information; and convolving the HRTF filter with the picked-up ambient sound signal to obtain the first ambient sound signal.
Optionally, before receiving the audio signal to be played, the method of fig. 11 may further include: acquiring an ear image of the earphone wearer; extracting ear features from the ear image; and selecting HRTF filters that match the ear features from an HRTF filter database.
Optionally, the ambient sound signal is collected by a plurality of microphones, and acquiring the azimuth information of the sound source signal in the designated pickup direction may include: obtaining the azimuth information from the arrival directions of the sound source signals collected by the plurality of microphones.
Optionally, the method of fig. 11 may further include: the intensity of the first ambient sound signal is adjusted in accordance with instructions from the wearer of the headset. Optionally, the pickup direction is designated to be behind the wearer of the headset.
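To tie steps S1110 to S1150 together, the following is an informal, self-contained end-to-end sketch; the beamforming weights, HRIRs, and the simplistic anti-phase step are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def process_frame(audio, mic_frames, weights, hrir_l, hrir_r, ambient_gain=1.0):
    """One processing frame loosely following steps S1110-S1150 (illustrative only).

    audio      : (n,) audio signal to be played (S1110).
    mic_frames : (num_mics, n) ambient sound collected around the earphone (S1120).
    weights    : (num_mics,) real beamforming weights for the designated pickup
                 direction (assumed precomputed).
    hrir_l/r   : HRIRs for the estimated sound source azimuth.
    Returns a (2, n) binaural frame for playback.
    """
    n = mic_frames.shape[1]
    ambient_mix = mic_frames.mean(axis=0)
    anti_noise = -ambient_mix                        # S1130: simplistic anti-phase signal
    pickup = weights @ mic_frames                    # S1140: directional (mono) pickup
    first_ambient = np.stack([np.convolve(pickup, hrir_l)[:n],
                              np.convolve(pickup, hrir_r)[:n]])   # spatial rendering
    # S1150: mix the audio, the noise reduction signal and the retained ambient sound.
    return np.clip(audio + anti_noise + ambient_gain * first_ambient, -1.0, 1.0)

# Toy example with random placeholder data.
rng = np.random.default_rng(3)
out = process_frame(audio=rng.standard_normal(4096),
                    mic_frames=rng.standard_normal((4, 4096)),
                    weights=np.full(4, 0.25),
                    hrir_l=rng.standard_normal(128),
                    hrir_r=rng.standard_normal(128))
```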
Fig. 12A-12D illustrate headphones (including wireless headphones, headsets) provided by some embodiments of the present application, in which a particular configuration of microphone arrays is employed.
As shown in fig. 12A, in one embodiment, a microphone array of four microphones is provided across the two earphones: microphones M1 and M2 are disposed in the left earphone, and microphones M3 and M4 in the right earphone. Fig. 12A shows three-dimensional coordinate axes, with the X-axis pointing directly behind the wearer's body, the Y-axis pointing to the wearer's right, and the Z-axis pointing vertically upward.
With continued reference to fig. 12A, the first and second microphones M1, M2 are arranged along a first direction that is generally parallel to the Z-axis, and the third and fourth microphones M3, M4 are arranged along a second direction that is generally parallel to the X-axis. An angle close to 90 degrees therefore exists between the arrangement direction of the first and second microphones and that of the third and fourth microphones. It should be understood that the included angle is not limited to this 90-degree example. In some embodiments, the first direction forms a first, acute angle with the Z-axis; in other embodiments, the second direction forms a second, acute angle with the X-axis, and the second angle may differ from the first angle.
More specifically, taking the center of the earphone wearer's head as the origin, the first and second microphones M1 and M2 are placed (in the first earphone) at positions with three-dimensional coordinates (0, -9, 2) and (0, -9, -2), respectively, and the third and fourth microphones M3 and M4 are placed (in the second earphone) at positions with coordinates (2, 9, 0) and (-2, 9, 0), respectively. For ease of explanation, the following takes the first earphone as the left earphone and the second earphone as the right earphone, and assumes that the sound source the wearer needs to attend to comes from directly behind.
When the sound source is directly behind, the sound source direction is defined as (α = 90°, θ = 0°), where α denotes the horizontal angle of the sound source direction (its angle with the Y-Z plane) and θ denotes the pitch angle (its angle with the X-Z plane). The microphone array collects the sound signal directly behind the earphones; when the sound source moves, its orientation relative to the X-axis (pointing directly behind the wearer's body) and the Y-axis (pointing to the wearer's right) changes, so α and θ change with the movement of the sound source and the beam of the microphone array is re-pointed relative to the coordinate system accordingly.
According to an improved embodiment, a beam directivity index and a signal integrity index are evaluated for the array. In order to keep the sound source signal perceivable to the wearer, the total energy of the processed ambient sound signal is minimized under a distortionless constraint on the target direction:
min_{h(w)} P(w) = h^H(w) [p·I_M + T_{α,θ}(w)] h(w),  subject to h^H(w) d(w, α, θ) = 1,
which yields the weighting vector
h(w) = [p·I_M + T_{α,θ}(w)]^{-1} d(w, α, θ) / ( d^H(w, α, θ) [p·I_M + T_{α,θ}(w)]^{-1} d(w, α, θ) )      (1)
wherein w is the signal frequency, α is the horizontal angle, θ is the pitch angle, h(w) is the weighting matrix of the ambient sound processing module, d(w, α, θ) is the relative-delay (steering) vector of the microphone array, T_{α,θ}(w) is the pseudo-correlation matrix between the microphones, p is a regularization constant, I_M is the M×M identity matrix, and M is the number of microphones in the microphone array. In this way, beamforming toward the desired target direction is obtained through a trade-off between signal integrity and beam directivity, achieving the technical effect that when an object passes behind the wearer, the beam direction follows the object.
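The weight computation of formula (1) can be sketched numerically as follows; this is an illustrative implementation that assumes a free-field steering vector, a diffuse-field pseudo-correlation matrix, and centimetre units for the fig. 12A coordinates, none of which are specified by the patent.

```python
import numpy as np

def steering_vector(freq, mic_pos, azimuth_deg, pitch_deg, c=343.0):
    """Relative-delay vector d(w, alpha, theta) for an assumed plane-wave model."""
    a, t = np.radians(azimuth_deg), np.radians(pitch_deg)
    u = np.array([np.cos(t) * np.sin(a), np.cos(t) * np.cos(a), np.sin(t)])
    delays = mic_pos @ u / c
    return np.exp(-2j * np.pi * freq * delays)

def diffuse_pseudo_correlation(freq, mic_pos, c=343.0):
    """Pseudo-correlation matrix T(w) for an assumed isotropic (diffuse) noise field."""
    dist = np.linalg.norm(mic_pos[:, None, :] - mic_pos[None, :, :], axis=-1)
    return np.sinc(2 * freq * dist / c)

def beamformer_weights(freq, mic_pos, azimuth_deg, pitch_deg, p=1e-2):
    """Weights per formula (1): minimize h^H (p*I + T) h subject to h^H d = 1."""
    d = steering_vector(freq, mic_pos, azimuth_deg, pitch_deg)
    T = diffuse_pseudo_correlation(freq, mic_pos)
    R = p * np.eye(len(mic_pos)) + T
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Example: four microphones roughly at the fig. 12A positions (assumed cm -> m).
mics = 0.01 * np.array([[0, -9, 2], [0, -9, -2], [2, 9, 0], [-2, 9, 0]])
h = beamformer_weights(1000.0, mics, azimuth_deg=90.0, pitch_deg=0.0)
```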
As shown in fig. 12B, according to one embodiment of the present application, the first earphone 31 includes a body portion and an earphone handle having a length L, a width W, and a thickness T. The first and second microphones M1 and M2 are disposed in the first earphone 31. The left side of fig. 12B illustrates a rear view of the first earpiece, and the right side illustrates a side view of the first earpiece when worn on the left ear.
The first and second microphones M1 and M2 are arranged along the minor-axis direction of the main body (an ellipse or near-ellipse). When the earphone is worn on the left ear, the minor axis forms an acute angle with the Z-axis (the vertically upward direction), which may be, for example, 0-30 degrees. In another embodiment, the first and second microphones M1, M2 in the first earphone 31 may be arranged along the length direction L of the earphone handle, which is substantially parallel to the Z-axis when the earphone is worn in the left ear.
As shown in fig. 12C, according to one embodiment of the present application, the second earpiece 32 includes a body portion and a stem, the stem having a length L, a width W, and a thickness T. The third and fourth microphones M3 and M4 are disposed in the second earphone 32. The left side view of fig. 12C shows a side view of the second earphone when worn on the right ear, and the right side view shows a rear view of the second earphone.
The third and fourth microphones M3 and M4 are arranged substantially along the major-axis direction of the main body (an ellipse or near-ellipse). When the earphone is worn on the right ear, the major axis forms an acute angle with the X-axis (pointing directly behind the wearer's body), which may be, for example, 0-30 degrees. In another embodiment, the third and fourth microphones M3, M4 in the second earphone 32 may be arranged along the width direction W of the earphone stem, the width direction being substantially parallel to the X-axis when the earphone is worn in the right ear.
In the case where the first earphone 31 is arranged as shown in fig. 12B and the second earphone 32 is arranged as shown in fig. 12C, there is an angle between the arrangement direction of the first and second microphones M1 and M2 and the arrangement direction of the third and fourth microphones M3 and M4, and preferably, the angle is close to 90 degrees.
Because of the angle between the arrangement direction of the first and second microphones M1 and M2 and that of the third and fourth microphones M3 and M4, the weighting matrix of the ambient sound processing module can be determined according to formula (1) above, including the weighting matrix of the first ambient sound processing module in the first earphone and that of the second ambient sound processing module in the second earphone, so that the background sound in the target direction (directly behind the wearer's body) can be retained while omnidirectional background noise is eliminated.
As shown in fig. 12D, according to an embodiment of the present application, a headset includes a left earphone 33, a right earphone 34, and a headband 35 connecting and holding the two earphones. The left earphone 33 is provided with a first microphone and a second microphone arranged along a first direction, and the right earphone 34 is provided with a third microphone and a fourth microphone arranged along a second direction. Specifically, the first direction may form a first angle with the central axis of the headset (see fig. 12D), and the second direction may form a second angle with the front direction of the headset, both angles being acute. In one embodiment, after the user puts on the headset, the first direction is substantially parallel to the Z-axis and the second direction points substantially directly behind the wearer. With this arrangement, the arrangement direction of the first and second microphones and that of the third and fourth microphones form an included angle close to 90 degrees.
More specifically, the left earphone includes a first ambient sound collection module, composed of the first microphone and the second microphone, and a first noise reduction module, which may generate a first noise reduction signal from the ambient sound signal around the left earphone and reduce noise in the received audio signal. In addition, the first ambient sound collection module can further obtain the ambient sound signal around the right earphone, which improves the effect of eliminating omnidirectional background noise while retaining the background sound directly behind the wearer's body.
It should be understood that although the first earphone is described above as the left earphone and the second earphone is described as the right earphone, in other embodiments of the present application, the first audio receiver (the first earphone) may be the right earphone and the second audio receiver (the second earphone) may be the left earphone. In other words, the first and second microphones may be arranged along the X-axis direction, and the third and fourth microphones may be arranged along the Z-axis direction, as long as there is an angle between the two arrangement directions.
With reference to figs. 12A to 12D, the specifically arranged microphone array in the above embodiments has a spatial filtering function. If an ambient sound processing module equipped with such a microphone array is combined with ANC, the sound in the target direction can be retained while omnidirectional background noise is eliminated, creating a new user experience: for example, while running, the ambient sound behind the user can be retained, improving the safety of wearing the earphone. Meanwhile, the ambient sound processing module optimizes the beamforming computation according to a minimum-energy criterion under signal integrity and signal directivity constraints, finally obtaining optimal weighting matrices for different frequencies. These matrices strongly suppress ambient noise from other directions while ensuring that the target-direction sound is not lost, and the directivity constraint narrows the beam width of the ambient sound processing module, improving directional selectivity.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply any order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
It should be understood that units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product may include one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another over a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) network. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), and so on.
The above description covers only specific embodiments of the present application; the scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed in the present application shall fall within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the claims.

Claims (35)

1. A first earpiece, comprising:
an audio input module configured to receive an audio signal to be played;
an ambient sound capture module comprising a first microphone and a second microphone configured to capture an ambient sound signal around the first earpiece;
an active noise reduction module configured to generate a noise reduction signal from the ambient sound signal;
an ambient sound processing module configured to obtain a first ambient sound signal of a specified sound pickup direction from the ambient sound signal; and
an audio playing module configured to play a mixed signal of the audio signal, the noise reduction signal and the first ambient sound signal;
wherein the first microphone and the second microphone are configured to be arranged along a first direction that forms a first angle, the first angle being an acute angle, with a length direction of a handle of the first earpiece.
2. The first earpiece of claim 1, wherein the ambient sound processing module comprises:
a directional sound pickup module configured to obtain a picked-up ambient sound signal of the specified sound pickup direction from the ambient sound signal, wherein the picked-up ambient sound signal is a monaural signal; and
a sound effect processing module configured to perform sound effect processing on the picked-up ambient sound signal to generate the first ambient sound signal with a spatial sound effect.
3. The first earpiece of claim 2, wherein the ambient sound processing module further comprises:
a sound source orientation estimation module configured to:
acquiring azimuth information of a sound source signal in the specified sound pickup direction;
the sound effect processing module is configured to:
performing sound effect processing on the picked-up ambient sound signal according to the azimuth information of the sound source signal to obtain the first ambient sound signal, wherein the first ambient sound signal has a spatial sound effect originating from the azimuth of the sound source signal.
4. The first earpiece of claim 3, wherein the sound effect processing module is configured to:
determining an HRTF filter corresponding to the azimuth of the sound source signal according to the azimuth information of the sound source signal;
and convolving the HRTF filter with the picked-up ambient sound signal to obtain the first ambient sound signal.
5. The first earpiece of claim 1, wherein the ambient sound capture module is further configured to capture an ambient sound signal around a second earpiece used with the first earpiece.
6. The first earpiece of claim 1, wherein the first direction comprises a long axis direction of a body portion of the first earpiece.
7. The first earpiece of claim 1, further comprising:
a gain adjustment module configured to adjust the level of the first ambient sound signal according to an instruction of a wearer of the first earpiece.
8. The first earpiece of any of claims 1-7, wherein the specified sound pickup direction comprises the rear of a wearer of the first earpiece.
9. A second earpiece, comprising:
an audio input module configured to receive an audio signal to be played;
an ambient sound capture module comprising a third microphone and a fourth microphone configured to capture an ambient sound signal around the second earpiece;
an active noise reduction module configured to generate a noise reduction signal from the ambient sound signal;
an ambient sound processing module configured to obtain a first ambient sound signal of a specified sound pickup direction from the ambient sound signal; and
an audio playing module configured to play a mixed signal of the audio signal, the noise reduction signal and the first ambient sound signal;
wherein the third microphone and the fourth microphone are configured to be arranged along a second direction that forms a second angle, the second angle being an acute angle, with a width direction of a handle of the second earpiece.
10. The second earpiece of claim 9, wherein the ambient sound capture module is further configured to capture an ambient sound signal around a first earpiece used with the second earpiece.
11. The second earpiece of claim 9, wherein the second direction comprises a minor axis direction of a body portion of the second earpiece.
12. A wireless headset comprising a first earpiece according to any one of claims 1-8 and a second earpiece according to any one of claims 9-11.
13. A wireless headset, comprising:
a first audio receiver comprising a first ambient sound capture module and a first noise reduction module, wherein the first ambient sound capture module comprises a first microphone and a second microphone configured to capture an ambient sound signal surrounding the first audio receiver, the first noise reduction module configured to generate a first noise reduction signal from the ambient sound signal surrounding the first audio receiver to reduce noise in an audio signal received by the first audio receiver;
a second audio receiver comprising a second ambient sound capture module comprising a third microphone and a fourth microphone configured to capture an ambient sound signal around the second audio receiver, and a second noise reduction module configured to generate a second noise reduction signal from the ambient sound signal around the second audio receiver to reduce noise of the audio signal received by the second audio receiver;
wherein the first and second microphones are configured to be arranged along a first direction that forms a first angle, the first angle being an acute angle, with a length direction of the handle of the first audio receiver, and the third and fourth microphones are configured to be arranged along a second direction that forms a second angle, the second angle being an acute angle, with a width direction of the handle of the second audio receiver.
14. The wireless headset of claim 13, wherein the first direction is a long axis direction of the body portion of the first audio receiver and the second direction is a short axis direction of the body portion of the second audio receiver.
15. The wireless headset of claim 13, wherein the first direction is orthogonal to the second direction.
16. The wireless headset of claim 13, wherein the first ambient sound capture module is further configured to obtain an ambient sound signal around the second audio receiver, and wherein the second ambient sound capture module is further configured to obtain an ambient sound signal around the first audio receiver.
17. The wireless headset of claim 13, wherein the first audio receiver further comprises a first ambient sound processing module configured to obtain an ambient sound signal of a specified pickup direction from the ambient sound signal, and wherein the second audio receiver further comprises a second ambient sound processing module configured to obtain an ambient sound signal of the specified pickup direction from the ambient sound signal.
18. The wireless headset of claim 13, wherein the first audio receiver further comprises a first HRTF filter that matches characteristics of a left ear of a wearer of the wireless headset, and wherein the second audio receiver further comprises a second HRTF filter that matches characteristics of a right ear of the wearer of the wireless headset.
19. A wireless headset according to any of claims 13-18, wherein the specified pick-up direction comprises the rear of a wearer of the wireless headset.
20. A headset, comprising:
a first audio receiver comprising a first ambient sound capture module and a first noise reduction module, wherein the first ambient sound capture module comprises a first microphone and a second microphone configured to capture an ambient sound signal around the first audio receiver, the first noise reduction module configured to generate a first noise reduction signal from the ambient sound signal around the first audio receiver to reduce noise in an audio signal received by the first audio receiver;
a second audio receiver comprising a second ambient sound capture module and a second noise reduction module, wherein the second ambient sound capture module comprises a third microphone and a fourth microphone configured to capture an ambient sound signal around the second audio receiver, the second noise reduction module configured to generate a second noise reduction signal from the ambient sound signal around the second audio receiver to reduce noise in the audio signal received by the second audio receiver;
wherein the first and second microphones are configured to be arranged along a first direction, and the third and fourth microphones are configured to be arranged along a second direction, wherein the first direction forms a first angle, the first angle being an acute angle, with a central axis of the headset, and the second direction forms a second angle, the second angle being an acute angle, with a front direction of the headset.
21. The headset of claim 20, wherein the first direction is orthogonal to the second direction.
22. The headset of claim 20, wherein the first ambient sound capture module is further configured to obtain an ambient sound signal around the second audio receiver, and the second ambient sound capture module is further configured to obtain an ambient sound signal around the first audio receiver.
23. The headset of claim 20, wherein the first audio receiver further comprises a first ambient sound processing module configured to obtain an ambient sound signal of a specified pickup direction from the ambient sound signal, and wherein the second audio receiver further comprises a second ambient sound processing module configured to obtain the ambient sound signal of the specified pickup direction from the ambient sound signal.
24. A user equipment comprising a wireless headset according to any of claims 13-19 or a headset according to any of claims 20-23.
25. An audio system comprising an audio source, and a wireless headset according to any of claims 13-19 or a headset according to any of claims 20-23.
26. A method of processing an audio signal, applied to a first earphone, the method comprising the following steps:
S1, receiving an input audio signal;
S2, collecting an ambient sound signal around the first earphone by using a first microphone and a second microphone in the first earphone, wherein the first and second microphones are configured to be arranged along a first direction that is substantially parallel to the direction directly behind a wearer or substantially parallel to a vertical direction of the first earphone;
S3, generating a first noise reduction signal based on the ambient sound signal;
S4, acquiring a first ambient sound signal of a specified sound pickup direction from the ambient sound signal; and
S5, outputting an audio signal to be played based on the input audio signal, the first noise reduction signal and the first ambient sound signal.
27. The method according to claim 26, wherein the step S4 comprises:
acquiring a picked-up ambient sound signal of the specified sound pickup direction from the ambient sound signal, wherein the picked-up ambient sound signal is a monaural signal; and
performing sound effect processing on the picked-up ambient sound signal to generate the first ambient sound signal with a spatial sound effect.
28. The method of claim 27, further comprising:
acquiring azimuth information of a sound source signal in the specified sound pickup direction; and
performing sound effect processing on the picked-up ambient sound signal according to the azimuth information of the sound source signal to obtain the first ambient sound signal, wherein the first ambient sound signal has a spatial sound effect originating from the azimuth of the sound source signal.
29. The method of claim 28, further comprising:
determining an HRTF filter corresponding to the azimuth of the sound source signal according to the azimuth information of the sound source signal;
and convolving the HRTF filter with the picked-up ambient sound signal to obtain the first ambient sound signal.
30. The method of claim 29, further comprising:
acquiring an ear image of a wearer of the first earphone;
extracting ear features from the ear image;
selecting the HRTF filter that matches the ear features.
31. The method of claim 26, further comprising:
determining a weighting matrix of an ambient sound processing module based on minimizing an energy of the first ambient sound signal, on the premise that a signal integrity indicator of the ambient sound processing module equals the number of microphones in the first earphone and a signal directivity indicator of the ambient sound processing module equals the square of that number, wherein the ambient sound processing module is configured to derive the first ambient sound signal from the ambient sound signal.
32. The method according to any one of claims 26-31, further comprising:
adjusting the level of the first ambient sound signal according to an instruction of a wearer of the first earphone.
33. A method of processing an audio signal for use in a wireless stereo headset, the method comprising:
S1, respectively receiving a first audio signal and a second audio signal by using a first earphone and a second earphone;
S2, collecting an ambient sound signal around the first earphone by using a first microphone and a second microphone in the first earphone, and collecting an ambient sound signal around the second earphone by using a third microphone and a fourth microphone in the second earphone, wherein the first and second microphones are configured to be arranged along a first direction and the third and fourth microphones are configured to be arranged along a second direction, the first direction being substantially parallel to the direction directly behind a wearer and the second direction being substantially parallel to a vertical direction of the second earphone;
S3, respectively generating a first noise reduction signal and a second noise reduction signal based on the ambient sound signal around the first earphone and the ambient sound signal around the second earphone;
S4, acquiring a first ambient sound signal of a specified sound pickup direction from the ambient sound signal around the first earphone, and acquiring a second ambient sound signal of the specified sound pickup direction from the ambient sound signal around the second earphone; and
S5, generating a mixed signal of the first audio signal, the first noise reduction signal and the first ambient sound signal, and generating a mixed signal of the second audio signal, the second noise reduction signal and the second ambient sound signal.
34. The method of claim 33, further comprising:
acquiring azimuth information of a sound source signal in the specified sound pickup direction;
determining a first angular change of the sound source signal with respect to the direction directly behind the wearer and a second angular change with respect to the vertical direction of the second earphone; and
determining an attenuation matrix for a microphone array based on the first angular change and the second angular change, wherein the microphone array comprises the first, second, third, and fourth microphones.
35. The method of claim 34, further comprising:
on the premise that a signal integrity indicator of an ambient sound processing module equals the number of microphones in the microphone array and a signal directivity indicator of the ambient sound processing module equals the square of that number, determining a weighting matrix of the ambient sound processing module based on minimizing an energy of the first ambient sound signal and minimizing an energy of the second ambient sound signal, wherein the ambient sound processing module is configured to derive the first ambient sound signal and/or the second ambient sound signal from the collected ambient sound signals.
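For illustration, the Python sketch below walks through the playback path recited in claims 4, 26-29 and 32 under stated assumptions: a mono picked-up ambient signal is spatialized with an HRTF chosen by the estimated source azimuth and then mixed with the input audio and the noise reduction (anti-noise) signal. The sampling rate, the 15-degree HRIR grid, the random placeholder HRIRs, the gain value and all helper names are hypothetical; this is a sketch, not the claimed implementation.

import numpy as np
from scipy.signal import fftconvolve

FS = 48_000                                   # sampling rate, Hz (assumption)
AZIMUTHS = range(0, 360, 15)                  # assumed 15-degree HRTF grid
# Placeholder HRIRs; a real system would load measured per-azimuth left/right HRIRs.
HRIR_DB = {az: (np.random.randn(128) * 0.01,
                np.random.randn(128) * 0.01) for az in AZIMUTHS}

def select_hrir(azimuth_deg):
    # Claims 4 / 29: pick the HRTF filter whose azimuth is closest to the estimated one.
    nearest = min(HRIR_DB, key=lambda az: abs(((azimuth_deg - az) + 180) % 360 - 180))
    return HRIR_DB[nearest]

def render_ear_signals(audio_in, anti_noise, pickup_mono, azimuth_deg, ambient_gain=0.5):
    # Claims 26-28: spatialize the mono picked-up ambient signal and mix it with the
    # input audio and the anti-noise signal; ambient_gain stands in for the
    # wearer-controlled level of claim 32 (assumption).
    hrir_l, hrir_r = select_hrir(azimuth_deg)
    amb_l = fftconvolve(pickup_mono, hrir_l)[: len(pickup_mono)]
    amb_r = fftconvolve(pickup_mono, hrir_r)[: len(pickup_mono)]
    left = audio_in + anti_noise + ambient_gain * amb_l
    right = audio_in + anti_noise + ambient_gain * amb_r
    return left, right

# Usage with one second of dummy signals; the source is assumed directly behind the wearer.
n = FS
audio = np.zeros(n)
anti = np.zeros(n)
pickup = 0.01 * np.random.randn(n)
left, right = render_ear_signals(audio, anti, pickup, azimuth_deg=180)
print(left.shape, right.shape)                # -> (48000,) (48000,)

In practice each earpiece would mix its own audio channel and its own anti-noise signal, and the HRIRs would be chosen per ear (and, per claims 18 and 30, matched to the wearer); the shared dummy signals above only keep the sketch self-contained.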
CN202111328416.2A 2021-10-12 2021-11-10 Earphone, user equipment and method for processing signal Pending CN115967883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/118312 WO2023061130A1 (en) 2021-10-12 2022-09-13 Earphone, user device and signal processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111190746 2021-10-12
CN202111190746X 2021-10-12

Publications (1)

Publication Number Publication Date
CN115967883A (en) 2023-04-14

Family

ID=85888383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111328416.2A Pending CN115967883A (en) 2021-10-12 2021-11-10 Earphone, user equipment and method for processing signal

Country Status (2)

Country Link
CN (1) CN115967883A (en)
WO (1) WO2023061130A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4812302B2 (en) * 2005-01-12 2011-11-09 学校法人鶴学園 Sound source direction estimation system, sound source direction estimation method, and sound source direction estimation program
US9681246B2 (en) * 2014-02-28 2017-06-13 Harman International Industries, Incorporated Bionic hearing headset
CN109195043B (en) * 2018-07-16 2020-11-20 恒玄科技(上海)股份有限公司 Method for improving noise reduction amount of wireless double-Bluetooth headset
EP3668123A1 (en) * 2018-12-13 2020-06-17 GN Audio A/S Hearing device providing virtual sound
CN113194372B (en) * 2021-04-27 2022-11-15 歌尔股份有限公司 Earphone control method and device and related components

Also Published As

Publication number Publication date
WO2023061130A1 (en) 2023-04-20

Similar Documents

Publication Publication Date Title
US11676568B2 (en) Apparatus, method and computer program for adjustable noise cancellation
CN105530580B (en) Hearing system
US8184823B2 (en) Headphone device, sound reproduction system, and sound reproduction method
JP4304636B2 (en) SOUND SYSTEM, SOUND DEVICE, AND OPTIMAL SOUND FIELD GENERATION METHOD
US8270642B2 (en) Method and system for producing a binaural impression using loudspeakers
US9613610B2 (en) Directional sound masking
EP3103269B1 (en) Audio signal processing device and method for reproducing a binaural signal
US9681246B2 (en) Bionic hearing headset
US6975731B1 (en) System for producing an artificial sound environment
WO2012053446A1 (en) Headphone device
EP3468228B1 (en) Binaural hearing system with localization of sound sources
JP2008543144A (en) Acoustic signal apparatus, system, and method
JP2008543143A (en) Acoustic transducer assembly, system and method
CN106303832B (en) Loudspeaker, method for improving directivity, head-mounted equipment and method
JP2002209300A (en) Sound image localization device, conference unit using the same, portable telephone set, sound reproducer, sound recorder, information terminal equipment, game machine and system for communication and broadcasting
US11805364B2 (en) Hearing device providing virtual sound
JP4221746B2 (en) Headphone device
CN115967883A (en) Earphone, user equipment and method for processing signal
WO2022009722A1 (en) Acoustic output device and control method for acoustic output device
CN110099351B (en) Sound field playback method, device and system
US6983054B2 (en) Means for compensating rear sound effect
CN214799882U (en) Self-adaptive directional hearing aid
CN117082406A (en) Audio playing system
Lezzoum et al. Assessment of sound source localization of an intra-aural audio wearable device for audio augmented reality applications
CN117641198A (en) Far-field silencing method, broadcasting equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination