CN116153281A - Active noise reduction method and electronic equipment - Google Patents

Active noise reduction method and electronic equipment

Info

Publication number
CN116153281A
Authority
CN
China
Prior art keywords
sound signal
noise reduction
scene
noise
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111398001.2A
Other languages
Chinese (zh)
Inventor
张景
韩荣
熊伟
仇存收
田立生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111398001.2A priority Critical patent/CN116153281A/en
Priority to PCT/CN2022/127015 priority patent/WO2023093412A1/en
Publication of CN116153281A publication Critical patent/CN116153281A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00 Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1781 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions
    • G10K11/1787 General system configurations

Abstract

The application provides an active noise reduction method and electronic equipment, and relates to the field of terminal technologies. The method includes: obtaining a first sound signal outside a noise reduction space; in response to the first sound signal, playing a second sound signal inside the noise reduction space; obtaining a third sound signal inside the noise reduction space; and playing a fourth sound signal inside the noise reduction space, where the fourth sound signal is used to cancel part or all of the first sound signal. The technical solution can reduce the disturbance that ANC causes to the user and improve the ANC effect.

Description

Active noise reduction method and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to an active noise reduction method and electronic equipment.
Background
Active noise reduction (active noise cancellation, ANC) can reduce the noise heard by the user, giving the user a pleasant listening experience. The principle of ANC is to generate a secondary noise signal of the same amplitude as and opposite phase to the primary noise signal (i.e. the original noise signal in the external environment), and then play the secondary noise signal through a speaker, thereby canceling the primary noise signal.
Generally, before starting ANC, the electronic device may play a prompt tone signal (such as "ding-dong" or "noise reduction on") in the noise reduction space, where the prompt tone signal may remind the user that the ANC function is about to be turned on. The electronic device may also obtain a sound signal inside the noise reduction space, and play, inside the noise reduction space, a secondary noise signal that matches the noise reduction space based on the played prompt tone signal and the obtained sound signal; the secondary noise signal may be used to cancel a primary noise signal from outside the noise reduction space. However, on the one hand, the prompt tone signal may disturb the user; on the other hand, the secondary noise signal is determined before ANC is performed, and after ANC is started the noise reduction space may change, so that the secondary noise signal no longer matches the noise reduction space and the ANC effect is poor.
Disclosure of Invention
In view of this, the present application provides an ANC method and an electronic device, which can reduce the disturbance of ANC to a user and improve the ANC effect.
To achieve the above object, in a first aspect, an embodiment of the present application provides a method for ANC, including:
acquiring a first sound signal outside the noise reduction space;
in response to the first sound signal, playing a second sound signal inside the noise reduction space, wherein the second sound signal is a sound signal which is not perceived by human ears;
acquiring a third sound signal inside the noise reduction space;
and playing a fourth sound signal inside the noise reduction space, wherein the fourth sound signal is used for eliminating part or all of the first sound signal.
Wherein the third sound signal may comprise at least part of the second sound signal.
The noise reduction space may be a space where ANC is required. Different noise reduction spaces may have different spatial features, which may include shape, size, degree of closure, and the like. When the spatial features of the noise reduction space are different, the secondary paths of the noise reduction space are also different, and thus different secondary noise signals need to be played to achieve a good noise reduction effect.
"Not perceived by the human ear" means that the human ear cannot hear the second sound signal, but the second sound signal can still be detected by a physical device such as a microphone.
The first sound signal may be a primary noise signal, the fourth sound signal may be a secondary noise signal, and the fourth sound signal may be determined based on the second sound signal and the third sound signal.
In the embodiment of the application, the first sound signal outside the noise reduction space can be acquired. Because the first sound signal affects the range of sound signals that the human ear can actually perceive, playing, in response to the first sound signal, a second sound signal that is not perceived by the human ear inside the noise reduction space can reduce the disturbance the second sound signal causes to the user. The electronic device may further obtain a third sound signal inside the noise reduction space, and play, inside the noise reduction space, a fourth sound signal for canceling the first sound signal based on the second sound signal and the third sound signal, thereby implementing ANC with reduced disturbance to the user. In addition, because the ANC process causes less disturbance to the user, ANC can be performed in real time, which improves the noise reduction effect.
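The flow above can be illustrated with a short sketch. The following is a minimal, hypothetical Python outline only; the device object and the function names (capture_outside, choose_inaudible_probe, capture_inside, compute_anti_noise, play) are placeholders for the operations described in this application, not an API defined by it.

    def anc_step(device, frame_len=256, fs=48000):
        """One hypothetical ANC iteration following the four steps above."""
        # 1. Acquire the first sound signal outside the noise reduction space.
        first = device.capture_outside(frame_len)      # first (reference) microphone
        # 2. In response, play a second sound signal that the human ear cannot
        #    perceive; the probe is chosen according to the detected noise scene.
        second = device.choose_inaudible_probe(first, fs)
        device.play(second)                            # speaker
        # 3. Acquire the third sound signal inside the noise reduction space;
        #    it contains at least part of the second sound signal.
        third = device.capture_inside(frame_len)       # second (feedback) microphone
        # 4. Play a fourth sound signal that cancels part or all of the first.
        fourth = device.compute_anti_noise(first, second, third)
        device.play(fourth)
        return fourth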
In some embodiments, an electronic device may include a first microphone, a second microphone, and a speaker. The region opposite the speaker may be the noise reduction space, and the speaker may be used to play the second sound signal and the fourth sound signal. The first microphone may be located outside the noise reduction space and used to acquire the first sound signal. The second microphone may be located inside the noise reduction space and used to acquire the third sound signal.
For example, if the electronic device is a headset, the noise reduction space may be an ear canal, the first microphone may be outside the ear canal, and the speaker and the second microphone may be inside the ear canal. If the electronic device is a vehicle-mounted device, the noise reduction space may be a vehicle space, the first microphone may be located outside the vehicle space, and the speaker and the second microphone may be located inside the vehicle space.
Optionally, the second sound signal is different when the noise reduction scene indicated by the first sound signal is different.
Because different noise reduction scenes contain different first sound signals, and the first sound signal affects the range of sound signals that the human ear can actually perceive, playing different second sound signals that are not perceived by the human ear for different noise reduction scenes can reduce the disturbance the second sound signal causes to the user.
Optionally, when the noise reduction scene indicated by the first sound signal is a stable noise scene, the second sound signal includes a sound signal masked by the first sound signal.
When the electronic device is in a stable noise scene, the electronic device may be in a relatively simple environment in which there is only a single or stable noise source, and the first sound signal is a stable sound signal. Therefore, a second sound signal masked by the first sound signal can be played, so that under the masking effect of sound the second sound signal still cannot be perceived by the human ear, which reduces the disturbance the second sound signal causes to the user. In addition, selecting a sound signal masked by the first sound signal as the second sound signal gives the second sound signal a wider selectable frequency range and energy amplitude, which improves the flexibility of generating the second sound signal and its immunity to interference, and in turn improves the accuracy of subsequently determining the noise reduction coefficient and generating the fourth sound signal, thereby further improving the noise reduction effect.
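As an illustration only, a probe masked by a dominant stable noise component could be generated as follows. The fixed frequency offset and the 20 dB level margin below the masker are arbitrary example values chosen for this sketch, not values taken from the application.

    import numpy as np

    def masked_probe(first, fs, duration=0.1, offset_hz=50.0, margin_db=20.0):
        """Place a probe tone near the dominant stable noise tone, well below its
        level, so that the masking effect keeps it inaudible (illustrative)."""
        windowed = first * np.hanning(len(first))
        spec = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(first), 1.0 / fs)
        k = np.argmax(spec[1:]) + 1                    # dominant (masker) bin, skipping DC
        masker_freq = freqs[k]
        masker_amp = 2.0 * spec[k] / len(first)        # approximate tone amplitude
        probe_freq = masker_freq + offset_hz           # stay close to the masker
        probe_amp = masker_amp * 10 ** (-margin_db / 20.0)
        t = np.arange(int(duration * fs)) / fs
        return probe_amp * np.sin(2 * np.pi * probe_freq * t)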
Optionally, the method further comprises:
when the first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to a first frequency band, a first time period of the fifth sound signal in the first sound signal is greater than or equal to a duration threshold, and energy fluctuation of the second energy amplitude corresponding to the fifth sound signal in the first time period is less than a fluctuation range threshold, the noise reduction scene is the stable noise scene (i.e., the first sound signal includes a stable fifth sound signal).
Alternatively, in some embodiments, the first sound signal may include a plurality of frames of sub-signals, the duration of each frame of sub-signal may be a preset duration, the electronic device may determine a first frame number of the sub-signals that include the first frequency band, and the second energy amplitude may be the energy amplitude of the first frequency band in the sub-signals. When the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, the first frame number of the sub-signals comprising the fifth sound signal corresponding to the first frequency band is greater than or equal to the frame number threshold, and the energy fluctuation of the second energy amplitude in the sub-signals of the first frame number is less than the fluctuation range threshold, the noise reduction scene indicated by the first sound signal is the stable noise scene (i.e. the first sound signal comprises a stable fifth sound signal).
The duration of each frame of sub-signal may be a preset duration, and the electronic device may determine a first energy amplitude based on an energy spectrum corresponding to each frame of sub-signal, where the first energy amplitude may be an average energy amplitude of a plurality of frames of sub-signals, or may be a sum of energy amplitudes of multiple frames of sub-signals.
It should be noted that the first duration or the first frame number may be used to indicate the stability of the fifth sound signal in time. The longer the first time length or the larger the first frame number, the more stable in time the fifth sound signal is. Wherein the first frame number may correspond to a first time length, the first time length being equal to a product of the first frame number and a time length of each frame of the sub-signal; the frame number threshold may correspond to a duration threshold that is equal to the product of the frame number threshold and the duration of each frame of the sub-signal.
In some embodiments, the first frame number is a frame number of a consecutive plurality of sub-signals comprising the fifth sound signal, such that the first frame number is more accurately indicative of a degree of temporal stability of the fifth sound signal.
It should be further noted that the second energy amplitude may be used to indicate the degree of stability of the fifth sound signal in terms of sound intensity. The smaller the fluctuation of the second energy amplitude, the more stable the sound intensity of the fifth sound signal.
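The frame-based quantities above can be computed, for example, as follows. This is a hedged sketch: the frame length, the way the first frequency band is detected, and the fluctuation measure (maximum minus minimum per-frame band energy over the longest run) are illustrative choices, not definitions given by the application.

    import numpy as np

    def band_frame_energies(signal, fs, frame_len, f_lo=10.0, f_hi=1000.0):
        """Energy of the first frequency band (f_lo..f_hi Hz) in each frame sub-signal."""
        n_frames = len(signal) // frame_len
        freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
        band = (freqs > f_lo) & (freqs < f_hi)
        energies = []
        for i in range(n_frames):
            frame = signal[i * frame_len:(i + 1) * frame_len]
            spec = np.abs(np.fft.rfft(frame)) ** 2
            energies.append(spec[band].sum())
        return np.array(energies)

    def stability_metrics(energies, band_present_thresh):
        """First frame number = longest run of consecutive frames containing the band;
        fluctuation = spread of the band energy (second energy amplitude) over that run."""
        present = energies >= band_present_thresh
        best_len, best_start, run_start = 0, 0, None
        for i, p in enumerate(np.append(present, False)):
            if p and run_start is None:
                run_start = i
            elif not p and run_start is not None:
                if i - run_start > best_len:
                    best_len, best_start = i - run_start, run_start
                run_start = None
        run = energies[best_start:best_start + best_len]
        fluctuation = float(run.max() - run.min()) if best_len else 0.0
        return best_len, fluctuation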
Optionally, the second sound signal includes a sound signal masked by the fifth sound signal.
Since the first sound signal includes the stable fifth sound signal, the second sound signal masked by the first sound signal can be played based on the masking effect (frequency domain masking effect or time domain masking effect) of sound, so that the second sound signal is still not perceived by the human ear. In some embodiments, to allow the second sound signal to be stably masked and reduce the chance that the user perceives it, the second sound signal may include a sound signal masked by the fifth sound signal.
In some embodiments, when the current noise reduction scene is a stable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave, and/or a sound signal having an energy amplitude less than a hearing threshold.
Optionally, when the noise reduction scene indicated by the first sound signal is an unstable noise scene, the second sound signal comprises a sound signal with an energy amplitude below a hearing threshold.
When the electronic device is in an unstable noise scene, the electronic device may be in a relatively complex environment that contains multiple noise sources. Therefore, a sound signal in an arbitrary frequency band whose energy amplitude is less than the hearing threshold corresponding to that frequency band can be selected as the second sound signal, which can reduce the disturbance the second sound signal causes to the user. In addition, selecting, by energy amplitude, a sound signal below the hearing threshold as the second sound signal gives the second sound signal a wider selectable frequency range, which improves the flexibility of generating the second sound signal and its immunity to interference, further improves the accuracy of determining the noise reduction coefficient and generating the fourth sound signal, and thereby improves the noise reduction effect.
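For the unstable noise scene, a probe below the absolute hearing threshold could be constructed, for example, using Terhardt's approximation of the threshold in quiet. The approximation, the 3 dB margin, and the mapping from dB SPL to digital amplitude are assumptions made for this sketch and are device-specific; they are not part of the application.

    import numpy as np

    def threshold_in_quiet_db(f_hz):
        """Terhardt's approximation of the absolute hearing threshold (dB SPL)."""
        f = f_hz / 1000.0
        return 3.64 * f ** -0.8 - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2) + 1e-3 * f ** 4

    def sub_threshold_probe(f_hz, fs, duration=0.1, margin_db=3.0, full_scale_db_spl=100.0):
        """Pure tone whose level sits a few dB below the hearing threshold at f_hz.
        full_scale_db_spl maps digital full scale to sound pressure level (assumed)."""
        level_db = threshold_in_quiet_db(f_hz) - margin_db
        amp = 10 ** ((level_db - full_scale_db_spl) / 20.0)
        t = np.arange(int(duration * fs)) / fs
        return amp * np.sin(2 * np.pi * f_hz * t)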
Optionally, the method further comprises:
when the first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, and the first sound signal does not comprise a fifth sound signal corresponding to a first frequency band, the noise reduction scene is the unstable noise scene; or,
when a first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal comprises a fifth sound signal corresponding to a first frequency band, but a first duration of the fifth sound signal in the first sound signal is less than a duration threshold, the noise reduction scene is the unstable noise scene; or,
when the first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to a first frequency band, a first time period of the fifth sound signal in the first sound signal is greater than or equal to a duration threshold, but energy fluctuation of the second energy amplitude corresponding to the fifth sound signal in the first time period is greater than or equal to a fluctuation range threshold, the noise reduction scene is the unstable noise scene (i.e., the first sound signal does not include a stable fifth sound signal).
Alternatively, in some embodiments, the first sound signal may include a plurality of frames of sub-signals, the duration of each frame of sub-signals may be a preset duration, the electronic device may determine a first number of frames of the sub-signals including the first frequency band, and the second energy magnitude may be an energy magnitude of the first frequency band in the sub-signals. When the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, each frame of the sub-signal does not include the fifth sound signal corresponding to the first frequency band, or when the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, the first frame number of the sub-signal including the fifth sound signal corresponding to the first frequency band is less than the frame number threshold, or when the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, the first frame number of the sub-signal including the fifth sound signal corresponding to the first frequency band is greater than or equal to the frame number threshold, but the energy fluctuation of the second energy amplitude in the sub-signal of the first frame number is greater than or equal to the fluctuation range threshold, the noise reduction scene indicated by the first sound signal is an unstable noise scene (i.e., the first sound signal does not include the stable fifth sound signal).
In some embodiments, when the current noise reduction scene is an unstable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave.
Optionally, the first frequency band is greater than 10Hz (hertz) and less than 1000Hz.
Optionally, when the noise reduction scene indicated by the first sound signal is a quiet scene, the second sound signal includes at least one of an infrasonic wave and an ultrasonic wave.
Since there may be few noise sources in the environment where the electronic device is located in a quiet scene, infrasonic waves and/or ultrasonic waves may be selected as the second sound signal to reduce the disturbance the second sound signal causes to the user.
It should be noted that, compared with ultrasonic waves, infrasonic waves lose less energy during propagation, which benefits the accuracy of the subsequently determined noise reduction coefficient and improves the noise reduction effect.
Optionally, the method further comprises:
and when the first energy amplitude corresponding to the first sound signal is smaller than an energy amplitude threshold, the noise reduction scene is the quiet scene.
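Putting the conditions for the three scenes together, a hypothetical classifier could look like the sketch below; the threshold values and metric names are placeholders, reusing the illustrative frame metrics sketched earlier rather than any API defined by the application.

    def classify_noise_scene(first_energy, first_frame_count, fluctuation,
                             energy_thresh, frame_thresh, fluct_thresh):
        """Return 'quiet', 'stable', or 'unstable' per the conditions above (illustrative)."""
        if first_energy < energy_thresh:
            return "quiet"        # first energy amplitude below the energy amplitude threshold
        if first_frame_count >= frame_thresh and fluctuation < fluct_thresh:
            return "stable"       # a stable fifth sound signal is present
        return "unstable"         # band absent, too short-lived, or fluctuating too much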
In some embodiments, the second sound signal may be a pure sound signal with a single frequency, or may be a sound signal obtained by superimposing pure sound signals of multiple frequencies. The more complex the frequency components included in the second sound signal, the stronger its immunity to interference, so the accuracy of subsequently determining the noise reduction coefficient and generating the fourth sound signal is higher, and the noise reduction effect is better.
In some embodiments, for the quiet scene and the unstable noise scene, the timing at which the electronic device plays the second sound signal may be independent of the timing at which the first sound signal is detected. In other embodiments, for the stable noise scene, since the first sound signal (or the fifth sound signal) is required to mask the second sound signal, the electronic device may play the second sound signal while the first sound signal (or the fifth sound signal) is present, so that the first sound signal (or the fifth sound signal) masks the second sound signal through the frequency domain masking effect; alternatively, the electronic device may play the second sound signal within a leading masking duration or a trailing masking duration of the first sound signal (or the fifth sound signal), so that the first sound signal (or the fifth sound signal) masks the second sound signal through the time domain masking effect.
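As noted above, the second sound signal may also be a superposition of several pure tones. A trivial illustrative helper follows; the tone list in the docstring is an arbitrary example, not values from the application.

    import numpy as np

    def multi_tone_probe(tones, fs=48000, duration=0.1):
        """Superimpose pure tones given as (frequency_hz, amplitude) pairs,
        e.g. [(200.0, 1e-4), (450.0, 5e-5)]."""
        t = np.arange(int(duration * fs)) / fs
        return sum(a * np.sin(2 * np.pi * f * t) for f, a in tones)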
Optionally, playing a fourth sound signal inside the noise reduction space includes:
determining a noise reduction coefficient based on the second sound signal and the third sound signal;
generating the fourth sound signal based on the noise reduction coefficient;
and playing the fourth sound signal.
Optionally, the determining the noise reduction coefficient based on the second sound signal and the third sound signal includes:
determining a first secondary path transfer function based on the second sound signal and the third sound signal;
acquiring at least one second secondary path transfer function and leakage state data corresponding to each second secondary path transfer function;
determining leakage state data corresponding to the second secondary path transfer function with the smallest difference from the first secondary path transfer function as leakage state data corresponding to the first secondary path transfer function;
the noise reduction coefficient is determined based on leakage state data corresponding to the first secondary path transfer function.
Wherein the secondary path is the physical path between the speaker and the second microphone. A physical path represents the path along which a sound signal is transmitted through physical acoustic devices, and may also be referred to as a physical model; the transfer function of a physical path is a mathematical estimate of the physical model that represents the acoustic response of the physical model to a sound signal, and may also be referred to as a mathematical model.
It should be noted that, the stronger the immunity of the second sound signal to interference and the larger its energy amplitude, the higher the accuracy of the determined secondary path transfer function, and correspondingly the higher the accuracy of the determined noise reduction coefficient.
In some embodiments, the leakage state data may include a leakage level, and the noise reduction coefficients may include filter coefficients for generating the secondary noise signal.
In some embodiments, the second secondary path transfer function may be established by the electronic device in an offline state (e.g., before the electronic device leaves the factory). In some embodiments, the electronic device comprises a headset, and the second secondary path transfer functions are secondary path transfer functions constructed for the headset on different human ears (large, medium, and small ears), in multiple wearing postures, and at multiple degrees of wearing tightness. In some embodiments, the electronic device comprises a vehicle-mounted device, and the second secondary path transfer functions are secondary path transfer functions constructed for the vehicle-mounted device with the doors, windows, air conditioner, trunk, and the like of different vehicles open or closed.
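One way to realize the steps above, estimating the first secondary path transfer function from the played second sound signal and the captured third sound signal and then selecting the pre-stored second secondary path transfer function that differs least from it, is sketched below using a frequency-domain estimate and a Euclidean distance. These specific choices are assumptions for illustration, not the algorithm claimed by the application.

    import numpy as np

    def estimate_secondary_path(second, third, n_fft=512):
        """Rough frequency-domain estimate of the first secondary path transfer
        function: S(f) ~ Third(f) / Second(f)."""
        S2 = np.fft.rfft(second, n_fft)
        S3 = np.fft.rfft(third, n_fft)
        return S3 / (S2 + 1e-12)          # small constant avoids division by zero

    def match_leakage_state(first_tf, stored):
        """stored: list of (second_secondary_path_tf, leakage_state_data) pairs.
        Return the leakage state data whose transfer function differs least."""
        diffs = [np.linalg.norm(first_tf - tf) for tf, _ in stored]
        return stored[int(np.argmin(diffs))][1]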
In some embodiments, the electronic device may determine whether it is currently in a noise reduction scene or a non-noise-reduction scene. If it is in a non-noise-reduction scene, ANC is not performed; if it is in a noise reduction scene, the electronic device further determines which noise reduction scene it is in, and plays the corresponding second sound signal to perform ANC.
In some embodiments, the non-noise-reduction scene may include at least one of a call scene and a multimedia scene. The call scene may mean that the electronic device is currently in a voice call with another device, and the multimedia scene may mean that the electronic device is playing multimedia data such as music or video through the speaker.
In a second aspect, embodiments of the present application provide a method of ANC, including:
when in different noise reduction scenes, respectively playing, inside the noise reduction space, second sound signals corresponding to the different noise reduction scenes, wherein the second sound signals are sound signals which are not perceived by human ears;
acquiring a third sound signal in the noise reduction space, wherein the third sound signal at least comprises part of the second sound signal;
and playing a fourth sound signal inside the noise reduction space, wherein the fourth sound signal is used for eliminating part or all of the first sound signal from the outside of the noise reduction space.
In the embodiment of the application, because different noise reduction scenes contain different first sound signals, and the first sound signal affects the range of sound signals that the human ear can actually perceive, playing different second sound signals that are not perceived by the human ear for different noise reduction scenes can reduce the disturbance the second sound signal causes to the user. The electronic device may further obtain a third sound signal inside the noise reduction space, and play, inside the noise reduction space, a fourth sound signal for canceling the first sound signal based on the second sound signal and the third sound signal, thereby implementing ANC with reduced disturbance to the user. In addition, because the ANC process causes less disturbance to the user, ANC can be performed in real time, which improves the noise reduction effect.
Optionally, when the noise reduction scene is a stable noise scene, the second sound signal includes a sound signal masked by the first sound signal.
Optionally, when a first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to a first frequency band, a first time period of the fifth sound signal in the first sound signal is greater than or equal to a time period threshold, and an energy fluctuation of a second energy amplitude corresponding to the fifth sound signal in the first time period is less than a fluctuation range threshold, the second sound signal includes a sound signal masked by the fifth sound signal.
In some embodiments, when the current noise reduction scene is a stable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave, and/or a sound signal having an energy amplitude less than a hearing threshold.
Optionally, the first frequency band is greater than 10Hz and less than 1000Hz.
Because different human ears perceive sound signals in the frequency band above 10 Hz and below 1000 Hz quite differently, and leakage of sound signals in this frequency band is also severe, the frequency band corresponding to the second sound signal may be above 10 Hz and below 1000 Hz, so that ANC is performed mainly or preferentially on noise signals in the frequency band above 10 Hz and below 1000 Hz, which improves the noise reduction effect.
Optionally, when the noise reduction scene is an unstable noise scene, the second sound signal comprises a sound signal having an energy amplitude below a hearing threshold.
In some embodiments, when the current noise reduction scene is an unstable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave.
Optionally, when the noise reduction scene is a quiet scene, the second sound signal includes at least one of infrasonic waves and ultrasonic waves.
In a third aspect, embodiments of the present application provide a method for ANC, including:
when the electronic equipment is in a stable noise scene, playing a second sound signal corresponding to the stable noise scene in a noise reduction space, wherein the second sound signal comprises a sound signal masked by a first sound signal, and the first sound signal is a sound signal outside the noise reduction space;
the electronic equipment acquires a third sound signal in the noise reduction space, wherein the third sound signal at least comprises part of the second sound signal;
the electronic equipment plays a fourth sound signal in the noise reduction space, wherein the fourth sound signal is used for eliminating part or all of the first sound signal.
In this embodiment of the present application, when a noise scene is stabilized at the electronic device, the electronic device may be in a simpler environment, so that there is only a single or stable noise source, and the first sound signal is a stable sound signal, so that the second sound signal masked by the first sound signal may be played, so that the second sound signal still cannot be perceived by the human ear under the masking effect of the sound, and interference of the second sound signal to the user may be reduced. The electronic device may further obtain a third sound signal inside the noise reduction space, and play a fourth sound signal for canceling the first sound signal inside the noise reduction space based on the second sound signal and the third sound signal, thereby implementing ANC with reduced interference to the user. And because the interference of the ANC realization process to the user is reduced, the real-time ANC can be realized, and the noise reduction effect is improved. In addition, the sound signal masked by the first sound signal is selected as the second sound signal, so that the selectable frequency range and the selectable energy amplitude of the second sound signal are larger, the flexibility of generating the second sound signal and the anti-interference performance of the second sound signal are improved, the accuracy of determining the noise reduction coefficient subsequently and generating the fourth sound signal is further improved, and the noise reduction effect is further improved.
Optionally, when a first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to a first frequency band, a first time period of the fifth sound signal in the first sound signal is greater than or equal to a time period threshold, and an energy fluctuation of a second energy amplitude corresponding to the fifth sound signal in the first time period is less than a fluctuation range threshold, the second sound signal includes a sound signal masked by the fifth sound signal.
In some embodiments, when the current noise reduction scene is a stable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave, and/or a sound signal having an energy amplitude less than a hearing threshold.
Optionally, the first frequency band is greater than 10Hz and less than 1000Hz.
In a fourth aspect, embodiments of the present application provide an apparatus for ANC, which has functions for implementing the method of any one of the above first aspect, second aspect, or third aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above, such as a transceiver module or unit, a processing module or unit, an acquisition module or unit, and the like.
In a fifth aspect, embodiments of the present application provide an electronic device, including: a memory and a processor, the memory for storing a computer program; the processor is configured to perform the method of ANC of any of the first, second or third aspects above when the computer program is invoked.
In a sixth aspect, embodiments of the present application provide a chip system, the chip system including a processor coupled to a memory, the processor executing a computer program stored in the memory to implement the method of the ANC of any one of the first, second, or third aspects above.
The chip system can be a single chip or a chip module formed by a plurality of chips.
In a seventh aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the first, second or third aspects of ANC described above.
In an eighth aspect, embodiments of the present application provide a computer program product which, when run on an electronic device, causes the electronic device to perform the method of any one of the first, second or third aspects of ANC described above.
It will be appreciated that the advantages of the fourth to eighth aspects may be found in the relevant description of the first, second or third aspects, and are not described here again.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an earphone according to an embodiment of the present application;
FIG. 3 is a block diagram of an ANC algorithm provided by an embodiment of the present application;
FIG. 4 is a block diagram of another electronic device according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of a method for ANC according to an embodiment of the present disclosure;
fig. 6 is a flowchart of a method for determining a noise reduction scene according to an embodiment of the present application;
fig. 7 is a schematic diagram of a frequency distribution of an audio signal according to an embodiment of the present disclosure;
FIG. 8 is a schematic illustration of a hearing threshold and masking threshold provided in an embodiment of the present application;
FIG. 9 is a flowchart of a method for determining a noise reduction coefficient based on a second sound signal and a third sound signal according to an embodiment of the present application;
FIG. 10 is a block diagram of an algorithm for determining a secondary path transfer function provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a secondary path transfer function according to an embodiment of the present application;
FIG. 12 is a flow chart of another method for ANC provided in an embodiment of the present application;
FIG. 13 is a flow chart of another method of ANC provided in an embodiment of the present application;
FIG. 14 is a flow chart of another method of ANC provided in an embodiment of the present application;
FIG. 15 is a flow chart of another method for ANC provided in an embodiment of the present application;
fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The ANC method provided by the embodiments of the application can be applied to electronic equipment such as earphones and vehicle-mounted devices. The earphones may be of multiple types, such as over-ear, on-ear, in-ear, and earbud earphones. The manner in which the electronic device is deployed may correspond to its device type. For example, when the electronic device is an earphone, the user may wear the earphone; when the electronic device is a vehicle-mounted device, the user may install the vehicle-mounted device in the vehicle. In practical applications, the electronic device is not limited to earphones and vehicle-mounted devices, and the embodiments of the application do not limit the device type of the electronic device.
The electronic device can adjust the noise reduction effect in real time; for example, the electronic device can reduce the noise heard by the user through ANC. The principle of ANC is to generate a secondary noise signal of the same amplitude as and opposite phase to the primary noise signal (i.e. the original noise signal in the external environment), and then play the secondary noise signal through a speaker, thereby canceling the primary noise signal.
Referring to fig. 1, a schematic structure of an electronic device 100 provided in the present application is shown. The electronic device 100 may include a processor 110, an internal memory 120, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, a wireless communication module 150, an audio module 160, a speaker 160A, a receiver 160B, a microphone 160C, a sensor module 170, keys 180, a motor 191, and an indicator 192. Wherein the sensor module 170 may include a pressure sensor, a gyroscope sensor, an acceleration sensor, a fingerprint sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a controller, a memory, a digital signal processor (digital signal processor, DSP), and/or a neural-network processing unit (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller may generate an operation control signal according to an instruction operation code and a timing signal, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to a touch sensor, a charger, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor through an I2C interface, such that the processor 110 communicates with the touch sensor through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 160 via an I2S bus to enable communication between the processor 110 and the audio module 160. In some embodiments, the audio module 160 may communicate audio signals to the wireless communication module 150 via the I2S interface to implement a function of answering a call via bluetooth.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 160 and the wireless communication module 150 may be coupled by a PCM bus interface. In some embodiments, the audio module 160 may also transmit audio signals to the wireless communication module 150 through the PCM interface to implement a function of answering a call through bluetooth. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 150. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 150 through a UART interface to implement a bluetooth function.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the wireless communication module 150, the audio module 160, the sensor module 170, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charge management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 120, the wireless communication module 150, and the like. The power management module 141 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 can be realized by the antenna 1 and the wireless communication module 150, a modem processor, a baseband processor, and the like.
The antenna 1 is used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The wireless communication module 150 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 150 may be one or more devices that integrate at least one communication processing module. The wireless communication module 150 receives electromagnetic waves via the antenna 1, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 150 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves to radiate through the antenna 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and the like.
The NPU is a neural-network (NN) computing processor that processes input information rapidly by drawing on the structure of biological neural networks, for example the transmission mode between neurons in the human brain, and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100, for example speech recognition, may be implemented through the NPU.
The internal memory 120 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 120. The internal memory 120 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, etc.) required for at least one function of the operating system, and the like. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, etc.), and so on. In addition, the internal memory 120 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 160, a speaker 160A, a receiver 160B, a microphone 160C, an application processor, and the like. Such as music playing, recording, etc.
The audio module 160 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 160 may also be used to encode and decode audio signals. In some embodiments, the audio module 160 may be disposed in the processor 110, or some functional modules of the audio module 160 may be disposed in the processor 110.
The speaker 160A, also referred to as a "horn", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 is deployed, a space opposite the speaker 160A (such as an ear canal or an in-vehicle space) may serve as the noise reduction space, and the speaker 160A may serve as a secondary sound source to play a secondary noise signal opposite in phase to the primary noise signal in the environment where the electronic device is located, thereby canceling the primary noise signal transmitted into the noise reduction space and implementing ANC. In some embodiments, the electronic device 100 may play music or hands-free calls through the speaker 160A.
The receiver 160B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 is used to answer a call or play a voice message, the receiver 160B can be placed close to the human ear to receive the voice.
The microphone 160C, also referred to as a "mike" or "mic", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak with the mouth close to the microphone 160C to input a sound signal to the microphone 160C. The electronic device 100 may be provided with a plurality of microphones 160C; the microphones 160C may collect sound signals and may also implement a noise reduction function. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 160C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
In some embodiments, multiple microphones 160C may be disposed at different locations of electronic device 100. When the electronic device is deployed, some of the plurality of microphones 160C may be outside the noise reducing space (e.g., outside the ear canal or outside the vehicle space), i.e., a first microphone, and other microphones 160C may be inside the noise reducing space (e.g., inside the ear canal or inside the vehicle space), i.e., a second microphone.
The pressure sensor is used for sensing a pressure signal and can convert the pressure signal into an electrical signal. There are many kinds of pressure sensors, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force is applied to the pressure sensor, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor. In some embodiments, touch operations that act on the same touch location but with different touch operation strengths may correspond to different operation instructions.
The gyroscopic sensor may be used to determine a motion pose of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by a gyroscopic sensor.
The acceleration sensor may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary.
The ambient light sensor is used for sensing ambient light brightness. The ambient light sensor may also cooperate with the proximity light sensor to detect whether the electronic device 100 is being worn on the ear.
The fingerprint sensor is used for collecting fingerprints. The electronic device 100 may utilize the captured fingerprint characteristics to effect fingerprint unlocking.
Touch sensors, also known as "touch panels". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. In some embodiments, a touch sensor may also be provided on a surface of the electronic device 100.
The bone conduction sensor may acquire a vibration signal. In some embodiments, the bone conduction sensor may acquire the vibration signal of the bone vibrated by the human vocal part. In some embodiments, the bone conduction sensor may also be disposed in the electronic device and combined with it to form a bone conduction electronic device. The audio module 160 may parse out a voice signal based on the vibration signal, obtained by the bone conduction sensor, of the bone vibrated by the vocal part, so as to implement a voice function.
The keys 180 include a power on key, a volume key, etc. The keys 180 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., audio playback, etc.) may correspond to different vibration feedback effects. Different application scenarios may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, etc.
It should be understood that the structure of the electronic device 100 is not particularly limited in the embodiments of the present application, except for the various components or modules listed in fig. 1. In other embodiments of the present application, electronic device 100 may also include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Fig. 2 is a schematic structural diagram of an earphone 200 according to an embodiment of the present application. The earphone 200 may be understood as an electronic device 100 of a particular device type, and may include at least some of the components previously described in fig. 1. The earphone 200 includes a headband 210 and two earshells 220, the two earshells 220 being respectively disposed on either side of the headband 210 with their receiving cavities facing each other. Components such as the speaker 160A are disposed inside the receiving cavity of the earshell 220, and the receiving cavity may also house the pinna of the user while the user wears the earphone 200. The side of the earshell 220 facing away from the receiving cavity is provided with at least one of a reference microphone 230 and a talk microphone 240, and a feedback microphone 250 is provided inside the receiving cavity.
When the user wears the earphone 200, the space facing the speaker 160A is the user's ear canal, which serves as the noise reduction space. The reference microphone 230 and the talk microphone 240 remain exposed outside the noise reduction space, so they can collect sound of the external environment in which the user is located, and the talk microphone 240 is located closer to the user's mouth than the reference microphone 230. The talk microphone 240 may be used to collect the user's speech during a call, while the reference microphone 230 may be used to collect sound outside the earphone 200, and the position of the reference microphone 230 is not limited to being near the user's mouth. It will be appreciated that, in practical applications, the talk microphone 240 may also collect sounds other than the user's speech, and the reference microphone 230 may also collect the user's speech. The feedback microphone 250 is located in the noise reduction space, so it can collect sound in the user's ear canal. The first microphone may include at least one of the reference microphone 230 and the talk microphone 240, and the second microphone may include the feedback microphone 250.
It should be noted that, in the embodiment of the present application, only the positions of the reference microphone 230, the talk microphone 240 and the feedback microphone 250 are described with reference to fig. 2, and the positions of the reference microphone 230, the talk microphone 240 and the feedback microphone 250 or the structure of the earphone 200 are not limited.
Referring to fig. 3, a block diagram of the ANC principle is provided in an embodiment of the present application. As shown in fig. 3, x(n) represents a primary noise signal; p(z) represents the primary path transfer function, the primary path being the physical path between the first microphone (e.g., reference microphone 230) and the second microphone (e.g., feedback microphone 250); d(n) represents the sound signal obtained after x(n) passes through the primary path to the second microphone; e(n) represents the residual noise signal after ANC; w(z) denotes the adaptive filter performing ANC, which generates a secondary noise signal based on its filter coefficients; y(n) represents the sound signal after x(n) is filtered by w(z), that is, the secondary noise signal; s(z) denotes the secondary path transfer function, the secondary path being the physical path between the speaker and the second microphone; y'(n) represents the sound signal of y(n) transmitted to the second microphone via the secondary path. A least mean square (LMS) algorithm may be used to update the filter coefficients of w(z), i.e., the noise reduction coefficients.
The physical path represents the path along which an acoustic signal is transmitted by physical acoustic devices, and may also be referred to as a physical model; the transfer function of a physical path is a mathematical estimate of the physical model, representing the acoustic response of the physical model to a sound signal, and may also be referred to as a mathematical model.
As can be seen from fig. 3, the adaptive filter w(z) generates a corresponding secondary noise signal y(n) based on the primary noise signal x(n), and y(n) may be superimposed on x(n) to achieve noise reduction. The second microphone determines the residual noise signal e(n) based on d(n), obtained after the primary noise signal x(n) passes through the primary path, and y'(n), obtained after the secondary noise signal y(n) passes through the secondary path, and the LMS updates the filter coefficients of the adaptive filter w(z) based on the correlation between e(n) and x(n) (i.e., the similarity between the primary noise and the residual noise signal).
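As an illustration of the update just described, the following Python sketch shows one possible per-sample LMS update of w(z). The filtered-x form (filtering x(n) through an estimate of the secondary path before the weight update), the step size, and the sign convention e(n) = d(n) + y'(n) are assumptions of this sketch and are not recited in fig. 3.

```python
import numpy as np

def filtered_reference(x_buf, s_hat):
    """Filter the most recent reference samples x(n) through the secondary path estimate."""
    return float(np.dot(s_hat, x_buf[:len(s_hat)]))

def fxlms_step(w, x_buf, fx_buf, e, mu=1e-3):
    """One sample of the adaptive filter w(z) from fig. 3.

    w      : current filter coefficients (the noise reduction coefficients)
    x_buf  : recent primary noise samples x(n), newest first, len(x_buf) >= len(w)
    fx_buf : recent filtered-reference samples, newest first, len(fx_buf) >= len(w)
    e      : residual noise sample e(n) picked up by the second microphone
    mu     : LMS step size (assumed value)
    """
    y = float(np.dot(w, x_buf[:len(w)]))      # secondary noise sample y(n)
    # Weights move along the correlation between e(n) and the (filtered) reference,
    # assuming e(n) = d(n) + y'(n); flip the sign if e(n) is defined as a difference.
    w_new = w - mu * e * fx_buf[:len(w)]
    return y, w_new
```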
From the foregoing, it can be seen that the secondary path can greatly affect the noise reduction effect of ANC. Different noise reduction spaces may have different spatial features, which may include shape, size, and closure, among others. When the spatial characteristics of the noise reduction space are different, the secondary paths of the noise reduction space are also different, and further, different secondary noise signals need to be played to achieve good noise reduction effect.
Taking the earphone as an example, the noise reduction space is related to factors such as the shape of the user's ear canal, the posture in which the earphone is worn, and the wearing tightness. Since the ear canal shapes, wearing postures, and wearing tightness of different users may differ, the noise reduction effects actually experienced by users can differ greatly when noise reduction is performed based on the same secondary noise.
In some embodiments, the electronic device may play an alert tone signal (e.g., "ding-dong" or "noise reduction on") inside the noise reduction space before starting ANC, to alert the user that the ANC function is about to be turned on. The electronic device may also obtain a sound signal inside the noise reduction space, the sound signal including at least a portion of the alert tone signal, and play, inside the noise reduction space, a secondary noise signal matched to the noise reduction space based on the played alert tone signal and the obtained sound signal; the secondary noise signal may be used to cancel a primary noise signal from outside the noise reduction space. However, on the one hand, the alert tone signal may disturb the user; on the other hand, the secondary noise signal is determined before ANC is performed, and after ANC starts, the noise reduction space may change, so that the secondary noise signal may no longer match the noise reduction space and the ANC effect is poor.
In other embodiments, the electronic device may play music within the noise reduction space, collect sound signals via the feedback microphone, the collected sound signals including at least a portion of the music, and then play a secondary noise signal based on the played music and the collected sound signals. However, in this manner, if the electronic device is not currently playing music, it may be difficult to determine the secondary noise signal, or the secondary noise signal may fail to match the noise reduction space, so the ANC effect may be poor.
In order to solve at least some of the above technical problems, embodiments of the present application provide an electronic device and an ANC method.
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present application. The electronic device includes a first microphone 410, a scene recognition module 420, a noise selection and playback module 430, a second microphone 440, a first secondary path construction module 450, a second secondary path construction module 460, a leakage status determination module 470, and a noise reduction coefficient matching module 480.
The first microphone 410 may be a microphone that is outside the noise reduction space. In some embodiments, the first microphone 410 may include at least one of the reference microphone 230 and the talk microphone 240 described previously. The sound signal collected by the first microphone 410 outside the noise reduction space is a first sound signal, which is a primary noise signal.
The scene recognition module 420 may be configured to recognize, based on the first sound signal, the noise reduction scene in which the electronic device is currently located. In some embodiments, the scene recognition module 420 may be further configured to recognize whether the electronic device is currently in a noise reduction scene or a non-noise reduction scene, and if it is in a noise reduction scene, determine which noise reduction scene it is currently in.
The non-noise reduction scene may be a scene in which ANC is not performed or at least is not performed in a manner provided by the embodiments of the present application, and the noise reduction scene may be a scene in which ANC is performed in a manner provided by the embodiments of the present application.
In some embodiments, the non-noise reduction scenes may include a call scene and a multimedia scene. The call scene may refer to the electronic device currently conducting a voice call with another device, and the multimedia scene may refer to the electronic device playing multimedia data, such as music and video, through the speaker. In some embodiments, the noise reduction scenes may include a quiet scene, a non-stationary noise scene, and a stationary noise scene. In a stationary noise scene, the electronic device may be in a relatively simple environment with only a single or stationary noise source, such as a room in which only one fan is turned on, the noise signal being the sound of the fan operating. In a non-stationary noise scene, the electronic device may be in a more complex environment that may include a variety of noise sources, such as a noisy station or mall. In a quiet scene, there may be little noise in the environment in which the electronic device is located.
It should be noted that, in practical applications, the non-noise-reducing scene and the noise-reducing scene may include more or less other scenes, and the non-noise-reducing scene and the noise-reducing scene may be determined in advance by the electronic device.
The noise selecting and playing module 430 may be configured to play a corresponding noise signal that is not perceived by the human ear, i.e., a second sound signal, according to the identified noise reduction scene. And it is understood that the lack of perception by the human ear means that the human ear does not hear the second sound signal, but the second sound signal is still detectable by a physical device such as a microphone. In some embodiments, the noise selection and playback module 430 may include speakers that are internal to the noise reduction space. In some embodiments, the speaker may be further configured to play a fourth sound signal for ANC, where the fourth sound signal is a secondary noise signal, and may be configured to cancel the first sound signal within the noise reduction space.
It should be noted that, the electronic device may also play the second sound signal and the fourth sound signal through different speakers, respectively.
The second microphone 440 may be a microphone inside the noise reduction space. In some embodiments, the second microphone 440 may include the feedback microphone 250 described above. The sound signal collected by the second microphone 440 from the noise reduction space is the third sound signal. In some embodiments, the third sound signal may include at least a portion of the second sound signal that is played by the speaker and transmitted via the noise reduction space, and may include a noise signal that remains after passing through the ANC.
For example, the first sound signal may include a hum sound from the operation of the fan, the second sound signal may include a infrasonic wave, and the third sound signal may include the infrasonic wave and a weak hum sound remaining after ANC of the first sound signal.
The first secondary path building block 450 may be configured to determine a current first secondary path transfer function of the noise reduction space according to the second sound signal played by the speaker and the third sound signal received by the second microphone 440, where the first secondary path transfer function is used to indicate an acoustic response of the physical path between the speaker and the second microphone 440 to the sound signal.
The second secondary path construction module 460 may store second secondary path transfer functions corresponding to the noise reduction space of the electronic device under various spatial features, where the second secondary path transfer functions may be established by the second secondary path construction module 460 in an offline state (e.g., before the electronic device leaves the factory). In some embodiments, the electronic device comprises a headset, and the second secondary path transfer functions are secondary path transfer functions constructed for the headset on different human ears (large, medium, and small ears), in multiple wearing postures, and at multiple wearing tightness levels. In some embodiments, the electronic device comprises an in-vehicle device, and the second secondary path transfer functions are secondary path transfer functions constructed for the in-vehicle device with the doors, windows, air conditioner, trunk, etc. of different vehicles open and closed.
The leakage state determination module 470 may be configured to determine current leakage state data of the noise reduction space based on the first secondary path transfer function and the at least one second secondary path transfer function, where the leakage state data may be used to indicate the leakage state of the noise reduction space with respect to sound signals. In some embodiments, the leakage state data may include leakage levels, and each leakage level may correspond to one second secondary path transfer function. For example, if the second secondary path transfer functions include secondary path transfer function 1, secondary path transfer function 2, and secondary path transfer function 3, then the leakage levels may include level 1, level 2, and level 3 corresponding to secondary path transfer function 1, secondary path transfer function 2, and secondary path transfer function 3, in that order.
It should be noted that, the leakage level corresponding to each second secondary path transfer function may be determined by a related technician, and of course, in practical application, the electronic device may also determine the leakage level corresponding to the second secondary path transfer function by other manners, and the embodiment of the present application does not limit a specific manner of determining the leakage level corresponding to the second secondary path transfer function.
The noise reduction coefficient matching module 480 may be configured to determine a corresponding noise reduction coefficient according to the current leakage state data. The different noise reduction coefficients may enable the electronic device to generate different fourth sound signals (for example, different corresponding frequency bands and/or different energy magnitudes corresponding to the same frequency band), so as to achieve different noise reduction effects. In some embodiments, the noise reduction coefficients may be filter coefficients that perform ANC.
In the embodiment of the application, the electronic device may acquire the first sound signal outside the noise reduction space. Because the first sound signal can influence the range of sound signals which can be perceived by human ears in practice, the second sound signal which is not perceived by human ears is played in the noise reduction space in response to the first sound signal, and the interference of the second sound signal to a user can be reduced. The electronic device may further obtain a third sound signal inside the noise reduction space, and play a fourth sound signal for canceling the first sound signal inside the noise reduction space based on the second sound signal and the third sound signal, thereby implementing ANC with reduced interference to the user. And because the interference of the ANC realization process to the user is reduced, the real-time ANC can be realized, and the noise reduction effect is improved.
The technical scheme of the present application is described in detail below with specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Referring to fig. 5, a flowchart of a method for ANC according to an embodiment of the present application is provided. It should be noted that the method is not limited by the specific order shown in fig. 5 and described below, and it should be understood that, in other embodiments, the order of some steps in the method may be interchanged according to actual needs, or some steps in the method may be omitted or deleted. The method comprises the following steps:
S501, the electronic device acquires a first sound signal outside the noise reduction space.
The electronic device may obtain the first sound signal through a first microphone external to the noise reduction space. The first sound signal may be used to indicate the current noise reduction scene, and is also the primary noise signal that needs to be reduced.
In some embodiments, if the electronic device is an earphone, the first microphone may include at least one of a reference microphone and a talk microphone, and the electronic device may collect a sound signal through the first microphone as the first sound signal.
S502, the electronic equipment responds to the first sound signal, and plays a second sound signal in the noise reduction space, wherein the second sound signal is a sound signal which is not perceived by human ears.
Different noise reduction scenes may include different noise signals, and these noise signals affect the range of sound signals that the human ear can actually perceive. Since the first sound signal can indicate the current noise reduction scene, playing, in response to the first sound signal, a second sound signal that is not perceived by the human ear inside the noise reduction space allows the second sound signal to better match the first sound signal and the current noise reduction scene. This reduces the interference of the second sound signal with the user, so the second sound signal can be played at any time, which facilitates real-time ANC.
The electronic device may play the second sound signal through a speaker inside the noise reduction space. In some embodiments, if the electronic device is a headset, the electronic device may play the second sound signal through a speaker in the ear canal.
In some embodiments, when the noise reduction scenes indicated by the first sound signals are different, the second sound signals may be different, so that it can be ensured that the corresponding second sound signals can be played in the different noise reduction scenes, interference of the second sound signals to the user is further reduced, and real-time ANC is convenient to realize.
The electronic device may determine the noise reduction scene indicated by the first sound signal, and the corresponding second sound signal, based on the sound characteristics of the first sound signal.
In some embodiments, the manner in which the electronic device determines the noise reduction scene based on the first sound signal may be as follows, as shown in fig. 6.
In some embodiments, when the first energy amplitude corresponding to the first sound signal is less than the energy amplitude threshold, the noise reduction scene indicated by the first sound signal is a quiet scene, and the second sound signal may include at least one of an ultrasonic wave and an infrasonic wave.
The range of sound frequencies in nature is very broad, whereas the range of frequencies that the human ear can perceive is very limited. As shown in fig. 7, the frequency range of sounds emitted by a person is 85Hz (hertz)-1100Hz, and the frequency range of sound signals that the human ear can perceive is 20Hz-20000Hz; outside this range, a sound signal below 20Hz is an infrasonic wave and a sound signal above 20000Hz is an ultrasonic wave, and neither can be perceived by the human ear. Thus, in a quiet scene, the electronic device may use at least one of an infrasonic wave and an ultrasonic wave as the second sound signal, thereby preventing the user from perceiving the second sound signal.
It should be noted that, compared with ultrasonic waves, infrasonic waves lose less energy during propagation, which benefits the accuracy of the subsequently determined noise reduction coefficient and improves the noise reduction effect.
In some embodiments, the first sound signal may include a plurality of frame sub-signals, and the duration of each frame sub-signal may be a preset duration, and the electronic device may determine the first energy amplitude based on the energy spectrum corresponding to each frame sub-signal, where the first energy amplitude may be an average energy amplitude of a plurality of frame sub-signals, or may be a sum of energy amplitudes of the plurality of frame sub-signals. Of course, in practical applications, the electronic device may determine the first energy magnitude in other ways.
It should be noted that the preset duration may be determined in advance by the electronic device. The preset duration may be 5ms (millisecond), 7.5ms, 10ms, or 15ms, but in practical application, the preset duration may also be other values, and the size of the preset duration is not limited in the embodiment of the present application.
It should be further noted that, in the embodiment of the present application, the number of frames of the sub-signals included in the first sound signal, or the duration of the first sound signal is not limited, where the duration of the first sound signal is equal to the sum of the durations of the multi-frame sub-signals included in the first sound signal. In some embodiments, the first sound signal may comprise 6 frames, 10 frames, or 12 frames of sub-signals, or, in some embodiments, the duration of the first sound signal may be 1 minute or 2 minutes.
It should be further noted that the energy amplitude threshold may be determined by the electronic device in advance based on the manner in which the first energy amplitude is determined. In some embodiments, if the first energy amplitude is the average energy amplitude of the multi-frame sub-signal, the energy amplitude threshold may be 30dB (decibel) or 40dB. The magnitude of the energy amplitude threshold is not limited by the embodiments of the present application.
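For illustration, a minimal Python sketch of computing the first energy amplitude in its average form is shown below; the 10 ms frame duration, the dB reference, and the use of the time-domain frame energy (rather than a full energy spectrum) are assumptions of this sketch.

```python
import numpy as np

def first_energy_amplitude_db(first_signal, fs, frame_ms=10.0):
    """Average energy amplitude (dB) over the frame sub-signals of the first sound signal."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(first_signal) // frame_len
    frames = np.asarray(first_signal[:n_frames * frame_len], dtype=float)
    frames = frames.reshape(n_frames, frame_len)
    frame_energy = np.sum(frames ** 2, axis=1)          # energy of each frame sub-signal
    frame_db = 10.0 * np.log10(frame_energy + 1e-12)    # avoid log of zero
    return float(np.mean(frame_db))                     # first energy amplitude (average form)
```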
In some embodiments, the noise reduction scene indicated by the first sound signal is an unsteady noise scene (i.e., the first sound signal does not include a steady fifth sound signal) in any of the following cases: the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold and the first sound signal does not include a fifth sound signal corresponding to the first frequency band; or the first energy amplitude is greater than or equal to the energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to the first frequency band, but the first duration of the fifth sound signal in the first sound signal is less than the duration threshold; or the first energy amplitude is greater than or equal to the energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to the first frequency band, the first duration of the fifth sound signal in the first sound signal is greater than or equal to the duration threshold, but the energy fluctuation of the second energy amplitude corresponding to the fifth sound signal within the first duration is greater than or equal to the fluctuation range threshold. In this case, the second sound signal includes a sound signal whose energy amplitude is below the hearing threshold.
In some embodiments, the first sound signal may include multiple frames of sub-signals, the duration of each frame of sub-signal may be a preset duration, the electronic device may determine the first frame number of sub-signals that include the fifth sound signal corresponding to the first frequency band, and the second energy amplitude may be the energy amplitude of the first frequency band in those sub-signals. The noise reduction scene indicated by the first sound signal is an unstable noise scene (i.e., the first sound signal does not include a stable fifth sound signal) in any of the following cases: the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold and no frame of sub-signal includes the fifth sound signal corresponding to the first frequency band; or the first energy amplitude is greater than or equal to the energy amplitude threshold and the first frame number of sub-signals including the fifth sound signal corresponding to the first frequency band is less than the frame number threshold; or the first energy amplitude is greater than or equal to the energy amplitude threshold, the first frame number of sub-signals including the fifth sound signal corresponding to the first frequency band is greater than or equal to the frame number threshold, but the energy fluctuation of the second energy amplitude within those sub-signals is greater than or equal to the fluctuation range threshold. In this case, the second sound signal includes a sound signal whose energy amplitude is below the hearing threshold.
The first frequency band may be predetermined by the electronic device. In some embodiments, the first frequency band is greater than 10Hz and less than 1000Hz. Because different human ears perceive sound signals in the band greater than 10Hz and less than 1000Hz quite differently, and the leakage of sound signals in this band is also relatively severe, the frequency band corresponding to the second sound signal may be greater than 10Hz and less than 1000Hz, so that ANC can focus primarily on noise signals in this band, thereby improving the noise reduction effect. Of course, it should be noted that, in practical applications, the first frequency band may be in other ranges.
The first duration or the first number of frames may be used to indicate a degree of stability of the fifth sound signal over time. The longer the first time length or the larger the first frame number, the more stable in time the fifth sound signal is. Wherein the first frame number may correspond to a first time length, the first time length being equal to a product of the first frame number and a time length of each frame of the sub-signal; the frame number threshold may correspond to a duration threshold that is equal to the product of the frame number threshold and the duration of each frame of the sub-signal.
It should be noted that the frame number threshold or the duration threshold may be determined in advance by the electronic device. In some embodiments, the frame number threshold may be 4 frames, 5 frames, 6 frames, etc., or the duration threshold may be 20ms, 25ms, or 30ms. Of course, in practical applications, the frame number threshold or the duration threshold may also take other values, and the embodiments of the present application do not limit these values.
In some embodiments, the first frame number is a frame number of a consecutive plurality of sub-signals comprising the fifth sound signal, such that the first frame number is more accurately indicative of a degree of temporal stability of the fifth sound signal.
The second energy magnitude may be used to indicate a degree of stability of the fifth sound signal in sound intensity. The smaller the fluctuation of the second energy amplitude is, the more stable the sound intensity of the fifth sound signal is represented.
The fluctuation range threshold may be determined in advance by the electronic device. In some embodiments, the fluctuation range threshold may be 10dB or 20dB, and of course, in practical applications, the fluctuation range threshold may also be other values, which are not limited in the magnitude of the fluctuation range threshold in the embodiments of the present application.
For sound signals of different frequencies, the sensitivity of the human ear to sound signals of different loudness also differs, as shown in fig. 8. In fig. 8, the horizontal axis represents frequency and the vertical axis represents energy amplitude; the energy amplitude of a sound signal is positively correlated with its intensity, and the broken line represents the hearing threshold. In the range of 0Hz to 10000Hz, when the energy amplitude of a sound signal is smaller than the hearing threshold, the human ear cannot perceive the sound signal. Taking 100Hz as an example, as can be seen from fig. 8, the hearing threshold corresponding to 100Hz is about 25dB, so for a sound signal of 100Hz and 20dB, the human ear will not perceive it. Therefore, using a sound signal below the hearing threshold as the second sound signal prevents the user from perceiving the second sound signal. In addition, compared with using an infrasonic wave as the second sound signal, selecting, as the second sound signal, a sound signal in any frequency band whose energy amplitude is smaller than the hearing threshold corresponding to that band gives the second sound signal a larger selectable frequency range, which improves the flexibility and anti-interference performance of generating the second sound signal, further improves the accuracy of determining the noise reduction coefficient and generating the fourth sound signal, and further improves the noise reduction effect.
It should be noted that, the hearing threshold corresponding to each frequency band may be determined in advance by the electronic device.
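A minimal sketch of generating such a sub-threshold second sound signal is given below; the hearing-threshold table, the safety margin, and the dB-SPL-to-amplitude conversion are illustrative assumptions only, and the real per-band thresholds are the ones predetermined by the device as noted above.

```python
import numpy as np

# Illustrative hearing thresholds (Hz -> dB); the real table is predetermined by the device.
HEARING_THRESHOLD_DB = {50: 40.0, 100: 25.0, 200: 15.0, 500: 8.0, 1000: 5.0}

def subthreshold_probe(freq_hz, fs, duration_s, margin_db=5.0):
    """Generate a pure-tone second sound signal whose energy amplitude stays below
    the hearing threshold of its frequency band (quiet / unsteady noise scenes)."""
    level_db = HEARING_THRESHOLD_DB[freq_hz] - margin_db   # stay safely under the threshold
    amplitude = 10 ** (level_db / 20.0) * 20e-6            # dB SPL -> pascals (assumption)
    t = np.arange(int(fs * duration_s)) / fs
    return amplitude * np.sin(2 * np.pi * freq_hz * t)
```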
In some embodiments, when the current noise reduction scene is an unstable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave.
In some embodiments, when the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to the first frequency band, the first duration of the fifth sound signal in the first sound signal is greater than or equal to the duration threshold, and the energy fluctuation of the second energy amplitude corresponding to the fifth sound signal within the first duration is less than the fluctuation range threshold, the noise reduction scene indicated by the first sound signal is a stable noise scene (i.e., the first sound signal includes a stable fifth sound signal), and the second sound signal includes a sound signal masked by the first sound signal.
In some embodiments, the first sound signal may include multiple frames of sub-signals, the duration of each frame of sub-signal may be a preset duration, the electronic device may determine the first frame number of sub-signals including the fifth sound signal corresponding to the first frequency band, and the second energy amplitude may be the energy amplitude of the first frequency band in those sub-signals. When the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, the first frame number of sub-signals including the fifth sound signal corresponding to the first frequency band is greater than or equal to the frame number threshold, and the energy fluctuation of the second energy amplitude within those sub-signals is less than the fluctuation range threshold, the noise reduction scene indicated by the first sound signal is a stable noise scene (i.e., the first sound signal includes a stable fifth sound signal), and the second sound signal includes a sound signal masked by the first sound signal.
Since the first sound signal includes the stabilized fifth sound signal, the second sound signal masked by the first sound signal can be played based on the masking effect (frequency domain masking effect or time domain masking effect) of the sound, so that the second sound signal is still not perceived by the human ear. In some embodiments, to enable the second sound signal to be stably masked, reducing problems perceived by the user, the second sound signal may include a sound signal masked by the fifth sound signal.
In human hearing, a weaker sound is masked by another, stronger sound; this is the masking effect. Masking effects include frequency domain masking and time domain masking. Frequency domain masking refers to masking that occurs while the masking sound signal and the masked sound signal act simultaneously. A strong tone in the frequency domain can mask a nearby weak tone that sounds at the same time; in general, the closer the weak tone is to the strong tone, the more easily it is masked, and conversely, a weak tone farther from the strong tone is not easily masked. Here, the stronger sound signal capable of masking other sound signals may be referred to as the masking sound signal, and the weaker sound signal to be masked may be referred to as the masked sound signal. As shown in fig. 8, for a masking sound signal with a frequency of about 300Hz and an energy amplitude of about 55dB, the corresponding masking threshold may be as shown by the solid line; for a sound signal in any frequency band, if its energy amplitude is smaller than the masking threshold corresponding to that band, the sound signal is masked by the masking sound signal. Time domain masking refers to masking that occurs when the masking sound signal and the masked sound signal do not act at the same time: the masking sound signal can mask the masked sound signal shortly before and after it. In addition, compared with using an infrasonic wave as the second sound signal, selecting a sound signal masked by the first sound signal as the second sound signal gives the second sound signal a larger selectable frequency range and energy amplitude, which improves the flexibility of generating the second sound signal and its anti-interference performance, further improves the accuracy of subsequently determining the noise reduction coefficient and generating the fourth sound signal, and further improves the noise reduction effect.
In some embodiments, taking determining the second sound signal based on the fifth sound signal as an example, the electronic device may determine a plurality of frequency bands masked by the fifth sound signal and masking thresholds corresponding to respective frequency bands based on the first frequency band and the second energy amplitude corresponding to the fifth sound signal, and determine the second sound signal based on the plurality of frequency bands and the masking thresholds corresponding to respective frequency bands.
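The following sketch illustrates the idea of choosing the second sound signal under the masking threshold of a stable fifth sound signal. The triangular spreading model and its slope and offset values are rough illustrative assumptions and do not reproduce the masking curve of fig. 8.

```python
import numpy as np

def masking_threshold_db(masker_freq, masker_level_db, probe_freq,
                         slope_db_per_octave=12.0, offset_db=10.0):
    """Rough masking threshold of a steady masker at a nearby probe frequency."""
    octaves = abs(np.log2(probe_freq / masker_freq))
    return masker_level_db - offset_db - slope_db_per_octave * octaves

def masked_probe_candidates(masker_freq, masker_level_db, candidate_freqs, margin_db=3.0):
    """Pick (frequency, level) pairs for the second sound signal that stay below the
    masking threshold, so the human ear still does not perceive the probe."""
    probes = []
    for f in candidate_freqs:
        limit = masking_threshold_db(masker_freq, masker_level_db, f)
        if limit - margin_db > 0:
            probes.append((f, limit - margin_db))
    return probes

# Example: a 300 Hz, 55 dB fifth sound signal (cf. fig. 8) masking nearby probe tones.
print(masked_probe_candidates(300, 55, [250, 280, 320, 350]))
```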
In some embodiments, when the current noise reduction scene is a steady noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave, and/or a sound signal whose energy amplitude is less than the hearing threshold.
It should be noted that, comparing the quiet scene, the unstable noise scene, and the stable noise scene, when the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold (i.e., the scene is not a quiet scene), the noise reduction scene in which the electronic device is currently located may be either the stable noise scene or the unstable noise scene. In some embodiments, the electronic device may determine that it is currently in an unstable noise scene upon determining that it is not in a stable noise scene.
In some embodiments, the second sound signal may be a pure sound signal with a single frequency, or may be a sound signal obtained by superimposing pure sound signals with multiple frequencies. When the frequency components included in the second sound signal are more complex, the anti-interference capability of the second sound signal is stronger, so that the accuracy of the subsequent determination of the noise reduction coefficient and the generation of the fourth sound signal is higher, and the noise reduction effect is better.
In some embodiments, for quiet scenes and unsteady noise scenes, the timing at which the electronic device plays the second sound signal may be independent of the timing at which the first sound signal is detected. In other embodiments, for a steady noise scene, since the first sound signal (or the fifth sound signal) is required to mask the second sound signal, the electronic device may play the second sound signal while the first sound signal (or the fifth sound signal) is present, so that the first sound signal (or the fifth sound signal) masks the second sound signal through the frequency domain masking effect; alternatively, the electronic device may play the second sound signal within the leading masking duration or the lagging masking duration of the first sound signal (or the fifth sound signal), so that the first sound signal (or the fifth sound signal) masks the second sound signal through the time domain masking effect.
The leading masking duration may be 5ms, and the lagging masking duration may be 50ms-200ms. Of course, in practical applications, the leading masking duration and the lagging masking duration may take other values determined through experiments or in other manners, and the embodiments of the present application do not limit their lengths.
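A small sketch of the time-domain masking check is shown below; the 100 ms lag value is one point inside the 50ms-200ms range stated above, and the simple window test is an assumption of this sketch.

```python
def within_time_masking_window(probe_start_ms, masker_start_ms, masker_end_ms,
                               lead_ms=5.0, lag_ms=100.0):
    """True if the second sound signal starts inside the leading masking duration before
    the masker or inside the lagging masking duration after it (time-domain masking)."""
    pre = masker_start_ms - lead_ms <= probe_start_ms < masker_start_ms
    post = masker_end_ms < probe_start_ms <= masker_end_ms + lag_ms
    return pre or post
```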
It should be noted that, in practical applications, the electronic device may determine the current noise reduction scene in other manners. For example, in some embodiments, an electronic device may receive a user-submitted noise reduction scene.
S503, the electronic device acquires a third sound signal inside the noise reduction space.
The electronic device may acquire the third sound signal through a second microphone inside the noise reduction space. The third sound signal may comprise at least part of the aforementioned noise signal not perceived by the human ear, i.e. the second sound signal. In some embodiments, the third sound signal may also include a residual noise signal after the ANC.
In some embodiments, if the electronic device is a headset, the second microphone may include a feedback microphone.
S504, the electronic device plays a fourth sound signal inside the noise reduction space, where the fourth sound signal is used to partially or completely cancel the first sound signal.
Because the electronic device obtains the third sound signal inside the noise reduction space while the second sound signal is being played inside the noise reduction space, the influence of the spatial features of the current noise reduction space on sound transmission can be determined based on the difference between the third sound signal and the second sound signal, and a more accurate fourth sound signal can then be generated for the current noise reduction space, so that the fourth sound signal can better cancel the first sound signal. In addition, because the interference of the ANC process with the user is reduced, real-time ANC can be achieved, improving the noise reduction effect.
Wherein the electronic device may play the fourth sound signal through a speaker inside the noise reduction space. The speaker for playing the fourth sound signal and the speaker for playing the second sound signal may be the same speaker or different speakers.
In some embodiments, the electronic device may determine a noise reduction coefficient based on the second sound signal and the third sound signal, generate a fourth sound signal based on the noise reduction coefficient, and play the fourth sound signal.
The electronic device may determine the current secondary path transfer function of the noise reduction space based on the second sound signal and the third sound signal, and determine the noise reduction coefficient based on that secondary path transfer function. When the anti-interference performance of the second sound signal is stronger and its energy amplitude is larger, the accuracy of the determined secondary path transfer function is higher, and correspondingly, the accuracy of the determined noise reduction coefficient is higher.
The manner in which the electronic device determines the noise reduction coefficient based on the second sound signal and the third sound signal is described below with reference to fig. 9.
In the embodiment of the application, the electronic device may acquire the first sound signal outside the noise reduction space. Because the first sound signal can influence the range of the sound signal which can be perceived by the human ear in practice, the second sound signal which is not perceived by the human ear is played in the noise reduction space in response to the first sound signal, and the interference of the second sound signal to the user can be reduced. The electronic device may further obtain a third sound signal inside the noise reduction space, and play a fourth sound signal for canceling the first sound signal inside the noise reduction space based on the second sound signal and the third sound signal, thereby implementing ANC with reduced interference to the user. And because the interference of the ANC realization process to the user is reduced, the real-time ANC can be realized, and the noise reduction effect is improved.
Referring to fig. 6, a flowchart of a method for determining a noise reduction scene according to an embodiment of the present application is provided. It should be noted that the method is not limited by the specific order shown in fig. 6 and described below, and it should be understood that, in other embodiments, the order of some steps in the method may be interchanged according to actual needs, or some steps in the method may be omitted or deleted. The method comprises the following steps:
S601, the electronic device judges whether it is currently in a non-noise reduction scene or a noise reduction scene. If it is in a non-noise reduction scene, the process ends; otherwise, S602 is executed.
The non-noise reducing scene may be a scene that does not require ANC or at least does not require ANC in the manner provided by embodiments of the present application. Accordingly, the performing of the subsequent steps may be stopped when the electronic device is currently in a non-noise reducing scene, and may continue when the electronic device determines that the electronic device is not currently in a non-noise reducing scene.
The non-noise reduction scene and its manner of recognition may be determined in advance by the electronic device. In some embodiments, the non-noise reduction scene may include at least one of a call scene and a multimedia scene. The call scene may refer to the electronic device currently conducting a voice call with another device, and the multimedia scene may refer to the electronic device playing multimedia data, such as music and video, through the speaker.
In some embodiments, the electronic device may determine whether the first sound signal includes a voice signal by performing voice activity detection (voice activity detection, VAD) on the first sound signal; if so, it determines that it is currently in a call scene, and otherwise, that it is not currently in a call scene.
VAD is a voice processing technology capable of detecting whether a voice signal is included in a sound signal to be detected.
In some embodiments, if the first sound signal includes sound signals collected by both the reference microphone and the talk microphone, the electronic device may determine a correlation coefficient between the first sound signal collected by the reference microphone and the first sound signal collected by the talk microphone; if the correlation coefficient is higher than a preset correlation coefficient threshold, it may be determined that the electronic device is currently in a call scene, and otherwise that it is not in a call scene.
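As an illustration of this correlation test, a minimal sketch is given below; the normalized cross-correlation and the 0.6 threshold are assumptions of this sketch, not values recited in the embodiment.

```python
import numpy as np

def is_call_scene(ref_sig, talk_sig, corr_threshold=0.6):
    """Judge the call scene from the correlation between the first sound signals
    collected by the reference microphone and the talk microphone."""
    a = np.asarray(ref_sig, dtype=float) - np.mean(ref_sig)
    b = np.asarray(talk_sig, dtype=float) - np.mean(talk_sig)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    corr = float(np.dot(a, b) / denom) if denom > 0 else 0.0
    return corr >= corr_threshold
```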
Of course, in practical application, the electronic device may also determine whether the current call scene is in another manner, and the embodiment of the present application does not limit a specific manner of determining whether the current call scene is in.
In some embodiments, the electronic device may detect whether a sound signal from an audio player or a video player is included in a sound signal output to a speaker, and if so, may determine that a multimedia scene is currently in, otherwise, may determine that a multimedia scene is not currently in.
Of course, in practical applications, the electronic device may also determine whether the electronic device is currently in a multimedia scene in other manners, which is not limited in the embodiment of the present application.
In addition, in other embodiments, the electronic device may not distinguish between non-noise-reduction scenes, that is, all scenes are noise-reduction scenes, so that S601 is omitted and the subsequent steps are directly performed.
S602, the electronic device judges whether the first energy amplitude of the first sound signal is smaller than the energy amplitude threshold. If yes, it determines that the current noise reduction scene is a quiet scene; otherwise, S603 is executed.
In some embodiments, the first sound signal may include a plurality of frame sub-signals, and the duration of each frame sub-signal may be a preset duration, and the electronic device may determine the first energy amplitude based on the energy spectrum corresponding to each frame sub-signal, where the first energy amplitude may be an average energy amplitude of a plurality of frame sub-signals, or may be a sum of energy amplitudes of the plurality of frame sub-signals. Of course, in practical applications, the electronic device may determine the first energy magnitude in other ways.
It should be noted that the preset duration may be determined in advance by the electronic device. The preset duration may be 5ms (millisecond), 7.5ms, 10ms, or 15ms, but in practical application, the preset duration may also be other values, and the size of the preset duration is not limited in the embodiment of the present application.
It should be further noted that, in the embodiments of the present application, the number of frames of sub-signals included in the first sound signal is not limited. In some embodiments, the first sound signal may include 6 frames, 10 frames, or 12 frames of sub-signals.
It should be further noted that the energy amplitude threshold may be determined by the electronic device in advance based on the manner in which the first energy amplitude is determined. In some embodiments, if the first energy amplitude is the average energy amplitude of the multi-frame sub-signal, the energy amplitude threshold may be 30dB or 40dB. The magnitude of the energy amplitude threshold is not limited by the embodiments of the present application.
S603, the electronic device tracks the frequency bands corresponding to each frame of sub-signal and the energy amplitude corresponding to each frequency band, and determines the first frame number of sub-signals including the fifth sound signal corresponding to the first frequency band.
The electronic device may determine the frequency bands corresponding to each frame of sub-signal and the energy amplitude corresponding to each frequency band, and thereby determine how the frequency bands and the energy in each band change across the multiple frames of sub-signals. If the frequency bands corresponding to any frame of sub-signal include the first frequency band, the electronic device continues to check whether the frequency bands corresponding to the next frame of sub-signal also include the first frequency band. If they do, the electronic device judges whether the energy fluctuation of the second energy amplitude corresponding to the fifth sound signal across the two frames of sub-signals is smaller than the fluctuation range threshold. If the energy fluctuation is less than the fluctuation range threshold, the first frame number is increased by 1. If the energy fluctuation is greater than or equal to the fluctuation range threshold, the accumulation of the first frame number may be stopped and the next frame of sub-signal may continue to be examined.
In some embodiments, the first frame number is a frame number of a consecutive plurality of sub-signals comprising the fifth sound signal, such that the first frame number is more accurately indicative of a degree of temporal stability of the fifth sound signal.
The first frequency band may be predetermined by the electronic device. In some embodiments, the first frequency band is greater than 10Hz and less than 1000Hz.
For example, suppose the first sound signal includes a first frame sub-signal through a sixth frame sub-signal, the fluctuation range threshold is 10dB, and the frame number threshold is 5. The electronic device acquires the frequency bands corresponding to the first frame sub-signal and the energy amplitude corresponding to each frequency band, and determines that the first frame sub-signal includes a fifth sound signal at 100Hz with an energy amplitude of 50dB. It then acquires the frequency bands and energy amplitudes of the second frame sub-signal and determines that it also includes the fifth sound signal at 100Hz, with an energy amplitude of 55dB. The energy amplitude fluctuation of the fifth sound signal between the first and second frame sub-signals is 5dB, which is smaller than 10dB, so the first frame number is increased by 1. The electronic device continues to acquire the frequency bands and energy amplitudes of the third frame sub-signal, and so on up to the sixth frame sub-signal, to obtain the final first frame number.
S604, the electronic device judges whether the first frame number is smaller than the frame number threshold. If yes, it determines that the current noise reduction scene is an unstable noise scene; otherwise, it determines that the current noise reduction scene is a stable noise scene.
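For illustration, the following sketch ties S602-S604 together; the threshold values and the frame-by-frame tracking loop are assumptions that follow the example above rather than a prescribed implementation.

```python
def track_first_frame_number(frames_band_levels, first_band_hz, fluctuation_db=10.0):
    """S603: count consecutive frame sub-signals whose energy in the first frequency
    band fluctuates by less than the fluctuation range threshold."""
    count, last_level = 0, None
    for levels in frames_band_levels:             # one dict {band_hz: level_dB} per frame
        level = levels.get(first_band_hz)
        if level is None:
            break                                 # fifth sound signal absent in this frame
        if last_level is not None and abs(level - last_level) >= fluctuation_db:
            break                                 # energy fluctuation too large
        count += 1
        last_level = level
    return count

def classify_noise_scene(first_energy_db, first_frame_number,
                         energy_threshold_db=40.0, frame_threshold=5):
    """S602/S604: quiet, unstable-noise, or stable-noise scene (example thresholds)."""
    if first_energy_db < energy_threshold_db:
        return "quiet"
    if first_frame_number < frame_threshold:
        return "unstable_noise"
    return "stable_noise"
```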
In the embodiment of the application, the electronic device may identify whether the current environment is a noise reduction scene or a non-noise reduction scene. If the current noise reduction scene is determined, whether the current noise reduction scene is a quiet scene, an unstable noise scene or a stable noise scene can be further determined based on the sound characteristics of the first sound signal, so that the subsequent selection of the second sound signal which is not perceived by the human ear for the specific noise reduction scene is facilitated, and the interference of the second sound signal to the user is reduced.
Referring to fig. 9, a flowchart of a method for determining a noise reduction coefficient based on a second sound signal and a third sound signal according to an embodiment of the present application is provided. It should be noted that the method is not limited by the specific order shown in fig. 9 and described below, and it should be understood that, in other embodiments, the order of some steps in the method may be interchanged according to actual needs, or some steps in the method may be omitted or deleted. The method comprises the following steps:
S901, the electronic device determines a current first secondary path transfer function of the noise reduction space based on the second sound signal and the third sound signal.
During ANC, the sound signal played by the speaker may include the second sound signal not perceived by the human ear, and may further include the noise cancellation signal for ANC, that is, the fourth sound signal; the third sound signal includes the collected second sound signal not perceived by the human ear, and may further include the residual noise signal after ANC. The electronic device may determine the first secondary path transfer function based on the fact that the second sound signal and the residual noise signal are uncorrelated.
In some embodiments, the first secondary path transfer function may be determined by the LMS algorithm. As shown in fig. 10, x(n) is the second sound signal not perceived by the human ear; s'(z) is the first secondary path transfer function; y(n) is the estimate of x(n) after transmission via the first secondary path; d(n) is the residual noise signal; e(n) is the third sound signal. The electronic device may determine s'(z) based on x(n) and e(n).
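A minimal sketch of such an identification is shown below; the normalized LMS form, the tap count, and the step size are assumptions of this sketch, not values taken from the embodiment.

```python
import numpy as np

def estimate_secondary_path(x, e, taps=128, mu=0.05, eps=1e-8):
    """Estimate s'(z) (fig. 10) from the played inaudible probe x(n) and the third
    sound signal e(n) picked up by the second microphone, using an NLMS identifier.
    The residual noise d(n) acts as uncorrelated disturbance and averages out."""
    s_hat = np.zeros(taps)
    buf = np.zeros(taps)
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]                              # newest probe sample first
        y = float(np.dot(s_hat, buf))              # predicted probe at the second microphone
        err = e[n] - y
        s_hat += mu * err * buf / (float(np.dot(buf, buf)) + eps)
    return s_hat                                   # impulse-response estimate of the secondary path
```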
S902, the electronic equipment acquires at least one stored second secondary path transfer function and leakage state data corresponding to each second secondary path transfer function.
For noise reduction spaces with various spatial features, the electronic device may determine and store, in advance, the second secondary path transfer function corresponding to each noise reduction space and the leakage state data corresponding to that second secondary path transfer function.
For example, the at least one second secondary path transfer function may be as shown in fig. 11. Each curve may represent a second secondary path transfer function, with the abscissa representing frequency and the ordinate representing the amplitude corresponding to that frequency. A relevant technician may determine the leakage state data corresponding to each second secondary path transfer function according to experience and actual noise reduction requirements. Taking the leakage state data as leakage levels as an example, fig. 11 includes 8 second secondary path transfer functions, and the leakage levels corresponding to the second secondary path transfer functions from top to bottom are level 1, level 2, level 3, level 4, level 5, level 6, level 7, and level 8 in order.
In some embodiments, the electronic device comprises a headset, and the second secondary path transfer functions are secondary path transfer functions constructed for the headset on different human ears (large, medium, and small ears), in multiple wearing postures, and at multiple wearing tightness levels. In some embodiments, the electronic device comprises an in-vehicle device, and the second secondary path transfer functions are secondary path transfer functions constructed for the in-vehicle device with the doors, windows, air conditioner, trunk, etc. of different vehicles open and closed.
S903, the electronic device compares the first secondary path transfer function with each second secondary path transfer function, and determines current leakage state data.
The electronic device may compare the first secondary path transfer function with each second secondary path transfer function, determine a second secondary path transfer function having a smallest difference from the first secondary path transfer function, where the leakage state data corresponding to the second secondary path transfer function is current leakage state data.
Wherein the electronic device may compare the magnitude of the first secondary path transfer function with the magnitude of the second secondary path transfer function, as shown in fig. 11, to determine a second secondary path transfer function having a smallest difference from the magnitude of the first secondary path transfer function; alternatively, the electronic device may compare the phase of the first secondary path transfer function with the phase of the second secondary path transfer function to determine the second secondary path transfer function that has the smallest phase difference from the first secondary path transfer function.
In some embodiments, the electronic device may determine, through mathematical statistics, the difference in magnitude or phase between the first secondary path transfer function and each second secondary path transfer function at each frequency, thereby determining the second secondary path transfer function with the smallest magnitude difference from the first secondary path transfer function, or the second secondary path transfer function with the smallest phase difference from the first secondary path transfer function. Alternatively, in other embodiments, the electronic device may generate magnitude images of the first secondary path transfer function and the second secondary path transfer functions and determine, by image analysis, the second secondary path transfer function with the smallest magnitude difference from the first secondary path transfer function; or generate phase images of the first secondary path transfer function and the second secondary path transfer functions and determine, by image analysis, the second secondary path transfer function with the smallest phase difference from the first secondary path transfer function.
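A minimal sketch of the magnitude-based comparison is given below, assuming the measured and stored responses are sampled on a common frequency grid; the mean absolute difference used here is only one possible statistic, and all names are illustrative. A phase-based variant would compare np.angle of the complex responses in the same way.

```python
import numpy as np

def match_leakage_state(s1_mag, stored_mags, leakage_levels):
    """Return the leakage state data of the stored second secondary path
    transfer function whose magnitude response is closest to the measured
    first secondary path transfer function.
    s1_mag and each entry of stored_mags share the same frequency grid;
    leakage_levels[i] is the leakage state data of stored_mags[i]."""
    diffs = [np.mean(np.abs(s1_mag - s2_mag)) for s2_mag in stored_mags]
    best = int(np.argmin(diffs))     # smallest overall magnitude difference
    return leakage_levels[best]
```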
S904, the electronic device determines a noise reduction coefficient corresponding to the leakage state data.
The electronic device may determine a corresponding noise reduction coefficient based on the leakage state data, such as the leakage level. The noise reduction coefficient may be used to generate a fourth sound signal corresponding to the first sound signal, so that the fourth sound signal can cancel the first sound signal, thereby implementing ANC.
In some embodiments, the electronic device may determine in advance the noise reduction coefficient corresponding to each piece of leakage state data; in this way, once the current leakage state data is determined, the corresponding noise reduction coefficient can be acquired from this pre-established correspondence. In other embodiments, the electronic device may determine a policy for deriving the noise reduction coefficient from leakage state data, such as a trained network learning model, and, once the current leakage state data is determined, may determine the current noise reduction coefficient based on the leakage state data and that policy.
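A minimal sketch of the lookup-table variant follows, assuming leakage levels 1 to 8 as in fig. 11 and one pre-tuned set of FIR coefficients per level; the zero vectors are placeholders for values that would come from offline calibration, and filtering the reference noise into the anti-noise (fourth sound signal) is shown only schematically.

```python
import numpy as np
from scipy.signal import lfilter

# Illustrative table: leakage level -> FIR noise reduction coefficients W(z).
# The zero vectors are placeholders; real values would come from offline tuning.
NOISE_REDUCTION_COEFFS = {level: np.zeros(64) for level in range(1, 9)}

def fourth_signal(leakage_level, reference_noise):
    """Look up the coefficients matching the current leakage level and filter
    the reference noise (first sound signal) into the anti-noise signal."""
    w = NOISE_REDUCTION_COEFFS[leakage_level]
    return -lfilter(w, [1.0], reference_noise)   # phase-inverted estimate of the noise at the ear
```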
In the embodiment of the application, the electronic device may determine, based on the second sound signal and the third sound signal, the current first secondary path transfer function of the noise reduction space, compare the first secondary path transfer function with multiple second secondary path transfer functions acquired under different noise reduction spaces, and determine, according to the differences between them, a noise reduction coefficient matched with the current noise reduction space. This noise reduction coefficient enables the generated and played fourth sound signal to better match the first sound signal outside the noise reduction space, so as to achieve a better noise reduction effect.
Referring to fig. 12, a flowchart of a method for ANC according to an embodiment of the present application is provided. It should be noted that the method is not limited by the specific order shown in fig. 12 and described below, and it should be understood that, in other embodiments, the order of some steps in the method may be interchanged according to actual needs, or some steps in the method may be omitted or deleted. The method comprises the following steps:
S1201, when the electronic device is in different noise reduction scenes, the electronic device respectively plays, inside the noise reduction space, second sound signals corresponding to the different noise reduction scenes, where the second sound signals are sound signals not perceived by the human ear.
Different noise reduction scenes may include different noise signals, and these noise signals affect the range of sound signals the human ear can actually perceive. By playing second sound signals matched to the current noise reduction scene, the electronic device reduces the interference of the second sound signal to the user, which facilitates real-time ANC.
In some embodiments, the electronic device may determine the current noise reduction scene in a similar or identical manner to the foregoing S501-S502, or in a manner shown in fig. 6, and generate and play a second sound signal corresponding to the noise reduction scene.
In some embodiments, the electronic device may receive the user-specified noise reduction scene and then generate and play the second sound signal in a similar or identical manner to S502 described previously. For example, the electronic device may provide a plurality of noise reduction scenes to a user and receive a noise reduction scene specified by the user among the plurality of noise reduction scenes.
Of course, in practical applications, the electronic device may determine the current noise reduction scene in other manners, and the manner in which the electronic device determines the current noise reduction scene in the embodiment of the present application is not limited.
In some embodiments, the second sound signal may include at least one of an ultrasonic wave and an infrasonic wave when the noise reduction scene in which the electronic device is currently located is a quiet scene.
In some embodiments, when the noise reduction scene in which the electronic device is currently located is an unstable noise scene (i.e., the first sound signal does not include a stable fifth sound signal), the second sound signal includes a sound signal having an energy amplitude below the hearing threshold. In some embodiments, when the current noise reduction scene is an unstable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave.
In some embodiments, when the noise reduction scene in which the electronic device is currently located is a stable noise scene (i.e., the first sound signal includes a stable fifth sound signal), the second sound signal includes a sound signal masked by the first sound signal. In some embodiments, when the current noise reduction scene is a stable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave, and/or a sound signal having an energy amplitude less than the hearing threshold.
It should be noted that, if the electronic device does not determine the noise reduction scene based on the first sound signal outside the noise reduction space, after determining that the noise reduction scene is a stable noise scene, the first sound signal may be acquired in a similar or identical manner to S501, and then the second sound signal masked by the first sound signal may be determined in a similar or identical manner to S502.
S1202, the electronic device acquires a third sound signal inside the noise reduction space, wherein the third sound signal at least comprises part of the second sound signal.
The manner in which the electronic device obtains the third sound signal in the noise reduction space may refer to the description related to S503, which is not described in detail herein.
S1203, the electronic device plays a fourth sound signal inside the noise reduction space, where the fourth sound signal is used to cancel part or all of the first sound signal from outside the noise reduction space.
The manner in which the electronic device plays the fourth sound signal in the noise reduction space based on the second sound signal and the third sound signal may refer to the related description in S504, which is not described in detail herein.
In the embodiment of the application, because the first sound signals included in different noise reduction scenes are different, and the first sound signal affects the range of sound signals the human ear can actually perceive, different second sound signals not perceived by the human ear are played depending on the noise reduction scene, which reduces the interference of the second sound signal to the user. The electronic device may further obtain a third sound signal inside the noise reduction space and, based on the second sound signal and the third sound signal, play a fourth sound signal for canceling the first sound signal inside the noise reduction space, thereby implementing ANC with reduced interference to the user. And because the interference of the ANC process to the user is reduced, real-time ANC can be realized and the noise reduction effect improved.
Referring to fig. 13, a flowchart of a method for ANC according to an embodiment of the present application is provided. It should be noted that the method is not limited by the specific order shown in fig. 13 and described below, and it should be understood that, in other embodiments, the order of some steps in the method may be interchanged according to actual needs, or some steps in the method may be omitted or deleted. The method comprises the following steps:
S1301, when the electronic device is in a stable noise scene, the electronic device plays, inside the noise reduction space, a second sound signal corresponding to the stable noise scene, the second sound signal including a sound signal masked by a first sound signal, the first sound signal being a sound signal outside the noise reduction space.
The electronic device may determine whether the current noise reduction scene is a stable noise scene or determine the second sound signal masked by the first sound signal in a similar or identical manner to the foregoing S501 to S502, or in a manner shown in fig. 6.
When the electronic device is in a stable noise scene, it is likely in a relatively simple environment with only a single or stable noise source, so the first sound signal is a stable sound signal. A second sound signal masked by the first sound signal can therefore be played: under the masking effect of sound, the second sound signal still cannot be perceived by the human ear, while its selectable frequency range and energy amplitude become larger. This improves the flexibility of generating the second sound signal and its anti-interference performance, which in turn improves the accuracy of subsequently determining the noise reduction coefficient and generating the fourth sound signal, and further improves the noise reduction effect.
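A minimal sketch of generating such a masked probe, assuming the frequency and level of the dominant stationary noise component have already been estimated; the 1.1 frequency offset and the 12 dB masking margin are illustrative assumptions rather than values from the present application.

```python
import numpy as np

def masked_probe(noise_freq_hz, noise_level_db, fs=48000, duration_s=0.5, margin_db=12.0):
    """Generate a probe tone close to the dominant stationary noise component,
    kept margin_db below the masker level so that it stays masked."""
    probe_freq = 1.1 * noise_freq_hz              # near the masker, inside its critical band
    probe_level_db = noise_level_db - margin_db   # below the masker level
    amp = 10 ** (probe_level_db / 20.0)           # dB -> linear amplitude (relative scale)
    t = np.arange(int(fs * duration_s)) / fs
    return amp * np.sin(2 * np.pi * probe_freq * t)
```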
In some embodiments, when a first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to the first frequency band, a first time period of the fifth sound signal in the first sound signal is greater than or equal to a time period threshold, and an energy fluctuation of a second energy amplitude corresponding to the fifth sound signal in the first time period is less than a fluctuation range threshold, the first sound signal includes a stable fifth sound signal, and the second sound signal includes a sound signal masked by the fifth sound signal.
In some embodiments, the first sound signal may include a plurality of frames of sub-signals, the duration of each frame of sub-signal may be a preset duration, the electronic device may determine a first number of frames of the sub-signal including the first frequency band, and the second energy amplitude may be an energy amplitude of the first frequency band in the sub-signal. When the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, the first frame number of the sub-signals comprising the fifth sound signal corresponding to the first frequency band is greater than or equal to the frame number threshold, and the energy fluctuation of the second energy amplitude in the sub-signals of the first frame number is less than the fluctuation range threshold, the first sound signal comprises a stable fifth sound signal, and correspondingly, the second sound signal comprises a sound signal masked by the fifth sound signal.
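A minimal frame-based sketch of this stability check; the frame representation, all thresholds, and the criterion for a frame containing the first frequency band are illustrative assumptions.

```python
import numpy as np

def has_stable_fifth_signal(frames, band_hz, fs=48000,
                            energy_th=1e-3, frame_count_th=20, fluct_th=0.2):
    """Return True if the framed first sound signal contains a stable fifth
    sound signal: overall energy above energy_th, at least frame_count_th
    frames with energy in band_hz, and small fluctuation of that band energy."""
    lo, hi = band_hz
    band_energy = []
    for frame in frames:                                  # frames: list of 1-D sample arrays
        spec = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
        band_energy.append(spec[(freqs >= lo) & (freqs <= hi)].sum())
    band_energy = np.array(band_energy)
    total_energy = sum(np.sum(f ** 2) for f in frames)    # stand-in for the first energy amplitude
    if total_energy < energy_th:
        return False                                      # quiet scene, handled separately
    present = band_energy > 0.01 * band_energy.max()      # frames where the band is active
    if present.sum() < frame_count_th:
        return False
    active = band_energy[present]
    fluctuation = (active.max() - active.min()) / (active.mean() + 1e-12)
    return fluctuation < fluct_th
```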
In some embodiments, when the current noise reduction scene is a stable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave, and/or a sound signal having an energy amplitude less than the hearing threshold.
S1302, the electronic device obtains a third sound signal inside the noise reduction space, where the third sound signal includes at least a portion of the second sound signal.
The manner in which the electronic device obtains the third sound signal in the noise reduction space may refer to the description related to S503, which is not described in detail herein.
S1303, the electronic device plays a fourth sound signal inside the noise reduction space, where the fourth sound signal is used to cancel part or all of the first sound signal.
The manner in which the electronic device plays the fourth sound signal in the noise reduction space based on the second sound signal and the third sound signal may refer to the related description in S504, which is not described in detail herein.
In this embodiment of the present application, when the electronic device is in a stable noise scene, it is likely in a relatively simple environment with only a single or stable noise source, so the first sound signal is a stable sound signal and a second sound signal masked by the first sound signal can be played. Under the masking effect of sound, the second sound signal still cannot be perceived by the human ear, which reduces the interference of the second sound signal to the user. The electronic device may further obtain a third sound signal inside the noise reduction space and, based on the second sound signal and the third sound signal, play a fourth sound signal for canceling the first sound signal inside the noise reduction space, thereby implementing ANC with reduced interference to the user. And because the interference of the ANC process to the user is reduced, real-time ANC can be realized and the noise reduction effect improved. In addition, selecting a sound signal masked by the first sound signal as the second sound signal enlarges the selectable frequency range and energy amplitude of the second sound signal, improving the flexibility of generating the second sound signal and its anti-interference performance, which in turn improves the accuracy of subsequently determining the noise reduction coefficient and generating the fourth sound signal, and further improves the noise reduction effect.
Referring to fig. 14, a flowchart of a method for ANC according to an embodiment of the present application is provided. It should be noted that the method is not limited by the specific order shown in fig. 14 and described below, and it should be understood that, in other embodiments, the order of some steps in the method may be interchanged according to actual needs, or some steps in the method may be omitted or deleted. The method comprises the following steps:
S1401, when the electronic device is in an unstable noise scene, the electronic device plays a second sound signal corresponding to the unstable noise scene inside the noise reduction space, where the second sound signal includes a sound signal with an energy amplitude below the hearing threshold.
The electronic device may determine, in a similar or identical manner to the foregoing S501-S502, or in a manner shown in fig. 6, whether the current noise reduction scene is an unstable noise scene, or determine the second sound signal.
When the electronic device is in an unstable noise scene, it may be in a relatively complex environment containing multiple noise sources. A sound signal in any frequency band whose energy amplitude is below the hearing threshold corresponding to that frequency band can therefore be selected as the second sound signal. This reduces the interference of the second sound signal to the user, enlarges the selectable frequency range of the second sound signal, and improves the flexibility of generating the second sound signal and its anti-interference performance, which in turn improves the accuracy of subsequently determining the noise reduction coefficient and generating the fourth sound signal, and further improves the noise reduction effect.
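A minimal sketch of choosing a sub-threshold probe; the threshold-in-quiet values and the 6 dB safety margin are rough illustrative numbers, not values from the present application, and playback calibration to absolute SPL is assumed to exist.

```python
import numpy as np

# Rough absolute hearing threshold (dB SPL) at a few frequencies, used only
# as an illustrative lookup; a real implementation would use a standard
# threshold-in-quiet curve and the device's playback calibration.
HEARING_THRESHOLD_DB = {125: 22.0, 250: 11.0, 500: 6.0, 1000: 4.0, 4000: -4.0, 12000: 12.0}

def subthreshold_probe(freq_hz=12000, fs=48000, duration_s=0.5, margin_db=6.0):
    """Generate a tone whose level sits margin_db below the assumed hearing
    threshold at freq_hz, so it should not be perceived on its own."""
    level_db = HEARING_THRESHOLD_DB[freq_hz] - margin_db
    amp = 10 ** (level_db / 20.0) * 20e-6          # dB SPL -> pascals (20 uPa reference)
    t = np.arange(int(fs * duration_s)) / fs
    return amp * np.sin(2 * np.pi * freq_hz * t)
```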
In some embodiments, the noise reduction scene indicated by the first sound signal is an unstable noise scene (i.e., the first sound signal does not include a stable fifth sound signal) in any of the following cases: the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, but the first sound signal does not include a fifth sound signal corresponding to the first frequency band; or the first energy amplitude is greater than or equal to the energy amplitude threshold and the first sound signal includes a fifth sound signal corresponding to the first frequency band, but the first duration of the fifth sound signal in the first sound signal is less than the duration threshold; or the first energy amplitude is greater than or equal to the energy amplitude threshold, the first sound signal includes a fifth sound signal corresponding to the first frequency band, and the first duration of the fifth sound signal in the first sound signal is greater than or equal to the duration threshold, but the energy fluctuation of the second energy amplitude corresponding to the fifth sound signal within the first duration is greater than or equal to the fluctuation range threshold. In these cases, the second sound signal includes a sound signal with an energy amplitude below the hearing threshold.
In some embodiments, the first sound signal may include a plurality of frames of sub-signals, the duration of each frame of sub-signal may be a preset duration, the electronic device may determine the first frame number of sub-signals that include the first frequency band, and the second energy amplitude may be the energy amplitude of the first frequency band within a sub-signal. The noise reduction scene indicated by the first sound signal is an unstable noise scene (i.e., the first sound signal does not include a stable fifth sound signal) in any of the following cases: the first energy amplitude corresponding to the first sound signal is greater than or equal to the energy amplitude threshold, but no frame of the sub-signals includes the fifth sound signal corresponding to the first frequency band; or the first energy amplitude is greater than or equal to the energy amplitude threshold, but the first frame number of sub-signals including the fifth sound signal corresponding to the first frequency band is less than the frame number threshold; or the first energy amplitude is greater than or equal to the energy amplitude threshold and the first frame number is greater than or equal to the frame number threshold, but the energy fluctuation of the second energy amplitude across the sub-signals of the first frame number is greater than or equal to the fluctuation range threshold. In these cases, the second sound signal includes a sound signal whose energy amplitude is below the hearing threshold.
In some embodiments, when the current noise reduction scene is an unstable noise scene, the second sound signal may further include at least one of an ultrasonic wave and an infrasonic wave.
S1402, the electronic device obtains a third sound signal inside the noise reduction space, where the third sound signal at least includes a portion of the second sound signal.
The manner in which the electronic device obtains the third sound signal in the noise reduction space may refer to the description related to S503, which is not described in detail herein.
S1403, the electronic device plays a fourth sound signal inside the noise reduction space, where the fourth sound signal is used to cancel part or all of the first sound signal, and the first sound signal is a sound signal outside the noise reduction space.
The manner in which the electronic device plays the fourth sound signal in the noise reduction space based on the second sound signal and the third sound signal may refer to the related description in S504, which is not described in detail herein.
In the embodiment of the application, when the electronic device is in an unstable noise scene, it may be in a relatively complex environment containing multiple noise sources, so a sound signal in any frequency band whose energy amplitude is below the hearing threshold corresponding to that frequency band can be selected as the second sound signal, which reduces the interference of the second sound signal to the user. The electronic device may further obtain a third sound signal inside the noise reduction space and, based on the second sound signal and the third sound signal, play a fourth sound signal for canceling the first sound signal inside the noise reduction space, thereby implementing ANC with reduced interference to the user. And because the interference of the ANC process to the user is reduced, real-time ANC can be realized and the noise reduction effect improved. In addition, selecting a sound signal whose energy amplitude is below the hearing threshold as the second sound signal enlarges the selectable frequency range of the second sound signal, improving the flexibility of generating the second sound signal and its anti-interference performance, which in turn improves the accuracy of determining the noise reduction coefficient and generating the fourth sound signal, and further improves the noise reduction effect.
Referring to fig. 15, a flowchart of a method for ANC according to an embodiment of the present application is provided. It should be noted that the method is not limited by the specific order shown in fig. 15 and described below, and it should be understood that, in other embodiments, the order of some steps in the method may be interchanged according to actual needs, or some steps in the method may be omitted or deleted. The method comprises the following steps:
S1501, when the electronic device is in a quiet scene, the electronic device plays a second sound signal corresponding to the quiet scene inside the noise reduction space, the second sound signal including at least one of an infrasonic wave and an ultrasonic wave.
The electronic device may determine whether the current noise reduction scene is a quiet scene in a similar or identical manner to the foregoing S501-S502, or in a manner shown in fig. 6.
Since there may be almost no noise sources in the environment in which the electronic device is located in a quiet scene, infrasonic and/or ultrasonic waves may be selected as the second sound signal to reduce the interference of the second sound signal to the user.
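A minimal sketch of such a quiet-scene probe mixing an ultrasonic and an infrasonic component; the frequencies, amplitudes, and sampling rate are illustrative assumptions, and the loudspeaker and microphone must actually support these bands.

```python
import numpy as np

def quiet_scene_probe(fs=96000, duration_s=0.5, ultrasonic_hz=21000.0, infrasonic_hz=15.0):
    """Mix components outside the nominal 20 Hz - 20 kHz audible range."""
    t = np.arange(int(fs * duration_s)) / fs
    return (0.05 * np.sin(2 * np.pi * ultrasonic_hz * t)
            + 0.05 * np.sin(2 * np.pi * infrasonic_hz * t))
```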
In some embodiments, when the first energy amplitude corresponding to the first sound signal is less than the energy amplitude threshold, the noise reduction scene indicated by the first sound signal is a quiet scene. In some embodiments, the first sound signal may include a plurality of frame sub-signals, and the duration of each frame sub-signal may be a preset duration, and the electronic device may determine the first energy amplitude based on the energy spectrum corresponding to each frame sub-signal, where the first energy amplitude may be an average energy amplitude of a plurality of frame sub-signals, or may be a sum of energy amplitudes of the plurality of frame sub-signals. Of course, in practical applications, the electronic device may determine the first energy magnitude in other ways.
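Taken together with the stable and unstable conditions described above, the scene decision can be sketched as follows; it reuses the has_stable_fifth_signal helper sketched earlier, and the band and thresholds are illustrative assumptions.

```python
import numpy as np

def classify_noise_scene(frames, band_hz=(10, 1000), energy_th=1e-3):
    """Coarse decision sketch: 'quiet' when the overall energy is below
    energy_th, 'stable' when a stable fifth sound signal is found in band_hz,
    'unstable' otherwise."""
    total_energy = sum(np.sum(f ** 2) for f in frames)   # stand-in for the first energy amplitude
    if total_energy < energy_th:
        return "quiet"
    if has_stable_fifth_signal(frames, band_hz, energy_th=energy_th):
        return "stable"
    return "unstable"
```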
S1502, the electronic device obtains a third sound signal inside the noise reduction space, where the third sound signal at least includes a portion of the second sound signal.
The manner in which the electronic device obtains the third sound signal in the noise reduction space may refer to the description related to S503, which is not described in detail herein.
In S1503, the electronic device plays a fourth sound signal in the noise reduction space, where the fourth sound signal is used to cancel part or all of the first sound signal, and the first sound signal is a sound signal outside the noise reduction space.
The manner in which the electronic device plays the fourth sound signal in the noise reduction space based on the second sound signal and the third sound signal may refer to the related description in S504, which is not described in detail herein.
In the embodiment of the application, because the environment in which the electronic device is located in a quiet scene has almost no noise sources, infrasonic and/or ultrasonic waves can be selected as the second sound signal, reducing the interference of the second sound signal to the user. The electronic device may also acquire a third sound signal in the noise reduction space and, based on the second sound signal and the third sound signal, play a fourth sound signal for canceling the first sound signal in the noise reduction space, thereby implementing ANC with reduced interference to the user. And because the interference of the ANC process to the user is reduced, real-time ANC can be realized and the noise reduction effect improved.
In some embodiments, the electronic device may perform ANC in real time, periodically, or when a preset condition is triggered, in accordance with the method as provided in fig. 5, 12, 13, 14, or 15.
If the electronic device performs ANC in real time according to the method provided in the embodiments of the present application, then whenever it is running or the ANC function is turned on, it may continuously execute the steps of the ANC method provided in fig. 5, 12, 13, 14, or 15 starting from S501, S1201, S1301, S1401, or S1501, and thus continuously determine the sound feature of the first sound signal or the scene in which the noise reduction space is currently located. When that sound feature or scene changes, the played second sound signal is updated, the latest leakage state data that best matches the current noise reduction space is obtained, and the played fourth sound signal is updated based on that leakage state data, thereby improving the matching degree between the fourth sound signal and the first sound signal and improving the ANC effect.
If the electronic device periodically performs ANC according to the method provided in the embodiments of the present application, then when it is running or the ANC function is turned on, it may execute the steps of the ANC method provided in fig. 5, 12, 13, 14, or 15 starting from S501, S1201, S1301, S1401, or S1501 until the fourth sound signal is played, then keep playing that fourth sound signal, and after a specific detection period has elapsed, execute the steps again from S501, S1201, S1301, S1401, or S1501 to play a new fourth sound signal, that is, to update the previously determined fourth sound signal. The detection period may be determined in advance by the electronic device; in some embodiments it may be 3 minutes or 5 minutes, although in practical applications it may be another duration, which is not limited in the embodiments of the present application. Periodically performing ANC reduces power consumption compared with performing ANC in real time, and further reduces the interference to the user caused by playing the second sound signal during ANC.
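A minimal sketch of the periodic variant, assuming a run_anc_once callable that covers the steps from scene detection through playback of the fourth sound signal; the 5-minute period is one of the example values mentioned above.

```python
import time

def run_anc_periodically(run_anc_once, detection_period_s=300):
    """Re-run the full detection/probing/coefficient update once per detection
    period; the previously generated fourth sound signal keeps playing in
    between updates."""
    while True:
        run_anc_once()
        time.sleep(detection_period_s)
```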
If the electronic device performs ANC according to the method provided in the embodiments of the present application when a preset condition is triggered, then when it is running or the ANC function is turned on and the preset condition is detected to be triggered, it may execute the steps of the ANC method provided in fig. 5, 12, 13, 14, or 15 starting from S501, S1201, S1301, S1401, or S1501 until the fourth sound signal is played. It then continues to play that fourth sound signal until the preset condition is triggered again, at which point it executes the steps again from S501, S1201, S1301, S1401, or S1501 to play a new fourth sound signal, that is, to update the previously determined fourth sound signal.
In some embodiments, because vibration, movement, or a change in gravity state of the electronic device may change the noise reduction space (for example, the posture in which the user wears the earphone may change), the preset trigger condition may include detecting that the electronic device vibrates, moves, or undergoes a change in gravity state, so that the ANC method provided in the embodiments of the present application is executed again and the played fourth sound signal is updated in time.
The electronic device can determine whether it has vibrated, moved, or undergone a change in gravity state through a motion sensor such as a gyroscope sensor or an acceleration sensor.
In some embodiments, the preset condition may include that, compared with the historical noise residual signal, the current noise residual signal contains a new frequency band, or the energy amplitude corresponding to a specific frequency band has increased. For example, suppose the energy amplitude at 100 Hz in the first sound signal is 90 dB and, after a previous ANC run, the residual energy amplitude at 100 Hz was 60 dB; if after the current ANC the residual energy amplitude at 100 Hz is 70 dB, which is greater than 60 dB, the noise reduction space may have changed and the sound leakage may have become more serious. The ANC method provided in the embodiments of the present application can therefore be executed again and the played fourth sound signal updated in time, improving the ANC effect.
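A minimal sketch of this trigger, assuming residual noise spectra in dB on a shared frequency axis; the 3 dB tolerance and the crude "new band" test are illustrative assumptions.

```python
import numpy as np

def anc_update_needed(current_db, history_db, freqs, band_hz=100.0, tol_db=3.0, new_band_db=20.0):
    """Return True if the residual at the monitored band rose by more than
    tol_db since the last update (e.g. 70 dB now versus 60 dB before), or if a
    band that previously carried negligible energy now exceeds new_band_db."""
    idx = int(np.argmin(np.abs(freqs - band_hz)))
    if current_db[idx] > history_db[idx] + tol_db:
        return True
    appeared = (history_db < new_band_db - 10.0) & (current_db > new_band_db)
    return bool(appeared.any())
```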
The specific frequency band may be determined in advance by the electronic device, and the specific frequency band is not limited to 100Hz, and the frequency range of the specific frequency band is not limited in the embodiment of the present application.
It should be noted that, in practical applications, the preset conditions are not limited to the above two. A related technician may determine at least one preset condition according to factors such as the usage scenario of the electronic device and the user portrait of the user group the electronic device is intended for, so that when any preset condition is triggered, the electronic device executes the steps of the ANC method provided in fig. 5, 12, 13, 14, or 15 from S501, S1201, S1301, S1401, or S1501 and plays the fourth sound signal. In this way the timing of performing the ANC provided in the embodiments of the present application is accurately controlled, which on the one hand minimizes the interference of the second sound signal played during ANC to the user, and on the other hand also improves the ANC effect.
It should be further noted that, as indicated above, the electronic device may perform ANC in real time, periodically, or when a preset condition is triggered according to the method provided in fig. 5, 12, 13, 14, or 15, so at a given moment it may be performing one or more steps of these methods simultaneously. For example, at a certain moment the electronic device may play both the second sound signal and a fourth sound signal generated based on another second sound signal played previously, while also acquiring the first sound signal used to determine the next second sound signal to be played.
Based on the same conception, the embodiment of the application also provides electronic equipment. Fig. 16 is a schematic structural diagram of an electronic device 1600 provided in an embodiment of the present application, and as shown in fig. 16, the electronic device 1600 provided in the embodiment includes: a memory 1610 and a processor 1620, the memory 1610 being for storing a computer program; the processor 1620 is configured to execute the method described in the above method embodiments when the computer program is called.
The electronic device 1600 provided in this embodiment may perform the above-mentioned method embodiments, and its implementation principle and technical effects are similar, and will not be described herein again.
Based on the same conception, the embodiment of the application also provides a chip system. The system-on-chip includes a processor coupled to a memory, the processor executing a computer program stored in the memory to implement the method described in the method embodiments above.
The chip system can be a single chip or a chip module formed by a plurality of chips.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method described in the above method embodiment.
Embodiments of the present application also provide a computer program product which, when run on an electronic device, causes the electronic device to execute the method described in the above method embodiments.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the method of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable storage medium may include at least: any entity or device capable of carrying computer program code to a photographing device/terminal apparatus, recording medium, computer memory, read-only memory (ROM), random access memory (random access memory, RAM), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, each embodiment is described with its own emphasis. For parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted as "when", "once", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced with equivalents, and such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (20)

1. A method of actively noise reducing ANC, comprising:
acquiring a first sound signal outside the noise reduction space;
in response to the first sound signal, playing a second sound signal inside the noise reduction space, wherein the second sound signal is a sound signal which is not perceived by human ears;
acquiring a third sound signal in the noise reduction space;
and playing a fourth sound signal inside the noise reduction space, wherein the fourth sound signal is used for eliminating part or all of the first sound signal.
2. The method of claim 1, wherein the second sound signal is different when the noise reduction scene indicated by the first sound signal is different.
3. The method according to claim 1 or 2, wherein the second sound signal comprises a sound signal masked by the first sound signal when the noise reduction scene indicated by the first sound signal is a steady noise scene.
4. A method according to claim 3, characterized in that the method further comprises:
when the first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal comprises a fifth sound signal corresponding to a first frequency band, a first time length of the fifth sound signal in the first sound signal is greater than or equal to a duration threshold, and energy fluctuation of the second energy amplitude corresponding to the fifth sound signal in the first time length is less than a fluctuation range threshold, the noise reduction scene is the stable noise scene.
5. The method of claim 4, wherein the second sound signal comprises a sound signal masked by the fifth sound signal.
6. The method of any of claims 1-5, wherein the second sound signal comprises a sound signal having an energy amplitude below a hearing threshold when the noise reduction scene indicated by the first sound signal is an unsteady noise scene.
7. The method of claim 6, wherein the method further comprises:
when the first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, and the first sound signal does not comprise a fifth sound signal corresponding to a first frequency band, the noise reduction scene is the unstable noise scene; or,
when a first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal comprises a fifth sound signal corresponding to a first frequency band, but a first duration of the fifth sound signal in the first sound signal is less than a duration threshold, the noise reduction scene is the unstable noise scene; or,
when the first energy amplitude corresponding to the first sound signal is greater than or equal to an energy amplitude threshold, the first sound signal comprises a fifth sound signal corresponding to a first frequency band, a first time length of the fifth sound signal in the first sound signal is greater than or equal to a duration threshold, and energy fluctuation of the second energy amplitude corresponding to the fifth sound signal in the first time length is greater than or equal to a fluctuation range threshold, the noise reduction scene is the unstable noise scene.
8. The method of claim 4, 5 or 7, wherein the first frequency band is greater than 10Hz and less than 1000Hz.
9. The method of any of claims 1-8, wherein the second sound signal comprises at least one of infrasonic and ultrasonic waves when the noise reduction scene indicated by the first sound signal is a quiet scene.
10. The method according to claim 9, wherein the method further comprises:
and when the first energy amplitude corresponding to the first sound signal is smaller than an energy amplitude threshold, the noise reduction scene is the quiet scene.
11. The method according to any one of claims 1-10, wherein playing a fourth sound signal inside the noise reduction space comprises:
determining a noise reduction coefficient based on the second sound signal and the third sound signal;
generating the fourth sound signal based on the noise reduction coefficient;
and playing the fourth sound signal.
12. The method of claim 11, wherein the determining a noise reduction coefficient based on the second sound signal and the third sound signal comprises:
determining a first secondary path transfer function based on the second sound signal and the third sound signal;
acquiring at least one second secondary path transfer function and leakage state data corresponding to each second secondary path transfer function;
determining leakage state data corresponding to the second secondary path transfer function with the smallest difference from the first secondary path transfer function as leakage state data corresponding to the first secondary path transfer function;
the noise reduction coefficient is determined based on leakage state data corresponding to the first secondary path transfer function.
13. A method of ANC comprising:
when in different noise reduction scenes, respectively playing, inside the noise reduction space, second sound signals corresponding to the different noise reduction scenes, wherein the second sound signals are sound signals which are not perceived by human ears;
acquiring a third sound signal in the noise reduction space, wherein the third sound signal at least comprises part of the second sound signal;
and playing a fourth sound signal inside the noise reduction space, wherein the fourth sound signal is used for eliminating part or all of the first sound signal from the outside of the noise reduction space.
14. The method of claim 13, wherein when the noise reduction scene is a steady noise scene, the second sound signal comprises a sound signal masked by the first sound signal.
15. The method of claim 14, wherein the second sound signal comprises a sound signal masked by a fifth sound signal when a first energy magnitude of the first sound signal is greater than or equal to an energy magnitude threshold, the first sound signal comprises a fifth sound signal corresponding to a first frequency band, a first time period of the fifth sound signal in the first sound signal is greater than or equal to a time period threshold, and an energy fluctuation of a second energy magnitude of the fifth sound signal in the first time period is less than a fluctuation range threshold.
16. The method of claim 15, wherein the first frequency band is greater than 10Hz and less than 1000Hz.
17. The method of any of claims 13-16, wherein when the noise reducing scene is an unsteady noise scene, the second sound signal comprises a sound signal having an energy magnitude below a hearing threshold.
18. The method of any of claims 13-17, wherein the second sound signal comprises at least one of infrasonic and ultrasonic when the noise reducing scene is a quiet scene.
19. An electronic device, comprising: a memory and a processor, the memory for storing a computer program; the processor is configured to perform the method of any of claims 1-18 when the computer program is invoked.
20. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any of claims 1-18.
CN202111398001.2A 2021-11-23 2021-11-23 Active noise reduction method and electronic equipment Pending CN116153281A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111398001.2A CN116153281A (en) 2021-11-23 2021-11-23 Active noise reduction method and electronic equipment
PCT/CN2022/127015 WO2023093412A1 (en) 2021-11-23 2022-10-24 Active noise cancellation method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111398001.2A CN116153281A (en) 2021-11-23 2021-11-23 Active noise reduction method and electronic equipment

Publications (1)

Publication Number Publication Date
CN116153281A true CN116153281A (en) 2023-05-23

Family

ID=86372419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111398001.2A Pending CN116153281A (en) 2021-11-23 2021-11-23 Active noise reduction method and electronic equipment

Country Status (2)

Country Link
CN (1) CN116153281A (en)
WO (1) WO2023093412A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105679300A (en) * 2015-12-29 2016-06-15 努比亚技术有限公司 Mobile terminal and noise reduction method
JP6197930B2 (en) * 2016-09-14 2017-09-20 ソニー株式会社 Ear hole mounting type sound collecting device, signal processing device, and sound collecting method
CN111599336B (en) * 2019-02-20 2023-04-07 上海汽车集团股份有限公司 Noise reduction system and method based on ultrasonic waves
CN110933555A (en) * 2019-12-19 2020-03-27 歌尔股份有限公司 TWS noise reduction earphone and noise reduction method and device thereof
CN112216300A (en) * 2020-09-25 2021-01-12 三一专用汽车有限责任公司 Noise reduction method and device for sound in driving cab of mixer truck and mixer truck
CN112270916A (en) * 2020-10-28 2021-01-26 江苏理工学院 Automobile noise suppression device and method based on automatic tracking

Also Published As

Publication number Publication date
WO2023093412A1 (en) 2023-06-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination