EP3672274A1 - Adaptive audio control device and method based on scenario identification - Google Patents

Adaptive audio control device and method based on scenario identification

Info

Publication number
EP3672274A1
Authority
EP
European Patent Office
Prior art keywords
ambient sound
sound signal
signal
user
pressure level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19741628.2A
Other languages
German (de)
English (en)
Other versions
EP3672274A4 (fr)
Inventor
Jian Zhao
Jiandan LIU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaoniao Tingting Technology Co Ltd
Original Assignee
Beijing Xiaoniao Tingting Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaoniao Tingting Technology Co Ltd filed Critical Beijing Xiaoniao Tingting Technology Co Ltd
Publication of EP3672274A1 publication Critical patent/EP3672274A1/fr
Publication of EP3672274A4 publication Critical patent/EP3672274A4/fr
Legal status: Withdrawn (current)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1783Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase handling or detecting of non-standard events or conditions, e.g. changing operating modes under specific operating conditions
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1783Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase handling or detecting of non-standard events or conditions, e.g. changing operating modes under specific operating conditions
    • G10K11/17837Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase handling or detecting of non-standard events or conditions, e.g. changing operating modes under specific operating conditions by retaining part of the ambient acoustic environment, e.g. speech or alarm signals that the user needs to hear
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1787General system configurations
    • G10K11/17879General system configurations using both a reference signal and an error signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/22Arrangements for obtaining desired frequency or directional characteristics for obtaining desired frequency characteristic only 
    • H04R1/222Arrangements for obtaining desired frequency or directional characteristics for obtaining desired frequency characteristic only  for microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/04Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1083Reduction of ambient noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/22Arrangements for obtaining desired frequency or directional characteristics for obtaining desired frequency characteristic only 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2410/00Microphones
    • H04R2410/05Noise reduction with a separate noise microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01Aspects of volume control, not necessarily automatic, in sound systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03Synergistic effects of band splitting and sub-band processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01Hearing devices using active noise cancellation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/07Use of position data from wide-area or local-area positioning systems in hearing devices, e.g. program or information selection

Definitions

  • the application relates to electroacoustic transduction technology, and in particular to an adaptive audio control device and method based on scenario identification.
  • audio playback devices with a passive noise cancellation/active noise cancellation function, for example noise-canceling headphones, have appeared, so as to eliminate the effect of noise on the user.
  • the inventor finds that eliminating noise alone cannot satisfy the user's requirements for playback effects; the user hopes that the audio playback device is more intelligent and can automatically adjust the playback effects to adapt to the current playback environment.
  • the equivalent continuous A-weighted sound level is usually used to evaluate environmental noise.
  • when the environmental noise is less than 50dBA, people think that the environment is relatively quiet; when the noise is greater than 80dBA, people feel the environment is noisy; when the noise reaches 120dBA, people find it unbearable.
  • when the noise is greater than 90dBA, the possibility of hearing impairment is significantly higher.
  • the disclosure aims to provide an adaptive audio control method based on scenario identification, so as to automatically regulate the playback effects according to a usage scenario of a user.
  • an adaptive audio control device based on scenario identification, which includes: an ambient sound acquisition microphone, an acceleration sensor, a location module, a control module, an audio signal volume adjustment module, an active noise cancellation module, and an ambient sound adjustment module. Output ends of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module are connected with a speaker respectively.
  • the control module includes a memory and a processor.
  • the memory stores a computer program. When executed by the processor, the computer program implements the following steps:
  • the ambient sound adjustment module includes any one or a combination of the following submodules: a wind noise suppression submodule, a voice enhancement submodule, a dynamic range control submodule, and an EQ processing submodule.
  • the operation that the usage scenario of the user is analyzed according to the acceleration data output by the acceleration sensor and the geographic location data output by the location module includes that:
  • the environment types include an indoor environment and a road environment;
  • the motion modes include any one of: a stationary mode, a walking mode, or a transportation mode.
  • if the movement speed is less than a first speed threshold and the cadence value is less than a first cadence value threshold, the user is in the stationary mode; if the movement speed is in a walking speed interval and the cadence value is in a walking cadence value interval, the user is in the walking mode; and if the movement speed is greater than a second speed threshold, the user is in the transportation mode.
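As an illustration of this decision logic, the following sketch classifies the motion mode from the movement speed and cadence value. The threshold names follow the text, but the numeric values are assumptions and not figures from the claims.
```python
FIRST_SPEED_THRESHOLD_KMH = 1.0          # "first speed threshold" (assumed value)
SECOND_SPEED_THRESHOLD_KMH = 30.0        # "second speed threshold" (30 km/h per the description)
FIRST_CADENCE_THRESHOLD = 0.5            # "first cadence value threshold", steps/s (assumed)
WALKING_SPEED_INTERVAL_KMH = (3.0, 7.0)  # walking speed interval (assumed)
WALKING_CADENCE_INTERVAL = (0.5, 2.5)    # walking cadence interval, steps/s (per the description)

def motion_mode(speed_kmh: float, cadence: float) -> str:
    """Classify the user's motion mode from movement speed and cadence value."""
    if speed_kmh < FIRST_SPEED_THRESHOLD_KMH and cadence < FIRST_CADENCE_THRESHOLD:
        return "stationary"
    if (WALKING_SPEED_INTERVAL_KMH[0] <= speed_kmh <= WALKING_SPEED_INTERVAL_KMH[1]
            and WALKING_CADENCE_INTERVAL[0] <= cadence <= WALKING_CADENCE_INTERVAL[1]):
        return "walking"
    if speed_kmh > SECOND_SPEED_THRESHOLD_KMH:
        return "transportation"
    return "undetermined"
```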
  • the device further includes a bone conduction microphone or an infrared proximity sensor.
  • the usage scenario of the user also includes a talking state of the user.
  • when executed by the processor, the computer program further implements the following step: it is determined, according to a signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
  • there are a plurality of ambient sound acquisition microphones, including microphones for acquiring the ambient sound at the real-time location of the user and microphones for acquiring the ambient sound heard at an ear of the user.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal includes that:
  • the operation that the dynamic range control submodule is controlled to perform the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that:
  • performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on a voice signal band and a honk signal band in the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that: it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module, and if the active noise cancellation module is enabled, a noise cancellation level of the active noise cancellation module is adjusted according to the sound pressure level of the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that: the wind noise suppression submodule and the dynamic range control submodule are controlled to be disabled.
  • the device is a headphone.
  • an adaptive audio control method based on scenario identification includes the following steps:
  • the ambient sound adjustment module includes any one or a combination of the following submodules: the wind noise suppression submodule, the voice enhancement submodule, the dynamic range control submodule, and the EQ processing submodule.
  • the operation that the usage scenario of the user is analyzed according to the acceleration data and the geographic location data includes that:
  • the environment types include the indoor environment and the road environment; the motion modes include any one of the followings: the stationary mode, the walking mode, and the transportation mode.
  • if the movement speed is less than the first speed threshold and the cadence value is less than the first cadence value threshold, the user is in the stationary mode; if the movement speed is in the walking speed interval and the cadence value is in the walking cadence value interval, the user is in the walking mode; and if the movement speed is greater than the second speed threshold, the user is in the transportation mode.
  • the audio playback device further includes the bone conduction microphone or the infrared proximity sensor.
  • the usage scenario of the user also includes the talking state of the user.
  • the method further includes the following step: it is determined, according to the signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
  • the operation that the ambient sound signal of the surrounding environment of the user is acquired includes that: the ambient sound signal at the real-time location of the user is acquired, and the ambient sound signal heard at an ear of the user is acquired.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the operation that the dynamic range control submodule is controlled to perform the dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that:
  • performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on the voice signal band and the honk signal band in the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that: it is determined, according to the sound pressure level of the ambient sound signal, whether to enable the active noise cancellation module, and if the active noise cancellation module is enabled, the noise cancellation level of the active noise cancellation module is adjusted according to the sound pressure level of the ambient sound signal.
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the operation that the working of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that: the wind noise suppression submodule and the dynamic range control submodule are controlled to be disabled.
  • the audio playback device is a headphone.
  • a method for controlling an audio playback device, which includes the following steps: the acceleration data of the user is acquired, and the usage scenario of the user is analyzed according to the acceleration data; the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed; and the audio signal volume, the active noise cancellation level, and the adjustment of the ambient sound signal of the audio playback device are controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • the method further includes the following steps: the geographic location data of the user is acquired; and the usage scenario of the user is analyzed according to the acceleration data and the geographic location data.
  • a system for controlling an audio playback device which includes: one or more processors; and a memory which is coupled to at least one of the one or more processors. There are computer program instructions stored in the memory. When the computer program instructions are executed by the at least one processor, the system performs the method for controlling an audio playback device.
  • the method includes the following operations: the acceleration data of the user is acquired, and the usage scenario of the user is analyzed according to the acceleration data; the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed; and the audio signal volume, the active noise cancellation level, and the adjustment of the ambient sound signal of the audio playback device are controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
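A toy end-to-end control step combining the quantities named above into module settings is sketched below; the field names, the gain formula and the level mapping are illustrative assumptions rather than the patented algorithm.
```python
from dataclasses import dataclass

@dataclass
class ControlDecision:
    volume_gain_db: float        # adjustment applied by the volume module
    anc_enabled: bool            # whether active noise cancellation runs
    anc_level: int               # 0 (off) .. 4 (strongest)
    ambient_submodules: tuple    # ambient sound submodules to enable

def control_step(scenario: str, spl_dba: float, contains_voice: bool) -> ControlDecision:
    """Map usage scenario, ambient SPL and signal content to module settings (toy values)."""
    anc_enabled = spl_dba >= 60.0
    anc_level = min(4, 1 + int((spl_dba - 60.0) // 10)) if anc_enabled else 0

    submodules = []
    if scenario.startswith("road"):
        submodules += ["wind_noise_suppression", "dynamic_range_control"]
    if contains_voice:
        submodules += ["voice_enhancement", "eq_compensation"]

    # Louder surroundings -> louder playback (arbitrary 0.5 dB per dBA above 50 dBA).
    volume_gain_db = 0.5 * (spl_dba - 50.0)
    return ControlDecision(volume_gain_db, anc_enabled, anc_level, tuple(submodules))
```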
  • when executed by the processor, a computer program product may implement the method for controlling an audio playback device described in the third aspect of the disclosure.
  • the adaptive audio control device and method based on scenario identification provided by the disclosure may analyze the usage scenario of the user, and automatically regulate the playback effects according to the usage scenario.
  • the disclosure presents an adaptive audio control device based on scenario identification.
  • the device may be a headphone, a loudspeaker box, or other electronic devices capable of playing an audio signal.
  • the device may carry out wired communication or wireless communication with terminal devices like a cell phone and a computer, so as to play the audio signal of the terminal devices.
  • the device may also store the audio signal, for example music, and the device may play the audio signal stored in it.
  • the device may also be set in the terminal device, as a part of the terminal device.
  • the adaptive audio control device based on scenario identification provided by the first embodiment of the disclosure includes: an ambient sound acquisition microphone 13, an acceleration sensor 11, a location module 12, a control module 21, an audio signal volume adjustment module 22, an active noise cancellation module 23, and an ambient sound adjustment module 24.
  • the acceleration sensor 11 is configured to acquire acceleration data of a user, and output the acceleration data to the control module 21.
  • the location module 12 is configured to acquire geographic location data of the user, and output the geographic location data to the control module 21.
  • the audio signal volume adjustment module 22 adjusts the volume of the audio signal, and the audio signal is input into the speaker 30 for playback.
  • the ambient sound acquisition microphone 13 is configured to pick up an ambient sound signal, and feed the picked-up ambient sound signal to the control module 21, the active noise cancellation module 23 and the ambient sound adjustment module 24 respectively. Output ends of the active noise cancellation module 23 and the ambient sound adjustment module 24 are connected with the speaker 30 respectively.
  • the control module 21 is connected with the audio signal volume adjustment module 22, the active noise cancellation module 23 and the ambient sound adjustment module 24 respectively, so as to control their working; for example, the control module 21 enables/disables a certain module or submodule, or adjusts parameters of a certain module or submodule.
  • the active noise cancellation module 23 is configured to generate a corresponding noise cancellation signal aiming at the ambient sound signal, and output the noise cancellation signal to the speaker 30.
  • the noise cancellation signal and the ambient sound signal cancel each other in the ear canal of the user, so as to reduce the impact of the ambient sound on the user listening to the audio signal.
  • the active noise cancellation module 23 may use a feedback noise cancellation manner, a feed-forward noise cancellation manner, or a noise cancellation manner combining feed-forward and feedback.
  • the active noise cancellation module 23 is enabled only when the sound pressure level of the ambient sound reaches 60dBA.
  • the active noise cancellation module 23 may be set with various noise cancellation levels; for example, when the sound pressure level of the ambient sound reaches 60dBA, 70dBA, 80dBA and 90dBA respectively, each of these corresponds to a noise cancellation level. The higher the sound pressure level of the ambient sound, the higher the noise cancellation level.
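A minimal sketch of such a level mapping, using the 60/70/80/90 dBA thresholds mentioned above; the function name and the integer level values are assumptions.
```python
def anc_level(spl_dba: float) -> int:
    """Return 0 when ANC should stay off (below 60 dBA), otherwise a level
    that increases with the ambient sound pressure level."""
    level = 0
    for threshold in (60.0, 70.0, 80.0, 90.0):
        if spl_dba >= threshold:
            level += 1
    return level
```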
  • the ambient sound adjustment module 24 is configured to adjust the ambient sound signal, and output the adjusted ambient sound signal to the speaker 30.
  • the ambient sound adjustment module 24 includes the following submodules: a wind noise suppression submodule 241, a voice enhancement submodule 242, a dynamic range control submodule 243, and an EQ processing submodule 244.
  • the wind noise suppression submodule 241 is mainly configured to filter the wind noise in the ambient sound signal.
  • wind noise mainly concentrates in very low frequency bands. Once strong wind noise is detected, different filters may be applied so as to reduce the impact of the wind noise on the user listening to the audio signal.
  • the wind noise suppression submodule 241 may be disabled.
  • the voice enhancement submodule 242 is mainly configured to enhance the voice part in the ambient sound signal, suppress and reduce a noise interference, and improve a signal-to-noise ratio of the voice part, so that the user can hear the outside voice more clearly.
  • when the user is in a talking state, the voice enhancement submodule 242 is enabled.
  • the voice enhancement submodule 242 may perform enhancement processing to a voice signal in the ambient sound signal and perform suppression processing to an ambient noise, thereby realizing a voice enhancement function.
  • the dynamic range control submodule 243 is mainly configured to perform dynamic range adjustment on the ambient sound signal; for example, some impulse sounds may be compressed first and then fed to the headphone, so as to avoid severe distortion at the headphone.
  • the dynamic range control submodule 243 is always in an enabled state in all circumstances, so as to prevent a burst sound from startling the user or damaging the user's hearing.
  • when the user is in the outdoor environment, the dynamic range control submodule 243 must be enabled; when the user is in the indoor environment, because there are relatively few burst sounds indoors, the dynamic range control submodule 243 may be disabled.
  • the EQ processing submodule 244 is mainly configured to enhance or attenuate different frequency bands of the ambient sound, so as to optimize how the ambient sound is heard. In a specific example, if parts of the ambient sound need to be heard, the EQ processing submodule 244 is enabled to perform compensation enhancement on those frequency bands of the ambient sound.
  • FIG. 2 illustrates an adaptive audio control device based on scenario identification provided by another embodiment of the disclosure.
  • the embodiment of FIG. 2 has all the structures and functions provided by the embodiment of FIG. 1 .
  • the main difference is that the device in the embodiment of FIG. 2 further includes a bone conduction microphone 14.
  • the output end of the bone conduction microphone 14 is connected with the control module 21.
  • FIG. 3 illustrates an adaptive audio control device based on scenario identification provided by yet another embodiment of the disclosure.
  • the embodiment of FIG. 3 has all the structures and functions provided by the embodiment of FIG. 1 .
  • the main difference is that the device in the embodiment of FIG. 3 further includes an infrared proximity sensor 15 towards the front of the user.
  • the output end of the infrared proximity sensor 15 is connected with the control module 21.
  • the ambient sound adjustment module 24 includes the following submodules: the wind noise suppression submodule 241, the voice enhancement submodule 242, the dynamic range control submodule 243, and the EQ processing submodule 244. In other embodiments, the ambient sound adjustment module 24 may include any one or a combination of the above submodules, or include other submodules.
  • the device may also be equipped with a passive noise cancellation structure made of a sound insulating material.
  • passive noise cancellation is physical noise cancellation, using a shell or an earmuff to block outside noise from entering the ear canal. This passive noise cancellation method has a relatively good effect on medium-high frequency noise above 1kHz.
  • the device may also be equipped with a manual volume adjustment device, a manual noise cancellation mode switch device, a manual ambient sound adjustment device, and other structures, so as to provide more selection modes for the user.
  • the left and right headphones are respectively equipped with the ambient sound acquisition microphones.
  • for example, only the left headphone is equipped with the ambient sound acquisition microphone, or only the right headphone is equipped with the ambient sound acquisition microphone.
  • a microphone is set on the headphone shell and configured to acquire the sound of surrounding environment of the user.
  • a microphone is set in the headphone and configured to acquire the ambient sound heard at an ear of the user.
  • the control module 21 includes a memory and a processor.
  • the memory stores a computer program. When executed by the processor, the computer program implements the following steps.
  • the usage scenario of the user may be analyzed according to acceleration data output by an acceleration sensor.
  • geographic location data may also be acquired by the location module 12, and the usage scenario of the user is analyzed by means of the acceleration data and the geographic location data of the user.
  • a sound pressure level of an ambient sound signal acquired by the ambient sound acquisition microphone 13 is calculated, and an energy distribution and a spectral distribution of the ambient sound signal are analyzed.
  • Components of the ambient sound may be obtained by analyzing the energy distribution and the spectral distribution of the ambient sound signal, for example, whether the ambient sound contains a voice component, a warning sound component like an alarm honk, a wind noise component, and so on, and the energy of these components.
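A simplified analysis of one ambient-sound frame is sketched below; it estimates an (unweighted) sound pressure level and the energy distribution over frequency. A real implementation would add A-weighting and a calibrated microphone reference, which are omitted here.
```python
import numpy as np

def analyze_ambient_frame(frame: np.ndarray, fs: int, p_ref: float = 2e-5):
    """Return an SPL estimate in dB and the power spectrum of one frame.

    `frame` is assumed to be calibrated to sound pressure in pascals; the
    A-weighting needed for a true dBA figure is not applied in this sketch.
    """
    rms = np.sqrt(np.mean(frame.astype(float) ** 2))
    spl_db = 20.0 * np.log10(max(rms, 1e-12) / p_ref)

    window = np.hanning(len(frame))
    power_spectrum = np.abs(np.fft.rfft(frame * window)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return spl_db, freqs, power_spectrum
```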
  • the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • the control module 21 may automatically adjust a noise cancellation parameter of the active noise cancellation module 23 according to the usage scenario.
  • the active noise cancellation module 23 is set with a plurality of noise cancellation modes in advance, and each noise cancellation mode corresponds to different noise cancellation parameters.
  • the control module 21 automatically adjusts the noise cancellation mode of the active noise cancellation module 23 according to the usage scenario, so as to achieve different noise cancellation levels or effects.
  • the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal. That is, the control module 21 comprehensively considers the sound pressure level of the ambient sound signal, the components of the ambient sound and the energy of each component, and controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24, so that the audio control adapts to the usage scenario of the user; adaptive audio control according to the usage scenario of the user, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal is thereby realized.
  • the operation that the usage scenario of the user is analyzed at S101 includes the following operations.
  • an environment type of the user is determined according to the geographic location data.
  • the environment types include an indoor environment and a road environment.
  • a movement speed of the user is calculated according to the geographic location data, and a cadence value of the user is calculated according to the acceleration data.
  • a motion mode of the user is determined according to the movement speed and the cadence value.
  • the motion modes may include any one of the followings: a stationary mode, a mode of walking on road, and a transportation mode.
  • the motion modes may include a fitness mode.
  • the fitness mode contains running, cycling and other fitness methods.
  • the usage scenario of the user may be analyzed only according to the acquired acceleration data of the user instead of by acquiring the geographic location data.
  • the motion mode of the user may be determined only according to the cadence value of the user.
  • it is possible to calculate the movement speed of the user from the geographic location data alone, without using the geographic location data to determine the environment type.
  • the usage scenario of the user may also include a talking state of the user.
  • the control module 21 determines whether the user is in the talking mode according to a signal output by the bone conduction microphone 14 or the infrared proximity sensor 15.
  • the usage scenario referred to in the embodiments of the disclosure at least includes the current motion mode of the user. Furthermore, the usage scenario may also include the environment type of the user and/or the talking state of the user, that is, whether the user is in the talking mode.
  • the control module 21 may determine the environment type of the user according to the geographic location data.
  • the environment types include the indoor environment and the road environment.
  • the location module 12 may include a GPS module or a Beidou module, for example.
  • the location module first acquires information of the specific real-time location of the user, and then determines the environment type of the user according to the information of the specific real-time location.
  • the environment types may also be divided in more detail, so as to achieve a more flexible and intelligent audio control effect.
  • the outdoor environment type is divided into a road environment type and a non-road outdoor environment type.
  • the non-road outdoor environment type is divided into an open-air trade and catering fair type, an outdoor park and green space type, and so on.
  • the environment types may be divided into the following types:
  • the environment types in the embodiments of the disclosure may be divided into “indoor” and “outdoor”.
  • the "outdoor” environment type may further be subdivided into "outdoor sports ground”, “outdoor park and green space” and “outdoor fair”, for example.
  • the environment type of the user may be determined according to the selection of the user.
  • the specific motion mode of the user may also be determined according to the geographic location data in combination with the energy distribution and the spectral distribution of the ambient sound signal; for example, it can be accurately determined that the user is in the outdoor environment in combination with the geographic location data after determining that there is a very strong wind noise signal included in the ambient sound signal according to the energy distribution and the spectral distribution of the ambient sound signal.
  • the control module 21 may calculate the movement speed of the user according to the geographic location data, and calculate the cadence value of the user according to the acceleration data.
  • the motion mode of the user is determined according to the movement speed and the cadence value.
  • the running speeds of cars, ships, trains and other transports are usually greater than 30km/h.
  • the second speed threshold may be set to 30km/h. For example, if the monitored movement speed of the user is about 60km/h, it can be determined that the user is taking a transport.
  • the intervals of the movement speed and the cadence value may also be divided in more detail, so as to determine the movement state of the user in detail.
  • the motion mode of the user may also be divided in more detail; for example, the motion modes may also be divided into a stationary mode, a taking-a-walk mode, a fast-walking mode, a running mode, a cycling mode, and so on.
  • if the cadence value of the user is in the interval of 2.5steps/s to 5steps/s, it is determined that the user is in the running mode.
  • the motion modes in the present embodiment may be divided into “motion” and “non-motion”.
  • the "motion” mode may also be further subdivided into “running”, “swimming” and “cycling” for example.
  • the specific motion mode of the user may be determined according to the selection of the user or the output of the related sensor.
  • the usage scenario of the user may be analyzed only according to the acquired acceleration data of the user instead of by acquiring the geographic location data.
  • the motion mode of the user may be determined only according to the cadence value of the user.
  • the control module 21 may determine whether the user is in the talking state according to whether the bone conduction microphone 14 picks up a voice signal.
  • the control module 21 may determine, according to the signal output by the infrared proximity sensor 15, whether there are other people within a certain distance range in front of the user; if there are, it is determined that the user is in the talking state. Alternatively, if there are people in front of the user, the control module 21 may determine whether the user is in the talking state by also taking the determined environment type and motion mode into account, for example, whether the user is at an open-air catering fair.
  • the "usage scenario” referred to in the embodiments of the disclosure is a composite scenario.
  • the "usage scenario" at least includes the environment type of the user and the current motion mode of the user, and may further include the talking state of the user. For example, if the environment type of the user is "outdoor" and the motion mode is "motion", the "usage scenario" of the user is "outdoor motion". If the environment type of the user is "indoor" and the motion mode is "motion", the "usage scenario" of the user is "indoor motion". If the environment type of the user is "indoor", the motion mode is "static", and the talking state is "in the talking mode", the "usage scenario" of the user is "indoor static talking".
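A trivial sketch of composing such a usage scenario label; the string values mirror the examples above and are otherwise arbitrary.
```python
def usage_scenario(environment_type: str, motion_mode: str, talking: bool = False) -> str:
    """Combine environment type, motion mode and talking state into one label."""
    parts = [environment_type, motion_mode]
    if talking:
        parts.append("talking")
    return " ".join(parts)

# usage_scenario("indoor", "static", talking=True) -> "indoor static talking"
```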
  • the usage scenario of the user may be estimated according to the acceleration data. For example, if the cadence value of the user is in the interval of 2.5steps/s to 5steps/s, it is determined that the user is in the running mode and in the road environment.
  • the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal.
  • the sound pressure level of the ambient sound signal here may use the equivalent continuous A sound level.
  • the usage scenario includes the current environment type of the user and the current motion mode of the user.
  • the function F(t) is defined to describe the energy distribution and the spectral distribution of the 20Hz-20kHz ambient sound at the position where the user is at a certain moment.
  • F(t) further includes F0(t) and Q(t).
  • F0(t) represents the frequency point corresponding to the current maximum noise peak, and Q(t) represents the current quality factor of the ambient sound.
  • the greater the value of Q, the more concentrated the energy distribution of the ambient sound and the narrower its frequency content; correspondingly, the noise environment contains non-stationary or burst impulse noise, such as a honk, a knocking noise, an impact sound, or other burst sounds.
  • the smaller the value of Q, the more widely the ambient sound is spread across frequency bands and the more uniform the energy distribution of the noise; the corresponding noise environment then consists of relatively stable, steady-state noise; for example, in a restaurant at dinner time, the background noise is mainly made by talking or light collisions of tableware, and its F0 lies mainly between 200Hz and 300Hz.
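One possible way to estimate F0(t) and Q(t) from a spectrum is sketched below; the text does not define Q precisely, so the half-power-bandwidth definition used here is an assumption.
```python
import numpy as np

def f0_and_q(frame: np.ndarray, fs: int):
    """Estimate F0 (frequency of the largest spectral peak) and Q, taken here
    as F0 divided by the peak's half-power bandwidth."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)

    k = int(np.argmax(spectrum))
    f0 = freqs[k]
    half_power = spectrum[k] / np.sqrt(2.0)

    # Expand left and right until the spectrum drops below the half-power level.
    lo, hi = k, k
    while lo > 0 and spectrum[lo] > half_power:
        lo -= 1
    while hi < len(spectrum) - 1 and spectrum[hi] > half_power:
        hi += 1

    bin_width = freqs[1] - freqs[0]
    bandwidth = max(freqs[hi] - freqs[lo], bin_width)
    return f0, f0 / bandwidth
```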
  • the control module 21 determines the usage scenario of the user and the level of the ambient sound according to the thresholds and the queried or received real-time values of various sensor modules (not limited to the ambient sound acquisition microphone 13, the acceleration sensor 11, the location module 12, the bone conduction microphone 14/the infrared proximity sensor 15, and so on), and acquires the energy distribution and the spectral distribution situation of the ambient sound, namely obtaining P(t), M(t), L(t), F(t) and S(t).
  • the control module 21 queries the function Action(t) in real time, automatically generates control instructions according to the variables of Action(t), and sends the corresponding instructions to the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 respectively, so that each module makes a response matching the current scenario and ambient sound signal, thereby automatically adjusting the playback effects to adapt to the current playback environment.
  • the intervals to which the sound pressure level of the ambient sound signal belongs may include the followings:
  • intervals to which the sound pressure level of the ambient sound signal belongs may also be subdivided according to practical applications, and not completely limited to this definition.
  • the intervals to which the movement speed of the user belongs may include the followings, for example:
  • the intervals to which the movement speed of the user belongs may be divided in more detail, so as to help the control module 21 to determine accurately the usage scenario of the user in combination with the cadence value and the environment type of the user.
  • the intervals of the cadence value of the user may include the followings:
  • the intervals to which the cadence value belongs may also be subdivided according to practical applications, and not completely limited to this definition.
  • the motion modes (including the used transports) of the user may be comprehensively determined according to the specific environment type of the user, the interval of the movement speed of the user and the interval of the cadence value of the user; if the energy distribution and the spectral distribution of the ambient sound signal is further considered, the motion mode of the user may be determined more accurately.
  • the above functions also show that the embodiments of the disclosure may comprehensively analyze the geographic location data, the movement speed and the cadence value of the user, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, thereby realizing automatic adjustment of the playback effects according to the usage scenario.
  • the first usage scenario is that the user is in the road environment and in the walking mode. It is easy to understand that in this usage scenario, the motion mode of the user may be estimated from the acquired acceleration data alone, and the usage scenario of the user is then estimated. For example, if the acceleration data shows that the current cadence value of the user is in the interval of 0.5 steps/s to 2.5 steps/s, it is determined that the user is in the walking mode and in the road environment.
  • in this scenario, the ambient sounds are mainly road traffic noise and low-frequency ambient noise such as wind noise of varying intensity.
  • F0 of the ambient sound signal is usually near 100Hz, and the value of Q is relatively small, that is, the low-frequency noise is distributed over a relatively wide band.
  • the sound level intensities vary depending on traffic conditions of different periods of time.
  • the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including: the wind noise suppression submodule 241 is controlled to perform suppressive filtering to a wind noise signal in the ambient sound signal.
  • the wind noise suppression submodule 241 may be a second-order high-pass filter whose cut-off frequency f0 is 300Hz, for example.
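A sketch of such a filter is shown below; a Butterworth design is assumed here, since the text only fixes the order (second) and the cut-off frequency (300Hz).
```python
import numpy as np
from scipy.signal import butter, lfilter

def suppress_wind_noise(frame: np.ndarray, fs: int, cutoff_hz: float = 300.0) -> np.ndarray:
    """Apply a second-order high-pass filter with a 300 Hz cut-off to remove
    low-frequency wind noise from an ambient-sound frame."""
    b, a = butter(N=2, Wn=cutoff_hz, btype="highpass", fs=fs)
    return lfilter(b, a, frame)
```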
  • it is monitored whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule 242 is triggered to perform enhancement processing on the voice signal in the ambient sound signal. That is, in the first usage scenario, the voice enhancement submodule 242 is in a standby mode, and may be woken by a voice signal detected by the control module 21 in real time.
  • the dynamic range control submodule 243 is controlled to perform dynamic range adjustment on the ambient sound signal according to the sound pressure level of the ambient sound signal.
  • when the sound pressure level of the ambient sound signal is less than or equal to 40dBA, the ambient sound basically does not include useful information, and light amplification processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 40dBA and less than or equal to 50dBA, selective amplification processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 50dBA and less than or equal to 60dBA, amplification and reduction processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 60dBA, it is determined that the environment is relatively noisy, and attenuation processing is performed on the ambient sound signal.
  • in this way, the user can enjoy music while maintaining a certain ability to monitor and sense the external environment while moving.
  • the division ( ⁇ 40dBA, 40dBA-50dBA, 50dBA-60dBA, >60dBA) of the interval of the sound pressure level of the ambient sound signal is just an example.
  • the division of the interval may be adjusted according to the actual condition.
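The interval logic above can be sketched as a simple gain table; the boundaries (40/50/60 dBA) come from the text, while the gain values themselves are placeholders.
```python
def drc_gain_db(spl_dba: float) -> float:
    """Choose a dynamic-range-control gain for the ambient signal (toy values)."""
    if spl_dba <= 40.0:
        return 3.0    # little useful information: light amplification
    if spl_dba <= 50.0:
        return 1.5    # selective amplification
    if spl_dba <= 60.0:
        return 0.0    # mixed amplification/reduction around unity
    return -6.0       # relatively noisy environment: attenuate
```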
  • the EQ processing submodule 244 is controlled to perform the EQ compensation processing on the ambient sound signal, and output the ambient sound signal to the speaker 30 for playback.
  • the EQ compensation processing is performed to the voice signal band and the honk signal band in the ambient sound signal.
  • a noise cancellation level of the active noise cancellation module 23 is automatically adjusted according to the sound pressure level of the ambient sound signal.
  • the greater the sound pressure level of the ambient sound signal the higher the noise cancellation level of the active noise cancellation module 23, and the greater the degree of the active noise cancellation.
  • a noise cancellation signal generated by the active noise cancellation module 23 is output to the speaker 30.
  • the control module 21 may analyze whether there is a certain warning prompt in the ambient sound signal according to the energy distribution and the spectral distribution of the ambient sound signal. For example, the ambient sound acquisition microphone 13 picks up ambient noise from t to t1. If frequency-domain analysis finds that pulse signals with frequencies of 500Hz-1500Hz and a quality factor Q much greater than 1 appear in this period of time, discontinuously or continuously, and the average energy of the pulse signals is more than 10dB higher than that of the previous period, it is determined that the ambient sound signal contains a warning sound that the user needs to be aware of.
  • the control module 21 controls the active noise cancellation module 23 to perform active noise cancellation on the part of the ambient sound signal other than the warning prompt, and controls the dynamic range control submodule 243 to perform amplification processing on the warning prompt in the ambient sound signal, so as to ensure the safety and alertness of the user.
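A toy detector for the warning-sound condition described above; the 500Hz-1500Hz band, "Q much greater than 1" and the 10dB energy rise come from the text, while "much greater than 1" is approximated here as Q > 5.
```python
def warning_sound_detected(peak_freq_hz: float,
                           q_factor: float,
                           band_energy_db: float,
                           previous_band_energy_db: float) -> bool:
    """Flag an impulsive 500-1500 Hz component that is much louder than the
    previous analysis period."""
    in_band = 500.0 <= peak_freq_hz <= 1500.0
    impulsive = q_factor > 5.0                       # approximation of "Q >> 1"
    louder = (band_energy_db - previous_band_energy_db) > 10.0
    return in_band and impulsive and louder
```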
  • the working parameters of the audio signal volume adjustment module 22 are controlled according to the sound pressure level of the ambient sound signal reaching the speaker 30, so that the sound level intensities of the audio signal reaching the speaker 30 and the ambient sound signal reaching the speaker 30 keep the preset proportion.
  • the audio signal volume may be automatically controlled to become high, that is, when the outside environment is relatively noisy, the audio signal volume is turned up.
  • the audio signal volume may be automatically controlled to become low, that is, when the outside environment is relatively quiet, the audio signal volume is turned down, so as to ensure the hearing of the user.
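A sketch of keeping the playback level in a fixed relationship to the ambient level at the speaker; the offset and clamping range are assumptions, since the text only states that a preset proportion between the two sound level intensities is maintained.
```python
def playback_level_db(ambient_at_speaker_db: float,
                      preset_offset_db: float = 6.0,
                      min_level_db: float = 40.0,
                      max_level_db: float = 85.0) -> float:
    """Target playback level: the ambient level plus a preset offset, clamped
    to a safe listening range (all values illustrative)."""
    target = ambient_at_speaker_db + preset_offset_db
    return min(max(target, min_level_db), max_level_db)
```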
  • the second usage scenario is that the user is in the road environment and in the transportation mode. In this case, the control module 21 controls the working of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including: it is monitored whether the ambient sound signal contains a voice signal; if the ambient sound signal contains a voice signal, the voice enhancement submodule 242 is triggered to perform enhancement processing on the voice signal in the ambient sound signal, and the EQ processing submodule 244 is triggered to perform EQ compensation processing on the voice signal in the ambient sound signal. That is, in the second usage scenario, the voice enhancement submodule 242 and the EQ processing submodule 244 are in the standby mode, and may be woken by a voice signal detected by the control module 21 in real time.
  • the active noise cancellation module 23 is controlled to perform the active noise cancellation processing on the ambient sound signal according to a highest noise cancellation level. Or, the active noise cancellation module 23 is controlled to determine, according to the sound pressure level of the ambient sound signal, whether to be enabled, and if the active noise cancellation module 23 is enabled, the noise cancellation level of the active noise cancellation module 23 is automatically adjusted according to the sound pressure level of the ambient sound signal.
  • the working parameters of the audio signal volume adjustment module 22 are controlled according to the sound pressure level of the ambient sound signal reaching the speaker 30, so that the sound level intensities of the audio signal reaching the speaker 30 and the ambient sound signal reaching the speaker 30 keep the preset proportion.
  • the audio signal volume may be automatically controlled to become high, that is, when the outside environment is relatively noisy, the audio signal volume is turned up.
  • the audio signal volume may be automatically controlled to become low, that is, when the outside environment is relatively quiet, the audio signal volume is turned down, so as to ensure the hearing of the user.
  • the wind noise suppression submodule 241 is controlled to be disabled.
  • for the dynamic range control submodule 243: when the user is in the road environment and in the transportation mode, it is monitored whether the sound pressure level of the ambient sound signal is greater than a preset upper limit of the sound pressure level or less than a preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit, the dynamic range control submodule 243 is triggered to perform attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit, the dynamic range control submodule 243 is triggered to perform amplification processing on the ambient sound signal.
  • the upper limit of the sound pressure level is, for example, 60 dBA.
  • the lower limit of the sound pressure level is, for example, 40 dBA.
  • regarding the transportation mode: when it is determined that the user is in the transportation mode, it is possible to further determine which means of transport the user is taking. For example, the user may be determined to be cycling, taking a flight, taking a train, or taking a car according to the environment type, the altitude data in the geographic location data, the movement speed, and the cadence value. For example, if the movement speed of the user reaches 250 km/h and the user is on a trunk railway, it can be determined that the user is taking a high-speed train, as in the sketch below.
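A sketch of the subdivision into specific means of transport. Only the 250 km/h trunk-railway rule appears in the text above; the altitude, speed, and cadence thresholds for the other branches are placeholder assumptions.

```python
def classify_transport(speed_kmh, altitude_m, cadence_spm, on_trunk_railway=False):
    """Subdivide the transportation mode from speed, altitude, cadence and map data.
    Only the 250 km/h trunk-railway rule comes from the text; other thresholds
    are illustrative assumptions."""
    if speed_kmh >= 250 and on_trunk_railway:
        return "high_speed_train"
    if altitude_m > 3000 and speed_kmh > 400:
        return "flight"
    if 10 <= speed_kmh <= 40 and cadence_spm > 40:
        return "cycling"            # pedalling cadence visible in the acceleration data
    if speed_kmh > 60:
        return "car_or_train"
    return "unknown"
```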
  • the control module 21 may set a specific control manner of the active noise cancellation module 23 and the ambient sound adjustment module 24 according to the features of the ambient sounds corresponding to the subdivided means of transport. For example, there are many honks when taking a car, while it is relatively quiet in a high-speed train, so the noise cancellation level of the active noise cancellation module 23 may be set to a relatively low level when the user is taking a high-speed train.
  • the control module 21 may set the specific control manner of the active noise cancellation module 23 and the ambient sound adjustment module 24 according to the features of the ambient sounds corresponding to the subdivided means of transport.
  • the user may talk with a companion, and there may be an external voice reminder, for example, a voice warning about danger or a reminder of an arriving vehicle in the second usage scenario; therefore, in the two usage scenarios, the voice enhancement submodule 242 may be triggered to work by a voice signal detected in real time in the ambient sound signal.
  • the third usage scenario is that the user is in the indoor environment (for example, a residence, administrative offices for education, medical treatment or research, or indoor areas of the catering trade and business) and in the stationary mode and the talking mode. The control module 21 controls the operation of the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal, including:
  • the voice enhancement submodule 242 is controlled to perform the enhancement processing on the voice signal in the ambient sound signal.
  • the EQ processing submodule 244 is controlled to perform the EQ compensation processing on the voice signal band in the ambient sound signal, and to output the ambient sound signal to the speaker 30 for playback.
  • the wind noise suppression submodule 241 and the dynamic range control submodule 243 are controlled to be disabled.
  • the active noise cancellation module 23 is controlled either to be disabled or to perform the active noise cancellation processing on the ambient sound signal.
  • the audio signal volume adjustment module 22 is controlled to turn down the volume or to stop playing the audio signal. These decisions are summarized in the sketch below.
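The control decisions for the second and third usage scenarios can be summarized as a configuration table, sketched below. The dictionary keys and values are editorial shorthand for the behavior described in the preceding paragraphs, not identifiers from the disclosure.

```python
# Editorial summary of the module settings for two of the identified scenarios.
SCENARIO_POLICIES = {
    "road_transportation": {
        "wind_noise_suppression_241": "disabled",
        "voice_enhancement_242": "standby, woken by detected voice",
        "dynamic_range_control_243": "triggered by preset SPL limits (e.g. 40/60 dBA)",
        "eq_processing_244": "voice band, woken by detected voice",
        "active_noise_cancellation_23": "highest level, or auto by ambient SPL",
        "audio_volume_22": "keep preset proportion with ambient level",
    },
    "indoor_stationary_talking": {
        "wind_noise_suppression_241": "disabled",
        "voice_enhancement_242": "enabled",
        "dynamic_range_control_243": "disabled",
        "eq_processing_244": "voice band, output to speaker 30",
        "active_noise_cancellation_23": "disabled, or ANC on ambient signal",
        "audio_volume_22": "turn down or stop playback",
    },
}

def policy_for(scenario: str) -> dict:
    """Look up the control decisions for an identified usage scenario."""
    return SCENARIO_POLICIES[scenario]
```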
  • the adaptive audio control device in the embodiments of the disclosure may have a plurality of ambient sound acquisition microphones 13, including a microphone arranged on the headphone shell and configured to acquire the sound of the surrounding environment of the user, and a microphone arranged inside the headphone and configured to acquire the ambient sound heard at an ear of the user.
  • using a plurality of microphones in this manner allows the ambient sound to be acquired more accurately and reflects the ambient sound actually heard at an ear of the user.
  • this manner may be applied to the active noise cancellation function, and is beneficial for locating an ambient sound source and for regulating the proportion of voice to ambient sound.
  • this manner may also further optimize the amount of noise cancellation, which enables more intelligent adaptive audio control.
  • the control module 21 may also analyze the ambient sound signal acquired by the ambient sound acquisition microphone 13 to obtain the sound pressure level, the energy distribution, and the spectral distribution of the ambient sound signal, and perform a richer scenario analysis with reference to the data acquired by the acceleration sensor, the location module, and other sensors, so as to control the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 more finely and provide a better experience to the user.
  • the adaptive audio control device based on scenario identification may be realized by hardware, software or a combination of hardware and software.
  • the adaptive audio control method based on scenario identification includes the following steps.
  • the usage scenario of the user is analyzed.
  • the acceleration data of the user may be acquired, and the usage scenario of the user is analyzed according to the acceleration data.
  • the geographic location data of the user may also be acquired, and the usage scenario of the user is analyzed according to the acceleration data and the geographic location data.
  • the ambient sound signal of the surrounding environment of the user is acquired, the sound pressure level of the ambient sound signal is calculated, and the energy distribution and the spectral distribution of the ambient sound signal are analyzed.
  • the operation of the audio signal volume adjustment module, the active noise cancellation module, and the ambient sound adjustment module of the audio playback device is controlled according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal. A sketch of the signal analysis step follows.
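A rough sketch of the analysis step: estimating the sound pressure level and a coarse energy/spectral distribution from one ambient sound frame. The 94 dB full-scale calibration constant and the band edges are assumptions rather than values from the disclosure.

```python
import numpy as np

def ambient_features(frame, fs):
    """Estimate the sound pressure level and a coarse energy/spectral
    distribution of one ambient sound frame."""
    rms = np.sqrt(np.mean(np.square(frame)) + 1e-12)
    # Assumes a microphone calibrated so that digital full scale corresponds
    # to 94 dB SPL; a real device would use its own calibration constant.
    spl_db = 20.0 * np.log10(rms) + 94.0
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    edges = [0.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0, fs / 2.0]
    distribution = [float(np.sum(spectrum[(freqs >= lo) & (freqs < hi)]))
                    for lo, hi in zip(edges[:-1], edges[1:])]
    return spl_db, distribution
```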
  • the audio playback device is a headphone.
  • the ambient sound adjustment module 24 includes any one or a combination of the following submodules: the wind noise suppression submodule 241, the voice enhancement submodule 242, the dynamic range control submodule 243, and the EQ processing submodule 244.
  • the analysis of the usage scenario of the user at S401 includes that: the environment type of the user is determined according to the geographic location data; the movement speed of the user is calculated according to the geographic location data; the cadence value of the user is calculated according to the acceleration data; and the movement mode of the user is determined according to the movement speed and the cadence value of the user.
  • the environment types include the indoor environment and the road environment;
  • the movement modes include any one of the following: the stationary mode, the mode of walking on a road, and the transportation mode.
  • if the movement speed is less than the first speed threshold, the user is in the stationary mode; if the movement speed is in the walking speed interval and the cadence value is in the walking cadence value interval, the user is in the walking mode; and if the movement speed is greater than the second speed threshold, the user is in the transportation mode.
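A sketch of this movement-mode decision. The text does not give numeric values for the first speed threshold, the walking speed and cadence intervals, or the second speed threshold, so the defaults below are placeholders only.

```python
def movement_mode(speed_kmh, cadence_spm,
                  first_speed_threshold=2.0,        # placeholder values: the text does
                  walking_speed=(2.0, 7.0),          # not give concrete numbers for
                  walking_cadence=(60, 140),         # these limits
                  second_speed_threshold=20.0):
    """Stationary / walking / transportation decision from speed and cadence."""
    if speed_kmh < first_speed_threshold:
        return "stationary"
    if walking_speed[0] <= speed_kmh <= walking_speed[1] and \
       walking_cadence[0] <= cadence_spm <= walking_cadence[1]:
        return "walking"
    if speed_kmh > second_speed_threshold:
        return "transportation"
    return "unknown"
```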
  • the audio playback device further includes the bone conduction microphone or the infrared proximity sensor.
  • the usage scenario of the user also includes the talking state of the user.
  • the method further includes the following step of determining, according to the signal output by the bone conduction microphone or the infrared proximity sensor, whether the user is in the talking mode.
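As one hedged illustration, the talking mode could be inferred from the bone conduction microphone, which mainly picks up the wearer's own voice; the energy threshold below is an assumption, and an infrared proximity sensor output could be used analogously.

```python
import numpy as np

def is_talking(bone_conduction_frame, energy_threshold=1e-4):
    """Decide the talking mode from the bone conduction microphone signal.
    Sustained energy on this microphone suggests the user is speaking;
    the threshold is illustrative, not a value from the disclosure."""
    return float(np.mean(np.square(bone_conduction_frame))) > energy_threshold
```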
  • the acquisition of the ambient sound signal of the surrounding environment of the user includes that: the ambient sound signal at the real-time location of the user is acquired, and the ambient sound signal heard at an ear of the user is acquired.
  • the operation of controlling the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the dynamic range adjustment performed by the dynamic range control submodule 243 on the ambient sound signal according to the sound pressure level of the ambient sound signal includes that: when the sound pressure level of the ambient sound signal is greater than 40 dBA and less than or equal to 50 dBA, amplification processing is performed on the ambient sound signal; when the sound pressure level of the ambient sound signal is greater than 60 dBA, attenuation processing is performed on the ambient sound signal. A sketch of this mapping follows.
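A sketch of this mapping; only the 40/50/60 dBA thresholds come from the text, while the amount of boost and cut is an assumed parameter.

```python
def drc_gain_db(ambient_spl_dba, boost_db=6.0, cut_db=-6.0):
    """Dynamic-range gain for the ambient sound path, following the thresholds above.
    The boost/cut amounts are assumptions; only the dBA thresholds come from the text."""
    if 40.0 < ambient_spl_dba <= 50.0:
        return boost_db          # quiet ambience: amplification processing
    if ambient_spl_dba > 60.0:
        return cut_db            # loud ambience: attenuation processing
    return 0.0                   # other levels: leave the ambient signal unchanged
```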
  • performing the EQ compensation processing on the ambient sound signal includes performing the EQ compensation processing on the voice signal band and the honk signal band in the ambient sound signal.
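A sketch of such EQ compensation using standard peaking filters. The center frequencies, gains, and Q values chosen for the voice band and the honk band are illustrative assumptions; the disclosure only states that both bands receive EQ compensation.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """RBJ-cookbook peaking biquad: boost (or cut) a band centred at f0."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

def eq_compensate(ambient, fs):
    """Apply EQ compensation to the voice band and the honk band of the ambient sound.
    Centre frequencies, gains and Q values are illustrative assumptions."""
    out = peaking_eq(ambient, fs, f0=1000.0, gain_db=4.0, q=0.8)   # rough voice band
    out = peaking_eq(out, fs, f0=440.0, gain_db=4.0, q=2.0)        # rough honk band
    return out
```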
  • the noise cancellation level of the active noise cancellation module 23 is adjusted according to the sound pressure level of the ambient sound signal.
  • the operation of controlling the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • for the wind noise suppression submodule 241: when the user is in the road environment and in the transportation mode, the wind noise suppression submodule 241 is controlled to be disabled.
  • for the dynamic range control submodule 243: when the user is in the road environment and in the transportation mode, it is monitored whether the sound pressure level of the ambient sound signal is greater than the preset upper limit of the sound pressure level or less than the preset lower limit of the sound pressure level; if the sound pressure level of the ambient sound signal is greater than the preset upper limit, the dynamic range control submodule 243 is triggered to perform attenuation processing on the ambient sound signal; and if the sound pressure level of the ambient sound signal is less than the preset lower limit, the dynamic range control submodule 243 is triggered to perform amplification processing on the ambient sound signal.
  • the operation of controlling the audio signal volume adjustment module 22, the active noise cancellation module 23, and the ambient sound adjustment module 24 according to the usage scenario, the sound pressure level of the ambient sound signal, and the energy distribution and the spectral distribution of the ambient sound signal further includes that:
  • the wind noise suppression submodule 241 and the dynamic range control submodule 243 are controlled to be disabled.
  • each block of the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which includes one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may also occur in a different order than that illustrated in the drawings. For example, two consecutive blocks may be executed substantially in parallel, and may sometimes be executed in a reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of the blocks in the block diagrams and/or flowcharts, may be implemented with a dedicated hardware-based device that performs the specified functions or actions, or with a combination of dedicated hardware and computer instructions.
  • the computer program product provided by the embodiments of the disclosure includes a computer-readable storage medium storing program code, and the program code includes instructions for executing the method described in the above method embodiment. For specific implementation, reference may be made to the method embodiment, which will not be repeated herein.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the described apparatus embodiment is merely exemplary.
  • the unit division is merely logical function division and may be other division in actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • functional units in the embodiments of the disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
  • when the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the disclosure essentially, or the part contributing to the prior art, or some of the technical solutions, may be implemented in the form of a software product.
  • the software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of the disclosure.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
EP19741628.2A 2018-01-17 2019-01-07 Dispositif et procédé de contrôle audio adaptatif basé sur une identification de scénario Withdrawn EP3672274A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810043127.XA CN110049403A (zh) 2018-01-17 2018-01-17 一种基于场景识别的自适应音频控制装置和方法
PCT/CN2019/070657 WO2019141102A1 (fr) 2018-01-17 2019-01-07 Dispositif et procédé de contrôle audio adaptatif basé sur une identification de scénario

Publications (2)

Publication Number Publication Date
EP3672274A1 true EP3672274A1 (fr) 2020-06-24
EP3672274A4 EP3672274A4 (fr) 2021-05-05

Family

ID=67273101

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19741628.2A Withdrawn EP3672274A4 (fr) 2018-01-17 2019-01-07 Dispositif et procédé de contrôle audio adaptatif basé sur une identification de scénario

Country Status (3)

Country Link
EP (1) EP3672274A4 (fr)
CN (1) CN110049403A (fr)
WO (1) WO2019141102A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3869821A1 (fr) * 2020-02-20 2021-08-25 Beijing Xiaoniao Tingting Technology Co., Ltd Procédé et dispositif de traitement de signal pour écouteur et écouteur
WO2022132721A1 (fr) * 2020-12-15 2022-06-23 Google Llc Détecteur d'ambiance pour annulation active du bruit (anc) à mode double

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110830862A (zh) * 2019-10-10 2020-02-21 广东思派康电子科技有限公司 一种自适应降噪的降噪耳机
CN110996205A (zh) * 2019-11-28 2020-04-10 歌尔股份有限公司 耳机的控制方法、耳机及可读存储介质
CN111179984B (zh) * 2019-12-31 2022-02-08 Oppo广东移动通信有限公司 音频数据处理方法、装置及终端设备
CN113129917A (zh) * 2020-01-15 2021-07-16 荣耀终端有限公司 基于场景识别的语音处理方法及其装置、介质和系统
CN111447523B (zh) * 2020-03-31 2022-02-18 歌尔科技有限公司 耳机及其降噪方法、计算机可读存储介质
CN111294691B (zh) * 2020-03-31 2021-10-26 歌尔股份有限公司 耳机及其降噪方法、计算机可读存储介质
CN111586522B (zh) * 2020-05-20 2022-04-15 歌尔科技有限公司 一种耳机降噪方法、耳机降噪装置、耳机及存储介质
CN111698602A (zh) * 2020-06-19 2020-09-22 青岛歌尔智能传感器有限公司 耳机及其耳机控制方法、控制装置和可读存储介质
CN113873379B (zh) * 2020-06-30 2023-05-02 华为技术有限公司 一种模式控制方法、装置及终端设备
WO2022022585A1 (fr) * 2020-07-31 2022-02-03 华为技术有限公司 Dispositif électronique et procédé de réduction de bruit audio et support associé
CN114079838B (zh) * 2020-08-21 2024-04-09 华为技术有限公司 一种音频控制方法、设备及系统
CN111935584A (zh) * 2020-08-26 2020-11-13 恒玄科技(上海)股份有限公司 用于无线耳机组件的风噪处理方法、装置以及耳机
CN112312257B (zh) * 2020-09-08 2022-11-29 深圳市逸音科技有限公司 一种主动数字降噪智能3d耳机
CN112185409A (zh) * 2020-10-15 2021-01-05 福建瑞恒信息科技股份有限公司 一种双麦克风降噪方法和存储设备
CN112767908B (zh) * 2020-12-29 2024-05-21 安克创新科技股份有限公司 基于关键声音识别的主动降噪方法、电子设备及存储介质
CN112765395B (zh) * 2021-01-22 2023-09-19 咪咕音乐有限公司 音频播放方法、电子设备和存储介质
CN112954532A (zh) * 2021-03-06 2021-06-11 深圳市尊特数码有限公司 蓝牙耳机调节降噪等级的方法、系统、终端及存储介质
CN113505441B (zh) * 2021-07-29 2023-03-14 中国第一汽车股份有限公司 一种车辆风噪隔声性能评估方法、装置、设备及存储介质
CN114121033B (zh) * 2022-01-27 2022-04-26 深圳市北海轨道交通技术有限公司 基于深度学习的列车广播语音增强方法和系统
CN114554346B (zh) * 2022-02-24 2022-11-22 潍坊歌尔电子有限公司 Anc参数的自适应调整方法、设备及存储介质
CN114280571B (zh) * 2022-03-04 2022-07-19 北京海兰信数据科技股份有限公司 一种雨杂波信号的处理方法、装置及设备
WO2024119396A1 (fr) * 2022-12-07 2024-06-13 深圳市韶音科技有限公司 Dispositif acoustique à porter sur soi à oreille ouverte et procédé d'annulation active du bruit associé
CN115778046B (zh) * 2023-02-09 2023-06-09 深圳豪成通讯科技有限公司 基于数据分析的智能安全帽调节控制方法和系统
CN117041803B (zh) * 2023-08-30 2024-03-22 江西瑞声电子有限公司 耳机播放的控制方法、电子设备及存储介质
CN116961806B (zh) * 2023-09-21 2024-02-09 广东保伦电子股份有限公司 一种基于卫星授时的花车音频同步广播系统,装置及方法
CN117238322B (zh) * 2023-11-10 2024-01-30 深圳市齐奥通信技术有限公司 一种基于智能感知的自适应语音调控方法及系统

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9113240B2 (en) * 2008-03-18 2015-08-18 Qualcomm Incorporated Speech enhancement using multiple microphones on multiple devices
CN101458931A (zh) * 2009-01-08 2009-06-17 无敌科技(西安)有限公司 一种消除语音信号中的环境噪声的方法
EP2439961B1 (fr) * 2009-06-02 2015-08-12 Panasonic Intellectual Property Management Co., Ltd. Aide auditive, système d'aide auditive, procédé de détection de marche et procédé d'aide auditive
US9025782B2 (en) * 2010-07-26 2015-05-05 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for multi-microphone location-selective processing
US10218327B2 (en) * 2011-01-10 2019-02-26 Zhinian Jing Dynamic enhancement of audio (DAE) in headset systems
US9055367B2 (en) * 2011-04-08 2015-06-09 Qualcomm Incorporated Integrated psychoacoustic bass enhancement (PBE) for improved audio
JP2013102370A (ja) * 2011-11-09 2013-05-23 Sony Corp ヘッドホン装置、端末装置、情報送信方法、プログラム、ヘッドホンシステム
US9769556B2 (en) * 2012-02-22 2017-09-19 Snik Llc Magnetic earphones holder including receiving external ambient audio and transmitting to the earphones
JP5949061B2 (ja) * 2012-03-30 2016-07-06 ソニー株式会社 情報処理装置、情報処理方法、及びプログラム
CN103945062B (zh) * 2014-04-16 2017-01-18 华为技术有限公司 一种用户终端的音量调节方法、装置及终端
CN104158506A (zh) * 2014-07-29 2014-11-19 腾讯科技(深圳)有限公司 调节音量的方法、装置及终端
CN104618829A (zh) * 2014-12-29 2015-05-13 歌尔声学股份有限公司 耳机环境声音的调节方法和耳机
CN106285083B (zh) * 2015-05-12 2019-02-15 国网浙江省电力公司 一种变电站降噪方法
CN105530581A (zh) * 2015-12-10 2016-04-27 安徽海聚信息科技有限责任公司 一种基于声音识别的智能穿戴设备和控制方法
CN105611443B (zh) * 2015-12-29 2019-07-19 歌尔股份有限公司 一种耳机的控制方法、控制系统和耳机
CN106792315B (zh) * 2017-01-05 2023-11-21 歌尔科技有限公司 一种抵消环境噪声的方法和装置及一种主动降噪耳机
CN106678552B (zh) * 2017-01-05 2019-03-26 北京埃德尔黛威新技术有限公司 一种新型渗漏预警方法
CN107105359B (zh) * 2017-06-02 2019-10-18 歌尔科技有限公司 一种切换耳机工作模式方法和一种耳机
CN107484058A (zh) * 2017-09-26 2017-12-15 联想(北京)有限公司 耳机装置和控制方法

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3869821A1 (fr) * 2020-02-20 2021-08-25 Beijing Xiaoniao Tingting Technology Co., Ltd Procédé et dispositif de traitement de signal pour écouteur et écouteur
US11302298B2 (en) 2020-02-20 2022-04-12 Beijing Xiaoniao Tingting Technology Co., LTD. Signal processing method and device for earphone, and earphone
EP3869821B1 (fr) * 2020-02-20 2022-10-26 Beijing Xiaoniao Tingting Technology Co., Ltd Procédé et dispositif de traitement de signal pour écouteur et écouteur
WO2022132721A1 (fr) * 2020-12-15 2022-06-23 Google Llc Détecteur d'ambiance pour annulation active du bruit (anc) à mode double
US11468875B2 (en) 2020-12-15 2022-10-11 Google Llc Ambient detector for dual mode ANC
US11887576B2 (en) 2020-12-15 2024-01-30 Google Llc Ambient detector for dual mode ANC

Also Published As

Publication number Publication date
WO2019141102A1 (fr) 2019-07-25
CN110049403A (zh) 2019-07-23
EP3672274A4 (fr) 2021-05-05

Similar Documents

Publication Publication Date Title
EP3672274A1 (fr) Dispositif et procédé de contrôle audio adaptatif basé sur une identification de scénario
US10979814B2 (en) Adaptive audio control device and method based on scenario identification
KR102354215B1 (ko) 콘텍스트에 기초한 주변 사운드 향상 및 음향 노이즈 소거
CN102124758B (zh) 助听器、助听系统、步行检测方法和助听方法
KR101542027B1 (ko) 강한 노이즈 환경 하에서의 헤드셋 통신 방법 및 헤드셋
CN103959813B (zh) 耳孔可佩戴式声音收集设备,信号处理设备和声音收集方法
US11153677B2 (en) Ambient sound enhancement based on hearing profile and acoustic noise cancellation
CN106507258B (zh) 一种听力装置及其运行方法
CN109348327B (zh) 一种主动降噪系统
CN105530580A (zh) 听力系统
US11304001B2 (en) Speaker emulation of a microphone for wind detection
KR20180021368A (ko) 상황 인식력을 갖는 스포츠 헤드폰
CN109310525A (zh) 媒体补偿通过和模式切换
EP2617127B2 (fr) Procédé et système pour fournir à un utilisateur une aide auditive
CN104754462A (zh) 音量自动调节装置及方法和耳机
EP4165883A1 (fr) Transition de mode synchronisée
JP2016110050A (ja) 音声処理装置及び音声明瞭化装置並びに音声処理方法
US10643597B2 (en) Method and device for generating and providing an audio signal for enhancing a hearing impression at live events
US11812243B2 (en) Headset capable of compensating for wind noise
WO2015044000A1 (fr) Dispositif et procédé de superposition d'un signal acoustique
US11782673B2 (en) Controlling audio output
US20230169948A1 (en) Signal processing device, signal processing program, and signal processing method
WO2022230275A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
CN113038315A (zh) 一种语音信号处理方法及装置

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200317

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20210406

RIC1 Information provided on ipc code assigned before grant

Ipc: H04R 1/10 20060101AFI20210329BHEP

Ipc: H04R 3/00 20060101ALI20210329BHEP

Ipc: H04R 5/04 20060101ALI20210329BHEP

Ipc: H04R 1/22 20060101ALI20210329BHEP

Ipc: G10K 11/178 20060101ALN20210329BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20210823