CN114125639B - Audio signal processing method and device and electronic equipment - Google Patents


Info

Publication number
CN114125639B
Authority
CN
China
Prior art keywords
sound
spectrum
gain parameter
audio
output signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111487952.7A
Other languages
Chinese (zh)
Other versions
CN114125639A (en)
Inventor
冯海彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111487952.7A priority Critical patent/CN114125639B/en
Publication of CN114125639A publication Critical patent/CN114125639A/en
Application granted granted Critical
Publication of CN114125639B publication Critical patent/CN114125639B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083Reduction of ambient noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/04Circuits for transducers, loudspeakers or microphones for correcting frequency response

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Abstract

The invention provides an audio signal processing method, an audio signal processing apparatus, and an electronic device. The method comprises the following steps: acquiring a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining, according to the external scene category, a first gain parameter applied to the audio output signal; and compensating the audio output signal with the first gain parameter so that the speaker plays the compensated audio output signal. Because the invention can identify the user's scene category from the ambient sound, determine the first gain parameter from that category, and compensate the sound played by the speaker with the first gain parameter, the influence of environmental noise on sound quality is reduced and the user's listening experience is improved.

Description

Audio signal processing method and device and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of audio playing, in particular to an audio signal processing method and device and electronic equipment.
Background
Headphones are a commonly used electronic device that people wear in daily life to listen to audio such as music. Because headphones are worn in different environments, the sound effect heard by the user is inconsistent across those wearing environments.
In the related art, in order to make the user's hearing experience more consistent across wearing environments, a microphone for acquiring ambient sound is usually added to the earphone. The ambient sound is acquired through this microphone, and a cancellation sound opposite in phase to the ambient sound is generated from it; when the earphone plays music, it plays the cancellation sound at the same time, which reduces the ambient sound the user hears and thus the difference in hearing experience caused by wearing the earphone in different environments.
However, implementing this approach requires superimposing the cancellation sound on the music played by the earphone, so part of the music content is inevitably cancelled and the sound quality of the earphone is degraded. How to preserve the earphone's sound quality as much as possible while reducing the masking effect is therefore a technical problem to be solved.
Disclosure of Invention
The embodiments of the present invention provide an audio signal processing method, an audio signal processing apparatus, an earphone, and an electronic device, to address the problem of poor sound quality in audio playback devices in the prior art.
In a first aspect, an embodiment of the present invention provides an audio signal processing method, including:
acquiring a first sound spectrum of an external environment;
determining an external scene category corresponding to the first sound spectrum;
determining a first gain parameter of an audio output signal according to the external scene category;
and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal, so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
In a second aspect, an embodiment of the present invention provides an audio signal processing apparatus, including:
an acquisition module, configured to acquire a first sound spectrum of an external environment;
a scene category module, configured to determine an external scene category corresponding to the first sound spectrum;
a parameter module, configured to determine a first gain parameter of the audio output signal according to the external scene category;
and a compensation module, configured to compensate the audio output signal according to the first gain parameter and play the compensated audio output signal, so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the audio signal processing method as provided by the present invention when the processor executes the program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where a computer program is stored on the computer readable storage medium, where the computer program when executed by a processor implements the steps of the audio signal processing method as provided by the present invention.
In an embodiment of the present invention, the method includes: acquiring a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal. The invention can identify the current scene category of the user according to the environmental sound of the user, determine the first gain parameter according to the scene category, and compensate the played sound through the first gain parameter, thereby reducing the influence of environmental noise on the sound quality and improving the listening experience of the user.
Drawings
Fig. 1 is a flowchart of the steps of an audio signal processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of sound spectrum matching according to an embodiment of the present invention;
Fig. 3 is a first schematic diagram of sound spectrum comparison for external sound according to an embodiment of the present invention;
Fig. 4 is a flowchart of the steps of another audio signal processing method according to an embodiment of the present invention;
Fig. 5 is a schematic illustration of an earphone worn on a standard human ear model according to an embodiment of the present invention;
Fig. 6 is a second schematic diagram of sound spectrum comparison according to an embodiment of the present invention;
Fig. 7 is a block diagram of an audio signal processing apparatus according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the hardware structure of an earphone according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a flowchart of steps of an audio signal processing method according to an embodiment of the present invention, where the method is applied to an electronic device, as shown in fig. 1, and the method may include:
step 101, a first sound spectrum of an external environment is acquired.
Ambient sound collected in a typical environment usually contains components at many frequencies, and the intensity of each frequency component differs between sounds; for example, thunder contains more low-frequency sound, while a piano contains more high-frequency sound.
The external sound of the external environment where the user is located can be collected through the external microphone, and after the external sound is processed through the audio processing module, a first sound frequency spectrum of the external environment can be obtained.
A sound spectrum reflects the relationship between the frequencies of a wave (e.g., sound or an electromagnetic wave) and its vibrational energy. Concretely, a sound spectrum can be expressed as the correspondence between each frequency in a sound and its decibel level. The decibel is a pure ratio: it is a logarithmic counting method with no unit symbol, and under the same decibel reference, sounds of the same intensity correspond to the same decibel value.
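As a hedged illustration of the decibel relation just described (the reference level here is an arbitrary assumption, not specified by the patent), an amplitude can be mapped to a dimensionless dB value like this:

```python
import math

def to_decibels(amplitude: float, reference: float = 1.0) -> float:
    """Convert an amplitude to decibels relative to a reference level.

    The decibel is a dimensionless logarithmic ratio, so it carries no
    unit symbol; doubling the amplitude adds roughly 6 dB.
    """
    return 20.0 * math.log10(amplitude / reference)

# An amplitude equal to the reference is 0 dB; ten times it is +20 dB.
```

Under the same reference, equal amplitudes always map to the same decibel value, which is the consistency property the spectrum comparison steps below rely on.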
Step 102, determining an external scene category corresponding to the first sound spectrum.
Because the frequency composition of ambient sound differs between environments, the external scene category corresponding to the first sound spectrum can be determined from the characteristics of the acquired first sound spectrum. The external scene category refers to the category of scene in which the user wears the earphone. For example, when the user uses the earphone on an aircraft, the acquired first sound spectrum is the spectrum of sound inside the aircraft cabin, and the corresponding external scene category is the aircraft cabin category; when the user uses the earphone in a shopping mall, the acquired first sound spectrum is the spectrum of sound in the mall, and the corresponding external scene category is the shopping mall category.
Specifically, at least one spectrum feature corresponding to each preset scene category may be stored in the memory in advance. After the first sound spectrum is obtained, its features are compared with the spectrum features of all preset scene categories, and according to the comparison results the preset scene category with the highest matching degree is determined as the external scene category.
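A minimal sketch of this matching step, assuming spectra are stored as frequency-to-intensity mappings and that "matching degree" is measured by mean absolute intensity difference (the patent does not fix a specific metric, so this is an illustrative choice):

```python
def best_matching_scene(first_spectrum, preset_features):
    """Return the preset scene category whose stored spectrum feature is
    closest to the measured first sound spectrum.

    first_spectrum:  dict mapping frequency (Hz) -> intensity (dB)
    preset_features: dict mapping scene name -> such a spectrum dict
    """
    def mean_abs_diff(a, b):
        shared = a.keys() & b.keys()  # compare only shared sampling points
        return sum(abs(a[f] - b[f]) for f in shared) / len(shared)

    # Highest matching degree == smallest mean intensity difference.
    return min(preset_features,
               key=lambda name: mean_abs_diff(first_spectrum, preset_features[name]))
```

For instance, a measured spectrum heavy in low-frequency energy would match a stored "aircraft cabin" feature rather than a "shopping mall" feature.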
Referring to fig. 2, which shows a schematic diagram of sound spectrum matching provided by an embodiment of the present application: the thick solid line f is the spectrum feature corresponding to preset scene class A, the dotted line g is the first sound spectrum acquired in the current external environment, and the thin solid line h is the spectrum feature corresponding to preset scene class B.
Step 103, determining a first gain parameter of the audio output signal according to the external scene category.
Because of the masking effect of the human ear, when the ear receives multiple sound stimuli at the same frequency, it cannot fully receive the sound information of all of them. In plain terms, while a user listens to music played by the earphone, the user also hears the sound of the current environment, and sound at a given frequency in the environment affects how well the user receives the audio information at the same frequency in the music.
Referring to fig. 3, which shows a schematic diagram of sound spectrum comparison for external sound provided by an embodiment of the present invention: the abscissa is sound frequency, in hertz (Hz), and the ordinate is sound intensity, in decibels (dB). The thick solid line a represents the spectrum curve in a standard environment (such as a quiet indoor environment), the dotted line b the spectrum curve in an airport environment, and the thin solid line c the spectrum curve in a coffee shop environment. It can be seen that the airport's ambient sound has more low-frequency content, which masks the low-frequency sound played by the earphone more strongly, while the coffee shop's ambient sound has more high-frequency content, which masks the high-frequency sound played by the earphone more strongly.
The memory may further store a preset gain parameter for each preset scene category. After the external scene category is determined, the preset gain parameter corresponding to it can be read from the memory and used as the first gain parameter. The first gain parameter is used to compensate the sound output by the earphone speaker: it contains at least the compensation amplitude of one sound frequency band, and the corresponding band of the sound played by the speaker is compensated according to that amplitude.
Therefore, after the first gain parameter is determined, the sound played by the earphone can be compensated according to it, so that every frequency in the played sound is either free of masking or masked to the same degree. This ensures that the user's perception of the music under the current environmental noise is consistent with how the music would sound in a quiet environment.
A gain parameter specifies the degree of amplification or attenuation the earphone applies to each frequency band of the sound to be played. One gain parameter may cover every band within the earphone's frequency response range, with an amplification or attenuation amount for each band; the gain amplitude may be a proportional factor or a specific dB value.
For example, if the frequency response range of a headset is 100 Hz to 10000 Hz, the gain parameter may include a frequency band A of [100 Hz, 5000 Hz] and a frequency band B of (5000 Hz, 10000 Hz], where band A corresponds to a gain of 10 dB and band B to a gain of -10 dB. When the headset plays sound with this gain parameter, the intensity of frequencies in [100 Hz, 5000 Hz] is increased by 10 dB and the intensity of frequencies in (5000 Hz, 10000 Hz] is reduced by 10 dB.
It should be noted that the width of each band in the gain parameter can be adjusted flexibly according to actual requirements: narrower bands give more precise gain, while wider bands consume fewer computing resources. For example, the 100 Hz to 10000 Hz frequency response range may be divided in steps of 100 Hz, yielding 99 bands.
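The per-band compensation described above can be sketched as follows, reusing the two-band layout from the example (the band edges and intensity levels are assumptions for demonstration only):

```python
def apply_band_gains(band_levels_db, gains_db):
    """Apply a gain parameter: each band's intensity (in dB) is raised or
    lowered by that band's gain amplitude; bands without a gain entry
    are passed through unchanged."""
    return {band: level + gains_db.get(band, 0.0)
            for band, level in band_levels_db.items()}

# Two-band example mirroring the text: +10 dB below 5000 Hz, -10 dB above.
example_gains = {(100, 5000): 10.0, (5000, 10000): -10.0}
```

Narrower bands would simply mean more keys in these dictionaries, trading computation for gain precision exactly as the text notes.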
And 104, compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
After determining the corresponding first gain parameter according to the first sound frequency spectrum of the current external environment, when the speaker of the earphone plays sound, the audio processing module compensates the sound intensity of each frequency band in the audio output signal, generates an analog signal for driving the speaker according to the compensated audio output signal, and sends the analog signal to the speaker to enable the speaker to play sound.
While the user is using the earphone, the first sound spectrum of the external environment can be re-acquired continuously at preset time intervals. If the external scene category corresponding to the first sound spectrum is detected to have changed (for example, the user boards an aircraft from the airport waiting hall, so the category changes from airport to aircraft), the preset gain parameter corresponding to the new external scene category is re-acquired and the first gain parameter is re-determined from it, so that the sound output by the earphone actively adapts to changes in the external scene category.
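One way to sketch this periodic re-detection (all the callables, the polling interval, and the cycle cap here are hypothetical placeholders, not part of the patent):

```python
import time

def monitor_external_scene(get_spectrum, classify, lookup_gain, apply_gain,
                           interval_s=5.0, max_cycles=3):
    """Re-acquire the external sound spectrum at fixed intervals and,
    whenever the detected scene category changes (e.g. airport ->
    aircraft), load and apply that scene's preset gain parameter."""
    current_scene = None
    for _ in range(max_cycles):
        scene = classify(get_spectrum())
        if scene != current_scene:  # category changed: refresh gain
            current_scene = scene
            apply_gain(lookup_gain(scene))
        time.sleep(interval_s)
    return current_scene
```

In a real earphone this loop would run for as long as the device is worn rather than for a fixed number of cycles.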
In summary, the audio signal processing method provided by the embodiment of the invention includes the steps of obtaining a first sound frequency spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal. According to the invention, the scene type of the user can be identified according to the environment sound of the user, the first gain parameter is determined according to the scene type, the sound played by the loudspeaker is compensated through the first gain parameter, the influence of environment noise on the sound quality can be reduced, and the listening experience of the user is improved.
Fig. 4 is a flowchart of steps of another audio signal processing method according to an embodiment of the present invention, where the method is applied to an electronic device, as shown in fig. 4, and the method may include:
in step 201, a first sound spectrum of an external environment is acquired.
Optionally, step 201 may further include:
sub-step 2011, obtaining a first audio of an external environment;
After the user connects the earphone to the terminal, or turns on the earphone's power switch, the first audio of the user's current environment can be acquired through the external microphone. It should be noted that the earphone may also be provided with a wear detection module, in which case the first audio is acquired after it detects that the user is wearing the earphone.
Substep 2012, determining the first sound spectrum according to a correspondence between sound frequency bands and sound intensities in the first audio.
After the audio processing module obtains the first audio, it can analyze the first audio to obtain the intensity corresponding to each sound frequency. Here, sound intensity refers to the volume of sound at each frequency in the first audio, and may be represented by a decibel value or by other measures of sound strength (such as sound pressure, amplitude, or loudness). By measuring the intensity of the sound in each frequency band of the first audio, the correspondence between each sound frequency band and its sound intensity can be determined.
Since the first audio contains sound over a continuous frequency range, for example 10 Hz to 20000 Hz, it contains infinitely many frequencies, and the sound intensity of every frequency cannot be determined. Instead, only preset frequency sampling points are measured to obtain their corresponding sound intensities, and the first sound spectrum is constructed from the correspondence between these preset sampling points and their intensities.
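A sketch of building a spectrum from preset frequency sampling points: here the intensity at each probe frequency is estimated by evaluating the discrete Fourier transform at exactly those frequencies (a real implementation would more likely use an FFT with band averaging; the probe frequencies and sample rate are arbitrary assumptions):

```python
import math

def spectrum_at(samples, sample_rate, probe_freqs):
    """Estimate peak amplitude at each preset frequency sampling point by
    a direct DFT evaluation at exactly those frequencies."""
    n = len(samples)
    spectrum = {}
    for f in probe_freqs:
        re = sum(s * math.cos(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        spectrum[f] = 2.0 * math.hypot(re, im) / n  # peak amplitude at f
    return spectrum
```

Feeding a pure 440 Hz tone through this yields an amplitude near the tone's amplitude at the 440 Hz probe and near zero at unrelated probes.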
Step 202, determining an external scene category corresponding to the first sound spectrum.
A technician can collect the environmental sound in the environments corresponding to different scene categories and construct, from the collected sound, the outside-the-ear sound spectrum corresponding to each scene category. When the earphone leaves the factory, at least one preset scene category, the outside-the-ear sound spectra, and the correspondence between the preset scene categories and the outside-the-ear sound spectra can be stored in the earphone's memory.
Optionally, step 202 may further include:
Sub-step 2021, obtaining at least one pre-set outside-the-ear sound spectrum; wherein, there is a one-to-one correspondence between the at least one external sound spectrum and a preset scene category.
Because the corresponding relation between the preset scene category and the external sound spectrum is preset in the earphone, the external sound spectrum corresponding to the preset scene category stored in the memory can be acquired after the first sound spectrum is determined.
It should be noted that the outside-the-ear sound spectra corresponding to all preset scene categories may be read at once, or the spectrum for one preset scene category may be read at a time, in a preset order, for one comparison operation, reducing the demand on runtime memory capacity.
Substep 2022, determining a spectral similarity of the at least one outside-the-ear sound spectrum to the first sound spectrum.
The spectral similarity between an outside-the-ear sound spectrum and the first sound spectrum can be determined from the average difference of the sound intensities at a set of frequency feature points.
Specifically, the sound intensities at a preset number of sampling frequencies are taken from the first sound spectrum, and the intensities at the same sampling frequencies are taken from the outside-the-ear sound spectrum. The intensity difference at each sampling frequency is computed, the average of these differences is calculated, and the reciprocal of that average difference can then serve as the spectral similarity: the smaller the average difference, the higher the similarity.
Further, although the distribution of noise across frequencies tends to be similar within the same environment, the overall noise level in that environment may differ. Therefore, the spectral similarity can also be determined from the degree of difference between the ratio of the two sound intensities of at least one frequency pair in the first sound spectrum and the ratio of the same frequency pair in the outside-the-ear sound spectrum of the preset scene category.
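The two similarity measures just described can be sketched as follows, with spectra as frequency-to-intensity dicts; the reciprocal rule follows the text, while the ratio-pair variant treats the stored values as linear magnitudes, which is an assumption of this sketch:

```python
import math

def spectral_similarity(spec_a, spec_b):
    """Similarity as the reciprocal of the mean absolute intensity
    difference at shared sampling frequencies: the smaller the average
    difference, the higher the similarity."""
    shared = spec_a.keys() & spec_b.keys()
    mean_diff = sum(abs(spec_a[f] - spec_b[f]) for f in shared) / len(shared)
    return math.inf if mean_diff == 0.0 else 1.0 / mean_diff

def ratio_pair_similarity(spec_a, spec_b, freq_pair):
    """Level-invariant variant: compare the intensity ratio of one
    frequency pair in each spectrum, so a uniformly louder recording of
    the same noise still matches."""
    f1, f2 = freq_pair
    diff = abs(spec_a[f1] / spec_a[f2] - spec_b[f1] / spec_b[f2])
    return math.inf if diff == 0.0 else 1.0 / diff
```

The ratio-pair form captures the observation above: a louder copy of the same noise keeps its frequency ratios and so remains maximally similar.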
Substep 2023 determines, as the external scene category, a preset scene category corresponding to an external sound spectrum having a spectrum similarity greater than or equal to a preset similarity.
Generally, if the environment in which the user is located matches a certain preset scene category, the similarity between the first sound spectrum and the outside-the-ear sound spectrum of that category will be greater than or equal to the preset similarity.
Further, the amount of variation within a preset scene category itself differs between categories: for example, the noise characteristics inside the cabins of different aircraft are roughly the same, whereas the noise characteristics of different street scenes vary widely with time, location, traffic flow, pedestrian flow, and so on. Accordingly, a different preset similarity can be set for each preset scene category, based on field measurements of the noise variation within that category at different times and places.
When none of the spectral similarities reaches its preset similarity, the preset scene category whose similarity is closest to its preset similarity is determined as the external scene category. When more than one spectral similarity exceeds its preset similarity, the preset scene category whose similarity exceeds its preset similarity by the largest margin is determined as the external scene category.
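A sketch of this selection rule with per-category preset similarity thresholds (category names and all numbers are hypothetical):

```python
def pick_external_scene(similarities, thresholds):
    """Pick the category whose similarity exceeds its own preset
    threshold by the largest margin; if none does, fall back to the
    category closest to its threshold (least-negative margin)."""
    margins = {name: similarities[name] - thresholds[name]
               for name in similarities}
    above = {name: m for name, m in margins.items() if m >= 0}
    pool = above if above else margins  # fallback when nothing qualifies
    return max(pool, key=pool.get)
```

Both branches of the text's rule reduce to "largest margin", which keeps the decision deterministic even when every candidate falls short.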
Step 203, querying the one-to-one correspondence between preset scene categories and preset gain parameters.
The technician can set corresponding preset gain parameters for different preset scene categories, store the preset gain parameters in a memory of the earphone when the earphone leaves the factory, and store the corresponding relation between the preset scene categories and the preset gain parameters.
The technical staff can set up an acoustic laboratory and simulate, in it, the environmental noise corresponding to a preset scene category. The measured spectrum curve of the earphone under that preset scene category is compared with its spectrum curve under the standard scene category while the gain parameter is varied until the two curves coincide; the adjustment value of the gain parameter is then recorded and stored as the preset gain parameter for that preset scene category.
And 204, determining a preset gain parameter corresponding to the external scene type as the first gain parameter.
After the external scene category is determined, a corresponding relation between the preset scene category and the preset gain parameter can be queried from a memory, and the preset gain parameter corresponding to the external scene category is obtained as a first gain parameter.
Optionally, step 204 may further include:
sub-step 2041, acquiring a sound effect mode;
sub-step 2042, determining a reference gain parameter according to the sound effect mode.
Since the earphone may offer multiple sound effect modes (rock, pop, and so on), each corresponding to a different reference gain parameter, a user may already have a reference gain parameter applied while using the earphone. In that case the first gain parameter can be obtained by superimposing the preset gain parameter of the external scene category on the applied reference gain parameter, so that eliminating the masking effect does not disturb the sound effect mode the user has selected.
The reference gain parameter may be the earphone's factory-default gain parameter or the gain parameter corresponding to a sound effect mode. For example, in the default gain parameter the gain amplitude of every band is 0, that is, the earphone plays the received audio signal of each band at 100% intensity; in the gain parameter of the rock sound effect mode, the low-frequency band may have a gain amplitude of 20 dB and the high-frequency band a gain amplitude of -20 dB, highlighting the bass in the music.
Sub-step 2043, determining the first gain parameter according to the preset gain parameter corresponding to the external scene category and the reference gain parameter.
The gain amplitude corresponding to each frequency band in the first gain parameter can be calculated through the gain amplitude corresponding to each frequency band in the preset gain parameter corresponding to the external scene category and the gain amplitude corresponding to each frequency band in the reference gain parameter. The gain amplitude may be proportional or a specific db value.
Therefore, when the gain amplitudes are proportional factors, the gain amplitude of each band in the first gain parameter is calculated by multiplying the gain amplitude of that band in the preset gain parameter of the external scene category by the gain amplitude of the same band in the reference gain parameter. When the gain amplitudes are decibel values, the two amplitudes for the same band are added instead.
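Both combination rules can be sketched in one helper; the band names and values are illustrative, and a band missing from one parameter defaults to the identity gain (0 dB, or a factor of 1):

```python
def combine_gain_params(preset, reference, in_decibels=True):
    """Combine a scene's preset gain with the sound-effect-mode reference
    gain per band: add dB values, multiply proportional factors."""
    identity = 0.0 if in_decibels else 1.0
    combine = (lambda a, b: a + b) if in_decibels else (lambda a, b: a * b)
    return {band: combine(preset.get(band, identity),
                          reference.get(band, identity))
            for band in preset.keys() | reference.keys()}
```

Addition in the dB domain and multiplication in the linear domain are the same operation, which is why the two cases mirror each other.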
After the first gain parameter is determined, the sound of each band to be played by the earphone can be gained directly according to it and then played, so that under the external scene category the earphone's output sounds the same as it would under the standard scene category, and the noise of the external scene does not distort the sound the user hears from the earphone.
Step 205, a second audio spectrum of the in-ear environment is acquired.
Optionally, step 205 may include:
Sub-step 2051, obtaining, by an in-ear microphone, a second audio of an in-ear environment;
In the embodiment of the invention, the in-ear sound of the in-ear environment of the user can be collected through the in-ear microphone, and the second audio spectrum of the in-ear environment can be obtained after the in-ear sound is processed by the audio processing module.
Sub-step 2052, determining the second audio spectrum according to a correspondence between sound bands and sound intensities in the second audio.
The manner of determining the second audio spectrum in this step is similar to that of determining the first audio spectrum in the substep 2012, and the description thereof is omitted.
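The correspondence between sound frequency bands and sound intensities described in sub-steps 2012 and 2052 can be sketched as follows. This is a minimal plain-DFT estimate, assuming the spectrum is a list of per-band intensities in dB; real earphone firmware would use a windowed FFT with frame averaging, and the band edges and sample rate here are illustrative:

```python
import math

def band_spectrum(samples, sample_rate, band_edges_hz):
    """Estimate sound intensity (dB) for each frequency band of an audio clip.

    For each band, sum the power of the DFT bins that fall inside the
    band's edge frequencies.
    """
    n = len(samples)
    intensities = []
    for lo, hi in band_edges_hz:
        k_lo = max(1, round(lo * n / sample_rate))   # skip the DC bin
        k_hi = round(hi * n / sample_rate)
        power = 0.0
        for k in range(k_lo, k_hi):
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += re * re + im * im
        intensities.append(10.0 * math.log10(power + 1e-12))
    return intensities

# Example: a 1 kHz tone should dominate the 500-2000 Hz band
sample_rate = 8000
n = 256
tone = [math.sin(2 * math.pi * 1000 * i / sample_rate) for i in range(n)]
bands = [(31, 500), (500, 2000), (2000, 4000)]
spectrum = band_spectrum(tone, sample_rate, bands)
```

The resulting per-band intensity list is the form in which the first and second audio spectra are compared in the later steps.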
Step 206, determining a second gain parameter according to the second audio spectrum and the in-ear sound spectrum corresponding to the external scene category.
In the development process of the earphone, an acoustic engineer can wear the earphone on a standard human ear, acquire the acoustic characteristics of the earphone through a microphone positioned at a cochlea of the standard human ear and adjust the sound quality of the earphone so as to design the earphone with better sound quality. Therefore, the closer the shape of the ear canal chamber of the user is to the standard human ear, the better the sound quality perceived by the user will be.
After the earphone is calibrated, it can be worn on a standard human ear model in an acoustic laboratory in a standard wearing manner; environmental noise of each preset scene category is simulated in the laboratory, the in-ear sound is collected through the in-ear microphone of the earphone, and the in-ear sound spectrum of the earphone under each preset scene category is determined. In essence, the in-ear sound spectrum is the spectrum of the ambient noise of the external environment as received from the ear canal by the in-ear microphone of the earphone.
Referring to fig. 5, fig. 5 shows a schematic diagram of wearing an earphone on a standard human ear model according to an embodiment of the present invention, as shown in fig. 5, where the earphone includes an earphone housing 45, a speaker 42, an in-ear microphone 43 and an out-of-ear microphone 44, the earphone is worn on the standard human ear model 46, the in-ear microphone 43 may receive ambient sound entering the ear canal cavity 47 from the ear canal cavity 47 of the standard human ear model 46, the out-of-ear microphone 44 may directly receive ambient sound from outside the ear canal cavity 47 of the standard human ear model 46, and the speaker 42 plays an audio signal. The technician can wear the headset on the standard human ear model in the manner shown in fig. 5, thereby performing the above-described measurement.
In the process of using the earphone, an incorrect or unstable wearing manner may cause a poor fit between the earphone housing and the ear, so that the second audio spectrum corresponding to the earphone playback heard by the user is inconsistent with the in-ear sound spectrum. The second audio spectrum may also be inconsistent with the in-ear sound spectrum because the shape of the user's ear canal chamber differs from that of the standard human ear. Both of these conditions negatively impact the sound quality of the earphone.
In order to solve the above problem, the sound played by the earphone can be compensated according to the difference between the second audio spectrum and the in-ear sound spectrum.
Referring to fig. 6, fig. 6 shows a second audio spectrum comparison schematic diagram provided by the embodiment of the present invention. As shown in fig. 6, the thick solid line d represents the spectrum curve of residual environmental sound collected by the in-ear microphone in the ear canal cavity when the standard human ear model wears the earphone in the correct wearing manner in an environment corresponding to a certain preset scene category. The dashed line e represents the second audio spectrum of residual ambient sound collected by the in-ear microphone in the user's ear canal cavity when the user does not wear the earphone correctly in the environment corresponding to the same preset scene category. It can be seen that in solid-line box A in fig. 6, the second audio spectrum has a lower intensity in the low frequency portion, which is generally caused by a poor fit between the earphone and the ear; in solid-line box B in fig. 6, the second audio spectrum has a lower intensity in the high frequency portion, which is generally caused by the difference between the user's ear canal cavity and that of the standard human ear.
Optionally, step 206 may include:
in step 2061, an in-ear sound spectrum corresponding to the external scene category is acquired.
Each preset scene category, its corresponding in-ear sound spectrum, and the correspondence between them are built into the memory of the earphone when it leaves the factory.
Therefore, after the external scene category is determined, the in-ear sound spectrum corresponding to the external scene category can be read from the memory according to this correspondence between preset scene categories and in-ear sound spectra.
Substep 2062, comparing the second audio spectrum with the in-ear sound spectrum to determine a sound intensity difference corresponding to each sound frequency segment.
When the earphone is not worn correctly and/or the user's ear canal cavity is non-standard, the sound intensity of some or all frequency bands in the sound played by the earphone can be adjusted so that the sound heard by the user matches what a standard human ear would hear when the earphone is worn correctly, achieving better sound quality. To this end, the difference in sound intensity in each frequency band between the second audio spectrum and the in-ear sound spectrum can be determined, and the sound output of the earphone compensated according to the difference.
Specifically, the response frequency range of the earphone may be divided into a plurality of sound frequency bands, and the sound intensity of each sound frequency band in the second audio spectrum is subtracted from the sound intensity of the same band in the in-ear sound spectrum to determine the compensation value corresponding to each band. It is easy to understand that the finer the division into frequency bands, the better the final compensation effect, but the more computing resources are consumed; the number of frequency bands can therefore be chosen flexibly by a technician according to actual requirements.
Substep 2063, determining the second gain parameter based on the sound intensity differences corresponding to the respective sound frequency bands.
After the sound intensity difference corresponding to each sound frequency band is determined, it can be directly taken as the compensation value for the corresponding band, and the second gain parameter is constructed from the compensation values of all bands.
For example, when the user is in an airport environment, if, after calculation, the frequency band A in the second audio spectrum differs by 10 dB from the frequency band A in the in-ear sound spectrum corresponding to the airport scene category, then 10 dB can be determined as the compensation value corresponding to the frequency band A in the second gain parameter.
Furthermore, the compensation value may be a proportional value rather than a specific dB value. In that case, the ratio between the intensity of each frequency band in the second audio spectrum and that of the same frequency band in the in-ear sound spectrum may be determined from the sound intensity difference corresponding to each band, and the second gain parameter constructed from these ratios.
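Sub-steps 2062 and 2063 can be sketched as a per-band subtraction, assuming both spectra are lists of dB intensities with identical band order (the band count and values are illustrative, not from the patent):

```python
def second_gain_parameter(in_ear_spectrum_db, second_spectrum_db):
    """Per-band compensation values: how far the measured second audio
    spectrum falls short of the reference in-ear sound spectrum, in dB.
    """
    return [ref - measured
            for ref, measured in zip(in_ear_spectrum_db, second_spectrum_db)]

# Example: band A of the second spectrum is 10 dB below the reference,
# so its compensation value is +10 dB (hypothetical three-band spectra).
reference = [-30.0, -42.0, -55.0]   # in-ear sound spectrum for the scene
measured  = [-40.0, -42.0, -58.0]   # second audio spectrum from the user's ear
gain2 = second_gain_parameter(reference, measured)   # [10.0, 0.0, 3.0]
```

Equivalently, the compensation could be expressed as linear ratios (10**(diff/20) per band), matching the proportional variant described above.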
Step 207, determining a target gain parameter according to the first gain parameter and the second gain parameter.
In order to give the user better sound quality, the first gain parameter and the second gain parameter can be added to obtain the target gain parameter, and the audio output signal compensated with the target gain parameter. This reduces both the sound quality degradation caused by environmental noise and that caused by wearing problems of the earphone, greatly improving the sound quality the user experiences when using the earphone.
And step 208, compensating the audio output signal according to the target gain parameter, and playing the compensated audio output signal through a loudspeaker.
After the target gain parameter is determined, when the speaker of the earphone plays sound, the sound intensity of each frequency band in the audio output signal is compensated according to the compensation amplitude corresponding to that band in the target gain parameter; an analog signal for driving the speaker is then generated from the compensated audio output signal and sent to the speaker, so that the speaker plays the sound.
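Steps 207 and 208 can be sketched as follows, assuming the audio output signal has already been split into per-band sample lists (the band layout and function name are illustrative, not from the patent):

```python
def apply_target_gain(band_signals, first_gain_db, second_gain_db):
    """Form the target gain (first + second gain parameters, in dB) and
    scale each band of the audio output signal accordingly.
    """
    target_db = [g1 + g2 for g1, g2 in zip(first_gain_db, second_gain_db)]
    compensated = []
    for samples, db in zip(band_signals, target_db):
        scale = 10.0 ** (db / 20.0)   # dB -> linear amplitude factor
        compensated.append([s * scale for s in samples])
    return target_db, compensated

# Example: +6 dB roughly doubles amplitude; -6 dB roughly halves it
target, out = apply_target_gain(
    [[1.0, -1.0], [0.5, 0.5]],   # two hypothetical bands of the output signal
    [6.0, 0.0],                  # first gain parameter (scene compensation)
    [0.0, -6.0],                 # second gain parameter (fit compensation)
)
```

The compensated per-band signals would then be summed and sent through the digital-to-analog conversion module to drive the speaker diaphragm.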
In summary, the audio signal processing method provided by the embodiment of the invention includes the steps of obtaining a first sound frequency spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal. According to the invention, the scene type of the user can be identified according to the environment sound of the user, the first gain parameter is determined according to the scene type, the sound played by the loudspeaker is compensated through the first gain parameter, the influence of environment noise on the sound quality can be reduced, and the listening experience of the user is improved.
Fig. 7 is a block diagram of an audio signal processing apparatus according to an embodiment of the present invention, and as shown in fig. 7, the audio signal processing apparatus includes:
an acquisition module 301, configured to acquire a first sound spectrum of an external environment;
a scene category module 302, configured to determine an external scene category corresponding to the first sound spectrum;
a parameter module 303, configured to determine a first gain parameter of the audio output signal according to the external scene category;
The compensation module 304 is configured to compensate the audio output signal according to the first gain parameter, and play the compensated audio output signal, so as to reduce a masking effect of the audio output signal caused by environmental noise of the external environment.
Optionally, the apparatus further includes:
The first acquisition submodule is used for acquiring first audio of the external environment;
And the first spectrum sub-module is used for determining the first sound spectrum according to the corresponding relation between the sound frequency band and the sound intensity in the first audio.
Optionally, the scene category module includes:
The out-of-ear spectrum sub-module is used for obtaining at least one preset out-of-ear sound spectrum; wherein there is a one-to-one correspondence between the at least one out-of-ear sound spectrum and a preset scene category;
a similarity sub-module, configured to determine a spectral similarity of the at least one out-of-ear sound spectrum to the first sound spectrum;
and the scene category sub-module is used for determining, as the external scene category, the preset scene category corresponding to an out-of-ear sound spectrum whose spectral similarity is greater than or equal to the preset similarity.
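The matching performed by the similarity and scene category sub-modules above can be sketched as follows. The patent does not specify the similarity measure; cosine similarity over per-band intensities is one plausible choice, and the scene names, spectra, and threshold here are illustrative:

```python
import math

def spectral_similarity(spec_a, spec_b):
    """Cosine similarity between two per-band intensity spectra."""
    dot = sum(a * b for a, b in zip(spec_a, spec_b))
    norm = (math.sqrt(sum(a * a for a in spec_a))
            * math.sqrt(sum(b * b for b in spec_b)))
    return dot / norm if norm else 0.0

def match_scene(first_spectrum, preset_spectra, threshold=0.9):
    """Return the preset scene category whose out-of-ear spectrum is most
    similar to the first sound spectrum, if the similarity meets the
    preset threshold; otherwise None.
    """
    best_category, best_spectrum = max(
        preset_spectra.items(),
        key=lambda kv: spectral_similarity(first_spectrum, kv[1]))
    if spectral_similarity(first_spectrum, best_spectrum) >= threshold:
        return best_category
    return None

presets = {"airport": [50.0, 40.0, 30.0], "street": [30.0, 45.0, 35.0]}
scene = match_scene([48.0, 41.0, 29.0], presets)   # closest to "airport"
```

A production implementation might instead use a trained classifier, but the threshold-on-similarity structure matches the sub-modules described above.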
Optionally, the parameter module includes:
The preset gain parameter submodule is used for inquiring the one-to-one correspondence between the preset scene category and the preset gain parameter;
And the first gain parameter submodule is used for determining a preset gain parameter corresponding to the external scene category as the first gain parameter.
Optionally, the first gain parameter submodule includes:
the sound effect acquisition sub-module is used for acquiring a sound effect mode;
A reference gain parameter sub-module, configured to determine a reference gain parameter according to the sound effect mode;
And the first gain parameter calculation sub-module is used for determining the first gain parameter according to the preset gain parameter corresponding to the external scene category and the reference gain parameter.
Optionally, the compensation module includes:
the second spectrum acquisition sub-module is used for acquiring a second audio spectrum of the in-ear environment;
the second gain parameter sub-module is used for determining a second gain parameter according to the second audio spectrum and the in-ear sound spectrum corresponding to the external scene category;
A target gain parameter sub-module, configured to determine a target gain parameter according to the first gain parameter and the second gain parameter;
And the compensation sub-module is used for compensating the audio output signal according to the target gain parameter, and playing the compensated audio output signal through a loudspeaker.
Optionally, the second spectrum acquisition submodule includes:
the second audio sub-module is used for acquiring second audio of the in-ear environment through the in-ear microphone;
And the second spectrum determining sub-module is used for determining the second audio spectrum according to the corresponding relation between the sound frequency band and the sound intensity in the second audio.
Optionally, the second gain parameter submodule includes:
the in-ear spectrum sub-module is used for acquiring an in-ear sound spectrum corresponding to the external scene category;
the intensity difference sub-module is used for comparing the second audio spectrum with the in-ear sound spectrum and determining the sound intensity difference corresponding to each sound frequency band;
And the second gain parameter determining submodule is used for determining the second gain parameter according to the sound intensity difference value corresponding to each sound frequency band.
In summary, the apparatus provided in the embodiment of the present invention includes obtaining a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal. According to the invention, the scene type of the user can be identified according to the environment sound of the user, the first gain parameter is determined according to the scene type, the sound played by the loudspeaker is compensated through the first gain parameter, the influence of environment noise on the sound quality can be reduced, and the listening experience of the user is improved.
The embodiment of the application also provides a headset, and referring to fig. 8, fig. 8 shows a hardware structure block diagram of the headset provided by the embodiment of the application.
As shown in fig. 8, the headset may include a processor 40, a digital-to-analog conversion module 41, a speaker 42, an in-ear microphone 43, and an out-of-ear microphone 44. Wherein the in-ear microphone 43 is operable to pick up sound from inside the ear canal of the user when the user wears the earphone; the out-of-ear microphone 44 may be used to pick up sound from outside the user's ear canal while the user is wearing the headset, i.e., may pick up sound from the environment in which the user is currently located; the digital-to-analog conversion module 41 may receive the analog audio signals collected by the in-ear microphone 43 and the out-of-ear microphone 44, and convert the received analog audio signals into digital audio signals, and send the digital audio signals to the processor 40; the processor 40 may send the digital audio signal to the digital-to-analog conversion module 41, where the digital-to-analog conversion module 41 converts the digital audio signal to an analog audio signal and drives the diaphragm of the speaker 42 to vibrate so that the speaker 42 emits sound.
A processor 40 for performing the following process:
Acquiring a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
The embodiment of the application also provides an electronic device, and referring to fig. 9, fig. 9 shows a schematic hardware structure of the electronic device.
The electronic device 500 includes, but is not limited to: radio frequency unit 501, network module 502, audio output unit 503, input unit 504, sensor 505, display unit 506, user input unit 507, interface unit 508, memory 509, processor 510, and power source 511. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than illustrated, or may combine certain components, or may have a different arrangement of components. In the embodiment of the invention, the electronic equipment includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
A processor 510 for performing the following process:
Acquiring a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
In the embodiment of the invention, a first sound spectrum of an external environment is acquired; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal. According to the invention, the scene type of the user can be identified according to the environment sound of the user, the first gain parameter is determined according to the scene type, the sound played by the loudspeaker is compensated through the first gain parameter, the influence of environment noise on the sound quality can be reduced, and the listening experience of the user is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used to receive and send information or signals during a call, specifically, receive downlink data from a base station, and then process the downlink data with the processor 510; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 502, such as helping the user to send and receive e-mail, browse web pages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 500. The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used for receiving an audio or video signal. The input unit 504 may include a graphics processor (Graphics Processing Unit, GPU) 5041 and a microphone 5042, the graphics processor 5041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and process it into audio data. In phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501.
The electronic device 500 also includes at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or the backlight when the electronic device 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for recognizing the gesture of the electronic equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 505 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 506 is used to display information input by a user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on touch panel 5071 or thereabout using any suitable object or accessory such as a finger, stylus, etc.). Touch panel 5071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, physical keyboards, function keys (e.g., volume control keys, switch keys, etc.), trackballs, mice, joysticks, and so forth, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 510 to determine a type of touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of touch event. Although in fig. 9, the touch panel 5071 and the display panel 5061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and an external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509, and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 510.
The electronic device 500 may also include a power supply 511 (e.g., a battery) for powering the various components, and preferably the power supply 511 may be logically connected to the processor 510 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 500 includes some functional modules, which are not shown, and will not be described herein.
Preferably, the embodiment of the present invention further provides a mobile terminal, which includes a processor 510, a memory 509, and a computer program stored in the memory 509 and capable of running on the processor 510, where the computer program when executed by the processor 510 implements each process of the foregoing embodiment of the audio signal processing method, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the processes of the above-mentioned embodiments of the audio signal processing method, and can achieve the same technical effects, so that repetition is avoided, and no further description is given here. The computer readable storage medium is, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.

Claims (10)

1. A method of audio signal processing, the method comprising:
acquiring a first sound spectrum of an external environment;
determining an external scene category corresponding to the first sound spectrum;
determining a first gain parameter of an audio output signal according to the external scene category; and
compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal, to reduce a masking effect on the audio output signal caused by environmental noise of the external environment;
wherein the compensating the audio output signal according to the first gain parameter and playing the compensated audio output signal to reduce the masking effect on the audio output signal caused by the environmental noise of the external environment comprises:
acquiring a second sound spectrum of an in-ear environment, wherein the second sound spectrum is the sound spectrum corresponding to residual environmental sound in the ear canal cavity of a user when an earphone is not worn correctly;
determining a second gain parameter according to the second sound spectrum and an in-ear sound spectrum corresponding to the external scene category, wherein the in-ear sound spectrum is the sound spectrum, received from the ear canal of the user by an in-ear microphone of the earphone, corresponding to the environmental noise of the external environment;
determining a target gain parameter according to the first gain parameter and the second gain parameter; and
compensating the audio output signal according to the target gain parameter, and playing the compensated audio output signal;
wherein the playing the compensated audio output signal comprises:
generating, according to the compensated audio output signal, an analog signal for driving a speaker; and
sending the analog signal to the speaker so that the speaker plays sound.
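The gain pipeline recited in claim 1 (first gain from the scene, second gain from the in-ear residual, target gain from both, then compensation) can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the scene names, band names, dB values, and the additive combination of gains are all assumptions; real earphone firmware would operate on FFT frames.

```python
# Illustrative sketch of the claim-1 pipeline. All names and the
# per-band dB representation are hypothetical, not from the patent.

SCENE_GAINS_DB = {            # assumed presets: scene -> per-band gain (dB)
    "subway": {"low": 6.0, "mid": 3.0, "high": 1.0},
    "office": {"low": 1.0, "mid": 0.5, "high": 0.0},
}

def first_gain(scene: str) -> dict:
    """First gain parameter, looked up from the external scene category."""
    return SCENE_GAINS_DB[scene]

def second_gain(second_spectrum: dict, in_ear_spectrum: dict) -> dict:
    """Second gain parameter: per-band difference between the measured
    in-ear residual sound and the scene's expected in-ear spectrum."""
    return {b: second_spectrum[b] - in_ear_spectrum[b] for b in second_spectrum}

def target_gain(g1: dict, g2: dict) -> dict:
    """Combine the two gain parameters (summing dB values is one choice)."""
    return {b: g1[b] + g2[b] for b in g1}

def compensate(output_db: dict, gain: dict) -> dict:
    """Apply the target gain to the audio output signal, band by band."""
    return {b: output_db[b] + gain[b] for b in output_db}

g1 = first_gain("subway")
g2 = second_gain({"low": 40.0, "mid": 30.0, "high": 20.0},
                 {"low": 38.0, "mid": 31.0, "high": 20.0})
gt = target_gain(g1, g2)
out = compensate({"low": 60.0, "mid": 60.0, "high": 60.0}, gt)
```

Summing gains in dB is just one plausible way to "determine a target gain parameter according to the first gain parameter and the second gain parameter"; the claim does not fix the combination rule.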
2. The method of claim 1, wherein the acquiring the first sound spectrum of the external environment comprises:
acquiring a first audio of the external environment; and
determining the first sound spectrum according to a correspondence between sound frequency bands and sound intensities in the first audio.
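Claim 2's "correspondence between sound frequency bands and sound intensities" amounts to mapping each band of the captured audio to an intensity. A minimal sketch, with assumed band edges and a naive DFT for clarity (real firmware would use an FFT):

```python
import math

def band_intensities(samples, sample_rate, bands):
    """Map each named frequency band to an intensity in dB.
    `bands` maps a band name to assumed (low_hz, high_hz) edges."""
    n = len(samples)
    out = {}
    for name, (lo, hi) in bands.items():
        power = 0.0
        for k in range(n // 2):
            freq = k * sample_rate / n
            if lo <= freq < hi:
                # naive DFT bin; O(n) per bin, fine for a small sketch
                re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
                im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
                power += (re * re + im * im) / (n * n)
        out[name] = 10.0 * math.log10(power + 1e-12)
    return out

# A 440 Hz tone should put its energy in the assumed "mid" band.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(256)]
spec = band_intensities(tone, sr,
                        {"low": (0, 300), "mid": (300, 2000), "high": (2000, 4000)})
```

The band edges and the dB scaling are illustrative assumptions; the patent leaves the band partition unspecified.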
3. The method of claim 1, wherein the determining the external scene category corresponding to the first sound spectrum comprises:
acquiring at least one preset out-of-ear sound spectrum, wherein the at least one out-of-ear sound spectrum is in one-to-one correspondence with preset scene categories;
determining a spectral similarity between each of the at least one out-of-ear sound spectrum and the first sound spectrum; and
determining, as the external scene category, the preset scene category corresponding to an out-of-ear sound spectrum whose spectral similarity is greater than or equal to a preset similarity.
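The scene matching in claim 3 can be sketched as a similarity search over preset spectra. The patent does not fix the similarity metric or the threshold; cosine similarity over per-band intensities and a 0.95 threshold are assumptions here:

```python
import math

def spectral_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two band->intensity spectra
    (one plausible metric; the claim leaves the measure open)."""
    bands = sorted(a)
    dot = sum(a[k] * b[k] for k in bands)
    na = math.sqrt(sum(a[k] ** 2 for k in bands))
    nb = math.sqrt(sum(b[k] ** 2 for k in bands))
    return dot / (na * nb)

def match_scene(first_spectrum, preset_spectra, threshold=0.95):
    """Return the preset scene whose out-of-ear spectrum is most similar
    to the first sound spectrum, provided it meets the preset similarity."""
    best, best_sim = None, threshold
    for scene, spectrum in preset_spectra.items():
        sim = spectral_similarity(first_spectrum, spectrum)
        if sim >= best_sim:
            best, best_sim = scene, sim
    return best

presets = {  # assumed out-of-ear spectra (dB per band) for two scenes
    "subway": {"low": 70.0, "mid": 55.0, "high": 40.0},
    "office": {"low": 45.0, "mid": 40.0, "high": 35.0},
}
scene = match_scene({"low": 68.0, "mid": 56.0, "high": 41.0}, presets)
```

Returning the best match above the threshold (rather than the first match) is a design choice the claim permits but does not require.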
4. The method according to claim 3, wherein the determining the first gain parameter of the audio output signal according to the external scene category comprises:
querying a one-to-one correspondence between preset scene categories and preset gain parameters; and
determining the preset gain parameter corresponding to the external scene category as the first gain parameter.
5. The method of claim 4, wherein the determining the preset gain parameter corresponding to the external scene category as the first gain parameter comprises:
acquiring a sound effect mode;
determining a reference gain parameter according to the sound effect mode; and
determining the first gain parameter according to the preset gain parameter corresponding to the external scene category and the reference gain parameter.
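Claim 5 blends the scene's preset gain with a reference gain derived from the sound effect mode. A minimal sketch; the mode names, their reference gains, and the additive per-band combination are assumptions for illustration:

```python
# Assumed reference gains (dB per band) for two hypothetical modes.
MODE_REFERENCE_GAIN_DB = {
    "standard": {"low": 0.0, "mid": 0.0, "high": 0.0},
    "bass_boost": {"low": 4.0, "mid": 0.0, "high": -1.0},
}

def first_gain_with_mode(scene_gain: dict, mode: str) -> dict:
    """Determine the first gain parameter from the scene's preset gain
    and the reference gain of the current sound effect mode."""
    ref = MODE_REFERENCE_GAIN_DB[mode]
    return {band: scene_gain[band] + ref[band] for band in scene_gain}

g = first_gain_with_mode({"low": 6.0, "mid": 3.0, "high": 1.0}, "bass_boost")
```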
6. The method of claim 2, wherein the acquiring the second sound spectrum of the in-ear environment comprises:
acquiring a second audio of the in-ear environment; and
determining the second sound spectrum according to a correspondence between sound frequency bands and sound intensities in the second audio.
7. The method of claim 2, wherein the determining the second gain parameter according to the second sound spectrum and the in-ear sound spectrum corresponding to the external scene category comprises:
acquiring the in-ear sound spectrum corresponding to the external scene category;
comparing the second sound spectrum with the in-ear sound spectrum to determine a sound intensity difference for each sound frequency band; and
determining the second gain parameter according to the sound intensity difference for each sound frequency band.
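The per-band comparison in claim 7 can be sketched as follows. Clamping the gain to non-negative values (boosting only where the residual in-ear noise exceeds the scene's expected spectrum) is an assumption, not something the claim specifies:

```python
def second_gain_from_difference(second_spectrum: dict, in_ear_spectrum: dict) -> dict:
    """Second gain parameter from the per-band intensity difference
    between the measured in-ear residual sound and the in-ear spectrum
    expected for the current scene (values in dB)."""
    gain = {}
    for band in second_spectrum:
        diff = second_spectrum[band] - in_ear_spectrum[band]
        gain[band] = max(diff, 0.0)  # assumed: boost only, never attenuate
    return gain

g2 = second_gain_from_difference(
    {"low": 42.0, "mid": 30.0, "high": 22.0},  # measured in the ear canal
    {"low": 38.0, "mid": 31.0, "high": 22.0},  # expected for the scene
)
```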
8. An audio signal processing apparatus, the apparatus comprising:
an acquisition module, configured to acquire a first sound spectrum of an external environment;
a scene category module, configured to determine an external scene category corresponding to the first sound spectrum;
a parameter module, configured to determine a first gain parameter of an audio output signal according to the external scene category; and
a compensation module, configured to compensate the audio output signal with the first gain parameter and play the compensated audio output signal through a speaker, to reduce a masking effect on the audio output signal caused by environmental noise of the external environment;
wherein the compensation module is specifically configured to: acquire a second sound spectrum of an in-ear environment; determine a second gain parameter according to the second sound spectrum and an in-ear sound spectrum corresponding to the external scene category; determine a target gain parameter according to the first gain parameter and the second gain parameter; and compensate the audio output signal according to the target gain parameter and play the compensated audio output signal, wherein playing the compensated audio output signal comprises generating, according to the compensated audio output signal, an analog signal for driving the speaker, and sending the analog signal to the speaker so that the speaker plays sound;
wherein the second sound spectrum is the sound spectrum corresponding to residual environmental sound in the ear canal cavity of a user when an earphone is not worn correctly, and the in-ear sound spectrum is the sound spectrum, received from the ear canal of the user by an in-ear microphone of the earphone, corresponding to the environmental noise of the external environment.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the audio signal processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the audio signal processing method according to any one of claims 1 to 7.
CN202111487952.7A 2021-12-06 2021-12-06 Audio signal processing method and device and electronic equipment Active CN114125639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111487952.7A CN114125639B (en) 2021-12-06 2021-12-06 Audio signal processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN114125639A (en) 2022-03-01
CN114125639B (en) 2024-08-16

Family

Family ID: 80367679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111487952.7A Active CN114125639B (en) 2021-12-06 2021-12-06 Audio signal processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114125639B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117813652A (en) * 2022-05-10 2024-04-02 北京小米移动软件有限公司 Audio signal encoding method, device, electronic equipment and storage medium
CN116367063B * 2023-04-23 2023-11-14 郑州大学 Embedded bone conduction hearing aid device and system
CN117194885B (en) * 2023-09-06 2024-05-14 东莞市同和光电科技有限公司 Optical interference suppression method for infrared receiving chip and infrared receiving chip

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9020160B2 (en) * 2012-11-02 2015-04-28 Bose Corporation Reducing occlusion effect in ANR headphones
DE102016204448A1 * 2015-03-31 2016-10-06 Sony Corporation Method and device
WO2017101067A1 (en) * 2015-12-17 2017-06-22 华为技术有限公司 Ambient sound processing method and device
CN106131751B (en) * 2016-08-31 2019-09-13 深圳市麦吉通科技有限公司 Audio-frequency processing method and audio output device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant