CN114125639A - Audio signal processing method and device and electronic equipment - Google Patents
- Publication number
- CN114125639A (application CN202111487952.7A)
- Authority
- CN
- China
- Prior art keywords
- sound
- spectrum
- gain parameter
- audio
- ear
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor; Earphones; Monophonic headphones
- H04R1/1083—Reduction of ambient noise
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Circuit For Audible Band Transducer (AREA)
- Soundproofing, Sound Blocking, And Sound Damping (AREA)
Abstract
The invention provides an audio signal processing method and apparatus and an electronic device. The audio signal processing method includes: acquiring a first sound spectrum of the external environment; determining the external scene category corresponding to the first sound spectrum; determining, according to the external scene category, a first gain parameter to apply to an audio output signal; and compensating the audio output signal with the first gain parameter so that a loudspeaker plays the compensated audio output signal. By identifying the user's scene category from the ambient sound, determining the first gain parameter from that category, and compensating the sound played by the loudspeaker with the first gain parameter, the method reduces the influence of environmental noise on sound quality and improves the user's listening experience.
Description
Technical Field
The embodiment of the invention relates to the technical field of audio playing, in particular to an audio signal processing method and device and electronic equipment.
Background
Earphones are a common electronic device that people often wear in daily life to listen to audio such as music. Because the wearing environment varies, the sound effect a user hears differs from one wearing environment to another.
In the related art, to make the user's listening experience more consistent across wearing environments, a microphone for capturing ambient sound is usually added to the headset; the ambient sound is acquired by the microphone, and a cancelling sound opposite in phase to the ambient sound is then generated from it.
However, this approach superimposes the cancellation sound on the music played by the earphone, which inevitably cancels part of the music content and degrades the earphone's sound quality.
Disclosure of Invention
Embodiments of the invention provide an audio signal processing method and apparatus, an earphone, and an electronic device, aiming to solve the poor sound quality of audio playback devices in the prior art.
In a first aspect, an embodiment of the present invention provides an audio signal processing method, where the method includes:
acquiring a first sound spectrum of an external environment;
determining an external scene category corresponding to the first sound spectrum;
determining a first gain parameter of an audio output signal according to the external scene category;
and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
In a second aspect, an embodiment of the present invention provides an audio signal processing apparatus, including:
the acquisition module is used for acquiring a first sound spectrum of an external environment;
a scene category module to determine an external scene category corresponding to the first sound spectrum;
a parameter module for determining a first gain parameter of an audio output signal according to the external scene category;
and the compensation module is used for compensating the audio output signal according to the first gain parameter and playing the compensated audio output signal so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the audio signal processing method provided in the present invention when executing the program.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the audio signal processing method according to the present invention.
In the embodiment of the invention, the method includes: acquiring a first sound spectrum of the external environment; determining the external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter and playing the compensated audio output signal, so as to reduce the masking effect of the environmental noise of the external environment on the audio output signal. Because the user's scene category is identified from the ambient sound and the first gain parameter is determined from that category, compensating the played sound with the first gain parameter reduces the influence of environmental noise on sound quality and improves the user's listening experience.
Drawings
Fig. 1 is a flowchart illustrating steps of an audio signal processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a sound spectrum matching according to an embodiment of the present invention;
fig. 3 is a diagram illustrating a comparison of the spectrum of an extra-aural sound according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating steps of another audio signal processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a standard human ear model with a headset according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a second audio spectrum comparison provided by the embodiment of the present invention;
fig. 7 is a block diagram of an audio signal processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an earphone according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a flowchart illustrating the steps of an audio signal processing method applied to an electronic device according to an embodiment of the present invention. The method may include the following steps.
Environmental sound collected in an ordinary environment generally contains components at many frequencies, and the intensity of each frequency component may differ from sound to sound; for example, thunder may contain mostly low-frequency components, while a piano may produce more high-frequency components.
The external sound of the external environment where the user is located can be collected through the external microphone, and the first sound spectrum of the external environment can be obtained after the external sound is processed by the audio processing module.
A sound spectrum is a curve that reflects the relationship between the individual frequencies of a wave (e.g., sound or an electromagnetic wave) and their vibrational energy. Specifically, a sound spectrum can be expressed as the correspondence between each frequency in a sound and its level in decibels; the decibel is a ratio, a dimensionless quantity with no unit, and under the same decibel reference the decibel values of sounds of different intensities are directly comparable.
Because the frequency composition of ambient sound differs from one environment to another, the external scene category corresponding to the first sound spectrum can be determined from features of the acquired first sound spectrum. The external scene category is the category of scene in which the user is wearing the earphone. For example, when the user uses the earphone on an airplane, the acquired first sound spectrum is the spectrum of the sound inside the cabin, and the corresponding external scene category is the airplane-cabin category; when the user uses the earphone in a shopping mall, the acquired first sound spectrum is the spectrum of the sound inside the mall, and the corresponding external scene category is the mall category.
Specifically, the spectrum features corresponding to at least one preset scene category may be stored in the memory in advance. After the first sound spectrum is acquired, its features are compared with the spectrum features of every preset scene category, and the preset scene category with the highest matching degree is determined, according to the comparison result, to be the external scene category.
Referring to fig. 2, fig. 2 shows a sound spectrum matching schematic diagram provided by an embodiment of the present invention. As shown in fig. 2, the thick solid line f is the spectrum feature corresponding to preset scene category A, the dotted line g is the first sound spectrum obtained in the current external environment, and the thin solid line h is the spectrum feature corresponding to preset scene category B.
When the human ear receives several sound stimuli of the same frequency, it cannot fully receive the information of all of them because of the Masking Effect of human hearing. Generally, when a user listens to music played by an earphone, the user also hears the sound of the surrounding environment, and a sound at a given frequency in that environment reduces how well the user perceives audio content at the same frequency in the music.
Referring to fig. 3, fig. 3 shows a comparison diagram of out-of-ear sound spectra according to an embodiment of the present invention. As shown in fig. 3, the abscissa represents sound frequency in hertz (Hz) and the ordinate represents sound intensity in decibels (dB); the thick solid line a may represent the spectrum curve in a standard environment (e.g., a quiet indoor environment), the dotted line b the spectrum curve in an airport environment, and the thin solid line c the spectrum curve in a cafe environment.
The memory may further store a preset gain parameter for each preset scene category, and after the external scene category is determined, the preset gain parameter corresponding to it can be fetched from the memory as the first gain parameter. The first gain parameter may be used to compensate the sound output by the earphone speaker: it includes a compensation amplitude for at least one audio frequency band, and the corresponding band played by the speaker can be compensated according to that amplitude.
Therefore, after the first gain parameter is determined, the sound played by the earphone can be compensated according to the first gain parameter, so that the sound at each frequency is either not affected by the masking effect or affected to the same degree, which ensures that the user's perception of the music under the current environmental noise is consistent with the perception of the same music in a quiet environment.
The gain parameter is the degree of amplification or attenuation the earphone applies to each frequency band of the sound to be played. One gain parameter may include every frequency band within the frequency response range of the headset and the degree of amplification or attenuation for each band. It should be noted that a gain amplitude may be a ratio or a specific decibel value.
For example, if the frequency response range of an earphone is 100 Hz to 10,000 Hz, the gain parameter may include a frequency band A of [100 Hz, 5000 Hz] with a gain of 10 dB and a frequency band B of (5000 Hz, 10,000 Hz] with a gain of -10 dB. When the earphone plays sound with this gain parameter, the sound intensity in [100 Hz, 5000 Hz] is increased by 10 dB and the sound intensity in (5000 Hz, 10,000 Hz] is decreased by 10 dB.
It should be noted that the width of each frequency band in the gain parameter can be adjusted flexibly according to actual requirements: narrower bands give more precise gain, while wider bands consume fewer computing resources. For example, the 100 Hz to 10,000 Hz frequency response range may be divided in 100 Hz steps to obtain 99 frequency bands.
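The banded gain scheme described above can be sketched in a few lines of Python. This is only an illustrative sketch, not code from the patent: the band edges and gain values copy the example in the text, and every name in it (`GAIN_PARAMETER`, `apply_gain`) is invented here.

```python
import math

# Bands follow the example above: +10 dB over [100 Hz, 5000 Hz],
# -10 dB over (5000 Hz, 10000 Hz]. The first matching band wins,
# which places 5000 Hz in band A, as in the text.
GAIN_PARAMETER = [((100, 5000), 10.0), ((5000, 10000), -10.0)]

def db_to_ratio(db):
    """Convert a dB gain to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20.0)

def apply_gain(spectrum, gain_param=GAIN_PARAMETER):
    """Scale each (frequency -> amplitude) entry by the gain of its band.

    Frequencies outside every band are passed through unchanged.
    """
    out = {}
    for freq, amp in spectrum.items():
        ratio = 1.0
        for (lo, hi), db in gain_param:
            if lo <= freq <= hi:
                ratio = db_to_ratio(db)
                break
        out[freq] = amp * ratio
    return out
```

With this sketch, a 1 kHz component is amplified by a factor of about 3.16 (+10 dB) while an 8 kHz component is attenuated to about 0.316 (-10 dB).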
Step 104, compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
After the corresponding first gain parameter is determined from the first sound spectrum of the current external environment, the audio processing module compensates the sound intensity of each frequency band in the audio output signal when the earphone speaker plays sound, generates an analog signal for driving the speaker from the compensated audio output signal, and sends the analog signal to the speaker for playback.
While the user is using the earphone, the first sound spectrum of the external environment may be re-acquired at preset time intervals. If the external scene category corresponding to the first sound spectrum is detected to have changed (for example, the user boards an airplane from the terminal hall of an airport, and the external scene category changes from airport to airplane), the preset gain parameter corresponding to the new external scene category can be fetched and the first gain parameter re-determined from it, so that the sound output by the earphone actively adapts to changes in the external scene category.
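The periodic re-detection described above reduces to a small state update: keep the current scene category, and swap in the new preset gain only when the detected category changes. A minimal sketch, with all names (`refresh_gain`, the `state` dict) invented for illustration; the patent does not prescribe this structure:

```python
def refresh_gain(state, detected_scene, preset_gains):
    """Re-determine the first gain parameter only when the detected external
    scene category has changed; return True if the gain was updated."""
    if detected_scene == state.get("scene"):
        return False  # same scene: keep the current first gain parameter
    state["scene"] = detected_scene
    state["gain"] = preset_gains[detected_scene]
    return True
```

A caller would invoke this once per sampling interval with the freshly classified scene category.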
To sum up, an audio signal processing method provided by the embodiment of the present invention includes obtaining a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal. According to the method and the device, the scene type of the user can be identified according to the environmental sound of the user, the first gain parameter is determined according to the scene type, the sound played by the loudspeaker is compensated through the first gain parameter, the influence of environmental noise on the sound quality can be reduced, and the listening experience of the user is improved.
Fig. 4 is a flowchart of steps of another audio signal processing method provided by an embodiment of the present invention, which is applied to an electronic device, and as shown in fig. 4, the method may include:
In step 201, a first sound spectrum of an external environment is obtained.
Optionally, step 201 may further include:
Sub-step 2011, obtaining a first audio of the external environment.
after the user links the earphone with the terminal or turns on a power switch of the earphone, the first audio of the current environment where the user is located can be acquired through the out-of-ear microphone. It should be noted that the headset may also include a wearing detection module, and start to acquire the first audio after detecting that the user wears the headset.
Sub-step 2012, determining the first sound spectrum according to the correspondence between the frequency bands and the sound intensities in the first audio.
After obtaining the first audio, the audio processing module may analyze it to obtain the intensity corresponding to each frequency in the first audio. The sound intensity is the volume of the sound at each frequency and may be represented by a decibel value or in any other way that expresses sound strength (e.g., sound pressure, amplitude, or loudness). By measuring the sound intensity of each frequency band in the first audio, the correspondence between each frequency band and its sound intensity can be determined.
Because the first audio contains sound over a continuous frequency range, for example 10 Hz to 20,000 Hz, the frequencies it contains are infinite and the sound intensity cannot be determined for all of them. Therefore, only preset frequency sampling points are sampled to obtain their corresponding sound intensities, and the first sound spectrum is constructed from the correspondence between the preset frequency sampling points and the sound intensities.
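Constructing the spectrum from preset frequency sampling points might look as follows. This sketch makes assumptions the patent does not state: the transform used (a direct single-bin DFT projection), the 16 kHz sample rate, and the 100 Hz frequency grid suggested by the 99-band example earlier in the text.

```python
import math

SAMPLE_RATE = 16_000                              # assumed capture rate (Hz)
PRESET_FREQS = [100 * k for k in range(1, 100)]   # assumed 100 Hz .. 9900 Hz grid

def intensity_db(samples, freq, sample_rate=SAMPLE_RATE):
    """Amplitude of one frequency component via a direct DFT projection,
    in dB relative to a unit-amplitude sinusoid."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(samples))
    mag = 2 * math.hypot(re, im) / n
    return 20 * math.log10(max(mag, 1e-12))  # floor avoids log(0)

def build_spectrum(samples, freqs=PRESET_FREQS):
    """First sound spectrum: preset frequency sampling point -> intensity (dB)."""
    return {f: intensity_db(samples, f) for f in freqs}
```

A pure 1 kHz tone of unit amplitude, measured over an integer number of cycles, comes out at roughly 0 dB, while bins far from the tone sit near the floor.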
Technicians can collect environmental sounds in environments corresponding to different scene categories and construct the out-of-ear sound spectrum corresponding to each scene category from the collected sounds. Before the earphone leaves the factory, at least one preset scene category, its out-of-ear sound spectrum, and the correspondence between the two can be stored in the earphone's memory.
Optionally, step 202 may further include:
Sub-step 2021, obtaining at least one preset out-of-ear sound spectrum, the at least one out-of-ear sound spectrum corresponding one-to-one with the preset scene categories.
Since the correspondence between preset scene categories and out-of-ear sound spectra is preset in the earphone, the out-of-ear sound spectra stored in the memory can be obtained once the first sound spectrum has been determined.
It should be noted that the out-of-ear sound spectra corresponding to all preset scene categories may be read at once; alternatively, to reduce the demand on working memory, the out-of-ear sound spectrum of one preset scene category may be read at a time, in a preset order, for a single comparison operation.
Sub-step 2022, determining the spectral similarity of the at least one out-of-ear sound spectrum to the first sound spectrum.
The spectral similarity between an out-of-ear sound spectrum and the first sound spectrum may be determined from the average magnitude of the differences in sound intensity at a plurality of frequency feature points.
Specifically, the sound intensities at a preset number of sampling frequencies may be taken from the first sound spectrum, and the sound intensities at the same sampling frequencies taken from the out-of-ear sound spectrum. The two sets of intensities are compared, the difference at each sampling frequency is calculated, and the average of the differences over all sampling frequencies is computed; the reciprocal of this average difference may then be used as the spectrum similarity, i.e., the smaller the average difference, the higher the similarity.
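The reciprocal-of-average-difference rule just described is nearly a one-liner in code. The function name and the small `eps` guard (which keeps identical spectra from dividing by zero) are illustrative additions, not from the patent:

```python
def spectrum_similarity(first, preset, eps=1e-6):
    """Similarity of two {frequency: intensity-in-dB} spectra: the reciprocal
    of the mean absolute intensity difference at shared sampling frequencies,
    so a smaller average difference gives a higher similarity."""
    common = sorted(set(first) & set(preset))
    if not common:
        raise ValueError("spectra share no sampling frequencies")
    avg_diff = sum(abs(first[f] - preset[f]) for f in common) / len(common)
    return 1.0 / (avg_diff + eps)
```

For example, two spectra that differ by 2 dB and 4 dB at two shared frequencies average a 3 dB difference, giving a similarity of about 1/3.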
Further, although the distribution of noise across frequencies tends to be similar within the same environment, the overall noise level in that environment may differ. Therefore, the spectrum similarity may also be determined from the difference between the ratio of the two sound intensities of at least one frequency pair in the first sound spectrum and the ratio of the two sound intensities of the same frequency pair in the out-of-ear sound spectrum of the preset scene category.
Sub-step 2023, determining as the external scene category the preset scene category whose out-of-ear sound spectrum has a spectrum similarity greater than or equal to the preset similarity.
Generally, if the environment the user is in matches a certain preset scene category, the similarity between the first sound spectrum and the out-of-ear sound spectrum of that preset scene category is greater than or equal to the preset similarity.
Further, environments of the same preset scene category also differ from one another. For example, the noise characteristics of different aircraft cabins are roughly the same, while the noise characteristics of different streets may differ considerably with time, location, traffic flow, pedestrian flow, and so on. Different preset similarities may therefore be set for different preset scene categories, based on field measurements of the noise variation within each category at different times and locations.
When no spectrum similarity reaches the preset similarity, the preset scene category whose spectrum similarity is closest to the preset similarity may be determined as the external scene category. When more than one spectrum similarity exceeds the preset similarity, the preset scene category whose spectrum similarity exceeds its preset similarity by the largest margin may be determined as the external scene category.
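Both rules in the preceding paragraph — fall back to the closest category when no similarity reaches its threshold, and pick the largest exceedance when several do — collapse into a single margin maximization. A sketch assuming per-category thresholds, as suggested above (all names invented here):

```python
def select_scene(similarities, thresholds):
    """Pick the preset scene category by the margin (similarity - threshold).

    If at least one category clears its threshold, the largest positive
    margin wins; if none does, the least negative margin is exactly the
    category whose similarity is closest to its preset similarity.
    """
    margins = {scene: sim - thresholds[scene]
               for scene, sim in similarities.items()}
    return max(margins, key=margins.get)
```

The same function therefore handles both the "none exceed" and the "several exceed" cases without branching.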
Technicians can set corresponding preset gain parameters for different preset scene categories, store the preset gain parameters in a memory of the earphone when the earphone leaves a factory, and simultaneously store the corresponding relation between the preset scene categories and the preset gain parameters.
Technicians can set up an acoustic laboratory and simulate, within it, the environmental noise corresponding to a preset scene category. They compare the measured spectrum curve of the earphone under that preset scene category with its curve under the standard scene category while adjusting the gain parameter until the two curves coincide, then record the adjusted gain value and set it as the preset gain parameter corresponding to that preset scene category.
After the external scene type is determined, the corresponding relationship between the preset scene type and the preset gain parameter can be inquired from the memory, and the preset gain parameter corresponding to the external scene type is obtained as the first gain parameter.
Optionally, step 204 may further include:
and a substep 2041 of obtaining a sound effect mode.
And a substep 2042 of determining a reference gain parameter according to the sound effect mode.
The earphone may have multiple sound effect modes (rock, pop, and the like), each corresponding to a different reference gain parameter, and a user may already have a reference gain parameter applied while using the earphone. The first gain parameter may therefore be obtained by superimposing the applied reference gain parameter on the preset gain parameter corresponding to the external scene category, so that countering the masking effect does not disturb the sound effect mode the user selected.
The reference gain parameter may be the earphone's factory-default gain parameter or the gain parameter corresponding to a sound effect mode. For example, in the default gain parameter the gain of every frequency band is 0, i.e., the earphone plays the received audio signal of each band at 100% intensity; in the gain parameter corresponding to the rock sound effect mode, the gain of the low frequency band is 20 dB and the gain of the high frequency band is -20 dB, which emphasizes the bass in music.
Sub-step 2043, determining the first gain parameter according to the preset gain parameter corresponding to the external scene category and the reference gain parameter.
The gain of each frequency band in the first gain parameter can be calculated from the gain of the corresponding band in the preset gain parameter of the external scene category and the gain of the same band in the reference gain parameter. It should be noted that a gain amplitude may be a ratio or a specific decibel value.
Accordingly, when gains are expressed as ratios, the gain of each frequency band in the first gain parameter may be calculated by multiplying the gain of that band in the preset gain parameter of the external scene category by the gain of the same band in the reference gain parameter. When gains are decibel values, the gain of each band in the first gain parameter may be calculated by adding the gain of that band in the preset gain parameter to the gain of the same band in the reference gain parameter.
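The two combination rules can be written down directly: dB gains add per band, ratio gains multiply per band. A sketch assuming both parameters use the same band labels (names invented for illustration):

```python
def combine_gains_db(preset_gain, reference_gain):
    """First gain parameter from dB-valued gains: per-band addition."""
    return {band: preset_gain[band] + reference_gain[band]
            for band in preset_gain}

def combine_gains_ratio(preset_gain, reference_gain):
    """First gain parameter from ratio-valued gains: per-band multiplication."""
    return {band: preset_gain[band] * reference_gain[band]
            for band in preset_gain}
```

The two forms are equivalent because decibels are logarithmic: adding dB values corresponds to multiplying the underlying linear ratios.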
After the first gain parameter is determined, the sound of each frequency band to be played by the earphone can be adjusted directly by the first gain parameter before playback, so that in the external scene category the earphone's output sounds the same as it would in the standard scene category, and the sound the user hears is not distorted by the noise of the external scene category.
In step 205, a second sound spectrum of the in-ear environment is obtained.
Optionally, step 205 may include:
substep 2051, acquiring a second audio of the in-ear environment by an in-ear microphone;
In the embodiment of the invention, the sound of the environment in the user's ear can be collected through the in-ear microphone, and the second sound spectrum of the in-ear environment can be obtained after the in-ear sound is processed by the audio processing module.
Sub-step 2052, determining the second sound spectrum according to the correspondence between the audio segments and the sound intensities in the second audio.
The manner of determining the second sound spectrum in this step is similar to the manner of determining the first sound spectrum in sub-step 2012, and is not described again here.
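The derivation of a sound spectrum from a microphone frame, used in both sub-step 2012 and this sub-step, can be sketched as follows (an illustrative NumPy sketch; the band edges, frame length, and dB floor are assumed values, not taken from the embodiment):

```python
import numpy as np

def sound_spectrum(samples, sample_rate, band_edges):
    """Estimate per-band sound intensity (dB) from one microphone frame.

    `band_edges` is a list of (low_hz, high_hz) tuples; the returned
    list holds one intensity value per frequency band.
    """
    window = np.hanning(len(samples))          # taper to reduce leakage
    mag = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    spectrum = []
    for lo, hi in band_edges:
        band = mag[(freqs >= lo) & (freqs < hi)]
        power = np.mean(band ** 2) if band.size else 0.0
        spectrum.append(10.0 * np.log10(power + 1e-12))  # dB, floored
    return spectrum

# A 1 kHz tone sampled at 16 kHz: the 500-2000 Hz band should dominate.
t = np.arange(1024) / 16000.0
frame = np.sin(2 * np.pi * 1000.0 * t)
bands = [(0, 500), (500, 2000), (2000, 8000)]
spec = sound_spectrum(frame, 16000, bands)
```

The resulting list is exactly the "correspondence between audio segments and sound intensities" the text describes: one intensity per segment.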
In the development process of the earphone, an acoustic engineer wears the earphone on a standard human ear, acquires the acoustic characteristics of the earphone through a microphone positioned at the cochlea of the standard human ear, and adjusts the sound quality of the earphone so as to design an earphone with better sound quality. Thus, the closer the shape of the user's ear canal cavity is to that of a standard human ear, the better the sound quality perceived by the user.
After tuning of the earphone is completed, the earphone can be worn on a standard human ear model in an acoustic laboratory in the standard wearing manner, environmental noise of each preset scene category is simulated in the laboratory, the in-ear sound is collected through the in-ear microphone of the earphone, and the in-ear sound spectrum of the earphone under each preset scene category is determined. In essence, the in-ear sound spectrum is the ambient noise of the external environment as received by the in-ear microphone from within the ear canal.
Referring to fig. 5, fig. 5 shows a schematic diagram of an earphone worn on a standard human ear model according to an embodiment of the present invention. As shown in fig. 5, the earphone includes an earphone housing 45, a speaker 42, an in-ear microphone 43, and an out-of-ear microphone 44. When the earphone is worn on the standard human ear model 46, the in-ear microphone 43 receives, from within the ear canal cavity 47 of the standard human ear model 46, the ambient sound entering the cavity; the out-of-ear microphone 44 receives ambient sound directly from outside the ear canal cavity 47; and the speaker 42 plays the audio signal. A technician may wear the earphone on the standard human ear model in the manner shown in fig. 5 to perform the above-described measurement.
When a user wears the earphone, an incorrect or unstable wearing position may cause a poor fit between the earphone housing and the ear, so that the second sound spectrum corresponding to the sound the user hears is inconsistent with the stored in-ear sound spectrum. Likewise, if the shape of the user's ear canal cavity differs from that of a standard human ear, the second sound spectrum will also deviate from the in-ear sound spectrum. Both conditions degrade the sound quality of the earphone.
In order to solve the above problem, the sound played by the earphone can be compensated according to the difference between the second sound spectrum and the in-ear sound spectrum.
Referring to fig. 6, fig. 6 shows a schematic comparison of a second sound spectrum provided by an embodiment of the present invention. As shown in fig. 6, the thick solid line d represents the spectrum curve of the residual ambient sound collected by the in-ear microphone in the ear canal cavity when the standard human ear model wears the earphone correctly in the environment corresponding to a certain preset scene category. The dashed line e represents the second sound spectrum of the residual ambient sound collected by the in-ear microphone in the user's ear canal cavity when the user wears the earphone incorrectly in the same environment. In the solid-line box A of fig. 6, the intensity of the second sound spectrum is lower in the low-frequency part, which is generally caused by a poor fit between the earphone and the ear; in the solid-line box B, the intensity is lower in the high-frequency part, which is generally caused by the difference between the user's ear canal cavity and that of a standard human ear.
Optionally, step 206 may include:
sub-step 2061, obtaining the in-ear sound spectrum corresponding to the external scene category.
When the earphone leaves the factory, each preset scene category, each in-ear sound spectrum, and the correspondence between them have already been stored in the memory.
Therefore, after the external scene category is determined, the in-ear sound spectrum corresponding to it can be retrieved from the memory according to the correspondence between preset scene categories and in-ear sound spectra.
Sub-step 2062, comparing the second sound spectrum with the in-ear sound spectrum, and determining the sound intensity difference corresponding to each audio segment.
When the earphone is worn incorrectly and/or the user's ear canal cavity is non-standard, the sound intensity of some or all frequency bands of the sound played by the earphone changes. To compensate for these changes, so that the sound heard by the user is consistent with the sound heard by a standard human ear wearing the earphone correctly and better sound quality is achieved, the difference in sound intensity of each frequency band between the second sound spectrum and the in-ear sound spectrum can be determined, and the sound output of the earphone compensated accordingly.
Specifically, the response frequency range of the earphone may be divided into a plurality of audio segments, and the sound intensity of each audio segment in the second sound spectrum subtracted from the sound intensity of the same audio segment in the in-ear sound spectrum to determine the compensation value for each audio segment. It is easy to understand that the finer the audio segments, the better the final compensation effect, but the more computing resources are consumed; a technician can choose the number of audio segments flexibly according to actual needs.
In sub-step 2063, the second gain parameter is determined according to the sound intensity difference corresponding to each audio segment.
After the sound intensity difference corresponding to each audio segment is determined, it may be directly used as the compensation value of the corresponding audio segment, and a second gain parameter is constructed, where the second gain parameter includes the compensation value of each audio segment.
For example, when the user is in an airport environment and the frequency band a in the second sound spectrum is calculated to differ by 10 decibels from the frequency band a in the in-ear sound spectrum corresponding to the airport scene category, 10 decibels may be determined as the compensation value corresponding to the frequency band a in the second gain parameter.
Further, the compensation value may be a proportional value rather than a specific decibel value. In this case, the ratio between each frequency band in the second sound spectrum and the same frequency band in the in-ear sound spectrum may be determined from the sound intensity difference corresponding to each audio segment, and the second gain parameter constructed from these ratios.
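Sub-steps 2062 and 2063 can be sketched together as follows (a minimal sketch assuming decibel-valued spectra; the function name and example band values are illustrative, not from the embodiment):

```python
def second_gain_parameter(second_spectrum, in_ear_spectrum):
    """Per-band compensation values: the dB shortfall of the measured
    in-ear signal relative to the stored reference in-ear spectrum.

    Both inputs are lists of per-band intensities in dB; the result is
    the second gain parameter (one compensation value per band).
    """
    return [ref - measured
            for measured, ref in zip(second_spectrum, in_ear_spectrum)]

# Example: low band 10 dB below reference (poor fit) and high band
# 4 dB below reference (non-standard ear canal) -> boost by the shortfall.
measured = [50.0, 60.0, 56.0]    # measured second sound spectrum, dB
reference = [60.0, 60.0, 60.0]   # stored in-ear sound spectrum, dB
print(second_gain_parameter(measured, reference))  # [10.0, 0.0, 4.0]
```

A band where the measured spectrum already matches the reference receives a compensation value of 0 and is played unchanged.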
To let the user hear better sound quality, the first gain parameter and the second gain parameter can be added to obtain a target gain parameter, and the audio output signal compensated through the target gain parameter. This not only reduces the sound quality degradation caused by environmental noise, but also reduces the degradation caused by wearing problems, greatly improving the sound quality when the user uses the earphone.
In step 208, the audio output signal is compensated according to the target gain parameter, and the compensated audio output signal is played through a loudspeaker.
After the target gain parameter is determined, when the loudspeaker of the earphone plays sound, the sound intensity of the corresponding frequency band in the audio output signal is compensated according to the compensation amplitude corresponding to each frequency band in the target gain parameter; an analog signal for driving the loudspeaker is then generated from the compensated audio output signal and sent to the loudspeaker, so that the loudspeaker plays the sound.
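Per-band compensation of an audio frame before playback can be sketched as follows (an FFT-based sketch for illustration only; a production earphone would more likely use a fixed filter bank or biquad equalizer, and the band edges here are assumptions):

```python
import numpy as np

def apply_target_gain(frame, sample_rate, band_edges, target_gain_db):
    """Scale each frequency band of an audio frame by its target gain
    (in dB) and return the compensated time-domain frame."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    for (lo, hi), gain_db in zip(band_edges, target_gain_db):
        mask = (freqs >= lo) & (freqs < hi)
        spectrum[mask] *= 10.0 ** (gain_db / 20.0)  # dB -> amplitude
    return np.fft.irfft(spectrum, n=len(frame))

# Boost the low band of a 200 Hz test tone by 6 dB (roughly x2 amplitude).
t = np.arange(512) / 16000.0
frame = np.sin(2 * np.pi * 200.0 * t)
out = apply_target_gain(frame, 16000, [(0, 500), (500, 8000)], [6.0, 0.0])
```

The compensated frame would then be handed to the digital-to-analog stage that drives the loudspeaker diaphragm, as described above.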
To sum up, an audio signal processing method provided by the embodiment of the present invention includes obtaining a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal. According to the method and the device, the scene type of the user can be identified according to the environmental sound of the user, the first gain parameter is determined according to the scene type, the sound played by the loudspeaker is compensated through the first gain parameter, the influence of environmental noise on the sound quality can be reduced, and the listening experience of the user is improved.
Fig. 7 is a block diagram of an audio signal processing apparatus according to an embodiment of the present invention, and as shown in fig. 7, the audio signal processing apparatus includes:
an obtaining module 301, configured to obtain a first sound spectrum of an external environment;
a scene category module 302, configured to determine an external scene category corresponding to the first sound spectrum;
a parameter module 303, configured to determine a first gain parameter of an audio output signal according to the external scene category;
the compensation module 304 is configured to compensate the audio output signal according to the first gain parameter, and play the compensated audio output signal to reduce a masking effect of the environmental noise of the external environment on the audio output signal.
Optionally, the apparatus further comprises:
the first obtaining submodule is used for obtaining a first audio frequency of an external environment;
and the first frequency spectrum submodule is used for determining the first sound frequency spectrum according to the corresponding relation between the sound frequency segment in the first audio frequency and the sound intensity.
Optionally, the scene classification module includes:
the system comprises an ear-outside frequency spectrum submodule and a sound-outside frequency spectrum submodule, wherein the ear-outside frequency spectrum submodule is used for acquiring at least one preset ear-outside sound frequency spectrum; wherein, a one-to-one correspondence relationship exists between the at least one sound spectrum outside the ear and a preset scene category;
a similarity submodule for determining a spectral similarity of the at least one out-of-ear sound spectrum to the first sound spectrum;
and the scene category submodule is used for determining a preset scene category corresponding to the spectrum similarity of the ear-to-ear sound which is greater than or equal to the preset similarity as the external scene category.
Optionally, the parameter module includes:
the preset gain parameter submodule is used for inquiring the one-to-one corresponding relation between the preset scene category and the preset gain parameter;
and the first gain parameter submodule is used for determining a preset gain parameter corresponding to the external scene type as the first gain parameter.
Optionally, the first gain parameter submodule includes:
the sound effect acquisition submodule is used for acquiring a sound effect mode;
the reference gain parameter submodule is used for determining a reference gain parameter according to the sound effect mode;
and the first gain parameter calculation submodule is used for determining the first gain parameter according to the preset gain parameter corresponding to the external scene category and the reference gain parameter.
Optionally, the compensation module includes:
the second frequency spectrum acquisition submodule is used for acquiring a second sound frequency spectrum of the environment in the ear;
the second gain parameter submodule is used for determining a second gain parameter according to the second sound frequency spectrum and the in-ear sound frequency spectrum corresponding to the external scene type;
the target gain parameter submodule is used for determining a target gain parameter according to the first gain parameter and the second gain parameter;
and the compensation module is further used for compensating the audio output signal according to the target gain parameter and playing the compensated audio output signal through a loudspeaker.
Optionally, the second spectrum obtaining sub-module includes:
the second audio submodule is used for acquiring second audio of the in-ear environment through the in-ear microphone;
and the second frequency spectrum determining submodule is used for determining the second sound frequency spectrum according to the corresponding relation between the sound frequency segment in the second audio frequency and the sound intensity.
Optionally, the second gain parameter sub-module includes:
the in-ear spectrum submodule is used for acquiring an in-ear sound spectrum corresponding to the external scene type;
the intensity difference submodule is used for comparing the second sound spectrum with the in-ear sound spectrum and determining the sound intensity difference corresponding to each audio segment;
and the second gain parameter determining submodule is used for determining the second gain parameter according to the sound intensity difference corresponding to each audio segment.
In summary, an apparatus provided in an embodiment of the present invention includes obtaining a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal. According to the method and the device, the scene type of the user can be identified according to the environmental sound of the user, the first gain parameter is determined according to the scene type, the sound played by the loudspeaker is compensated through the first gain parameter, the influence of environmental noise on the sound quality can be reduced, and the listening experience of the user is improved.
An embodiment of the present application further provides an earphone, and referring to fig. 8, fig. 8 shows a hardware structure block diagram of an earphone provided in an embodiment of the present invention.
As shown in fig. 8, the headset may include a processor 40, a digital-to-analog conversion module 41, a speaker 42, an in-ear microphone 43, and an out-of-ear microphone 44. The in-ear microphone 43 picks up sound from inside the user's ear canal when the headset is worn. The out-of-ear microphone 44 picks up sound from outside the ear canal, that is, the sound of the environment in which the user is currently located. The digital-to-analog conversion module 41 receives the analog audio signals collected by the in-ear microphone 43 and the out-of-ear microphone 44, converts them into digital audio signals, and sends the digital audio signals to the processor 40. In turn, the processor 40 may send a digital audio signal to the digital-to-analog conversion module 41, which converts it into an analog audio signal and drives the diaphragm of the speaker 42 to vibrate, so that the speaker 42 emits sound.
A processor 40 for performing the following processes:
acquiring a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
An electronic device is further provided in the embodiments of the present application, and referring to fig. 9, fig. 9 shows a hardware structure schematic diagram of an electronic device provided in the present invention.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 510 for performing the following process:
acquiring a first sound spectrum of an external environment; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
In the embodiment of the invention, a first sound spectrum of an external environment is acquired; determining an external scene category corresponding to the first sound spectrum; determining a first gain parameter of an audio output signal according to the external scene category; and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal. According to the method and the device, the scene type of the user can be identified according to the environmental sound of the user, the first gain parameter is determined according to the scene type, the sound played by the loudspeaker is compensated through the first gain parameter, the influence of environmental noise on the sound quality can be reduced, and the listening experience of the user is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during messaging or a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or other storage medium), or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. The touch panel 5071 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys), a trackball, a mouse, and a joystick, and are not described in detail here.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 9, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 510, a memory 509, and a computer program stored in the memory 509 and capable of running on the processor 510, where the computer program, when executed by the processor 510, implements each process of the above-mentioned audio signal processing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned audio signal processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (11)
1. A method of audio signal processing, the method comprising:
acquiring a first sound spectrum of an external environment;
determining an external scene category corresponding to the first sound spectrum;
determining a first gain parameter of an audio output signal according to the external scene category;
and compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce the masking effect of the environmental noise of the external environment on the audio output signal.
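The compensation step of claim 1 amounts to applying a per-frequency-band gain to the output signal before playback. A minimal sketch, assuming FFT-domain processing over evenly split bands (the function name, band count, and dB convention are illustrative assumptions, not specified by the patent):

```python
import numpy as np

def compensate(audio, gain_db, n_bands=8):
    """Apply a per-band gain (in dB) to an audio frame in the frequency
    domain, then convert back to the time domain."""
    spectrum = np.fft.rfft(audio)
    # Split the FFT bins into contiguous frequency bands (assumed even split).
    bands = np.array_split(np.arange(spectrum.size), n_bands)
    for idx, g in zip(bands, gain_db):
        spectrum[idx] *= 10.0 ** (g / 20.0)  # dB -> linear amplitude factor
    return np.fft.irfft(spectrum, n=audio.size)
```

With all-zero gains the frame passes through unchanged, which gives a quick sanity check on the round trip.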
2. The method of claim 1, wherein the compensating the audio output signal according to the first gain parameter, and playing the compensated audio output signal to reduce a masking effect of the ambient noise of the external environment on the audio output signal comprises:
acquiring a second sound spectrum of an in-ear environment;
determining a second gain parameter according to the second sound spectrum and an in-ear sound spectrum corresponding to the external scene category;
determining a target gain parameter according to the first gain parameter and the second gain parameter;
and compensating the audio output signal according to the target gain parameter, and playing the compensated audio output signal.
3. The method of claim 1, wherein the obtaining the first sound spectrum of the external environment comprises:
acquiring first audio of the external environment;
and determining the first sound spectrum according to a correspondence between sound frequency segments in the first audio and sound intensities.
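Claim 3's mapping from raw audio to a sound spectrum (a sound intensity per frequency segment) could be sketched as follows; the FFT-band split, frame length, and dB floor are assumptions for illustration, not details from the patent:

```python
import numpy as np

def sound_spectrum(audio, n_bands=8):
    """Map an audio frame to a per-band sound-intensity spectrum (dB):
    one intensity value per contiguous frequency segment."""
    mag = np.abs(np.fft.rfft(audio))          # magnitude spectrum
    bands = np.array_split(mag, n_bands)      # contiguous frequency segments
    energy = np.array([np.mean(b ** 2) for b in bands])
    return 10.0 * np.log10(energy + 1e-12)    # small floor avoids log(0)
```

For example, a 50 ms frame at 16 kHz (800 samples) yields one 8-element spectrum.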
4. The method of claim 1, wherein the determining an external scene category corresponding to the first sound spectrum comprises:
acquiring at least one preset out-of-ear sound spectrum, wherein the at least one out-of-ear sound spectrum is in one-to-one correspondence with preset scene categories;
determining a spectral similarity between each of the at least one out-of-ear sound spectrum and the first sound spectrum;
and determining, as the external scene category, the preset scene category corresponding to an out-of-ear sound spectrum whose spectral similarity is greater than or equal to a preset similarity.
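Claim 4's matching of the measured spectrum against stored out-of-ear spectra can use any spectral-similarity measure; cosine similarity and the 0.9 threshold in this sketch are illustrative assumptions:

```python
import numpy as np

def classify_scene(first_spectrum, presets, threshold=0.9):
    """Pick the preset scene category whose stored out-of-ear spectrum is
    most similar to the measured spectrum, provided that similarity reaches
    the preset threshold; otherwise report no match (None)."""
    best_category, best_sim = None, -1.0
    for category, spectrum in presets.items():
        a, b = np.asarray(first_spectrum), np.asarray(spectrum)
        sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        if sim > best_sim:
            best_category, best_sim = category, sim
    return best_category if best_sim >= threshold else None
```

Usage: `classify_scene(measured, {"subway": subway_spectrum, "street": street_spectrum})` returns the matched category or None when no preset is similar enough.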
5. The method of claim 4, wherein the determining a first gain parameter of an audio output signal according to the external scene category comprises:
querying a one-to-one correspondence between preset scene categories and preset gain parameters;
and determining the preset gain parameter corresponding to the external scene category as the first gain parameter.
6. The method according to claim 5, wherein the determining the preset gain parameter corresponding to the external scene category as the first gain parameter comprises:
acquiring a sound effect mode;
determining a reference gain parameter according to the sound effect mode;
and determining the first gain parameter according to the preset gain parameter corresponding to the external scene category and the reference gain parameter.
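In claim 6, the scene's preset gain and the sound-effect mode's reference gain jointly determine the first gain parameter. The patent does not state the combination rule, so per-band dB addition is assumed in this sketch (all names hypothetical):

```python
def first_gain(preset_gain_db, reference_gain_db):
    """Combine the scene's preset gain with the sound-effect-mode reference
    gain, band by band; summing dB gains is one plausible rule."""
    return [p + r for p, r in zip(preset_gain_db, reference_gain_db)]
```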
7. The method of claim 2, wherein the acquiring a second sound spectrum of the in-ear environment comprises:
acquiring second audio of the in-ear environment;
and determining the second sound spectrum according to a correspondence between sound frequency segments in the second audio and sound intensities.
8. The method of claim 2, wherein the determining a second gain parameter according to the second sound spectrum and the in-ear sound spectrum corresponding to the external scene category comprises:
acquiring the in-ear sound spectrum corresponding to the external scene category;
comparing the second sound spectrum with the in-ear sound spectrum to determine a sound intensity difference for each sound frequency segment;
and determining the second gain parameter according to the sound intensity difference for each sound frequency segment.
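Claims 8 and 2 together derive a second gain from in-ear measurements and fold it into a target gain. A sketch under the assumption that all gains are per-band dB values and that the combination in claim 2 is additive (function names hypothetical):

```python
def second_gain(measured_in_ear_db, reference_in_ear_db):
    """Per-band intensity shortfall of the measured in-ear spectrum relative
    to the stored in-ear spectrum for the current scene category."""
    return [ref - meas
            for meas, ref in zip(measured_in_ear_db, reference_in_ear_db)]

def target_gain(first_gain_db, second_gain_db):
    # Claim 2 combines both gains into a target gain; per-band addition
    # is an assumed combination rule.
    return [f + s for f, s in zip(first_gain_db, second_gain_db)]
```

A band measured 5 dB below its stored reference thus receives 5 dB of extra compensation on top of the scene-based first gain.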
9. An audio signal processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a first sound spectrum of an external environment;
a scene category module, configured to determine an external scene category corresponding to the first sound spectrum;
a parameter module, configured to determine a first gain parameter of an audio output signal according to the external scene category;
and a compensation module, configured to compensate the audio output signal according to the first gain parameter and play the compensated audio output signal through a loudspeaker, so as to reduce the masking effect of environmental noise of the external environment on the audio output signal.
10. An electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the audio signal processing method according to any one of claims 1 to 8 when executing the program.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the audio signal processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111487952.7A CN114125639B (en) | 2021-12-06 | 2021-12-06 | Audio signal processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114125639A true CN114125639A (en) | 2022-03-01 |
CN114125639B CN114125639B (en) | 2024-08-16 |
Family
ID=80367679
Cited By (5)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
WO2023216119A1 (en) * | 2022-05-10 | 2023-11-16 | 北京小米移动软件有限公司 | Audio signal encoding method and apparatus, electronic device and storage medium |
CN116367063A (en) * | 2023-04-23 | 2023-06-30 | 郑州大学 | Bone conduction hearing aid equipment and system based on embedded |
CN116367063B (en) * | 2023-04-23 | 2023-11-14 | 郑州大学 | Bone conduction hearing aid equipment and system based on embedded |
CN117194885A (en) * | 2023-09-06 | 2023-12-08 | 东莞市同和光电科技有限公司 | Optical interference suppression method for infrared receiving chip and infrared receiving chip |
CN117194885B (en) * | 2023-09-06 | 2024-05-14 | 东莞市同和光电科技有限公司 | Optical interference suppression method for infrared receiving chip and infrared receiving chip |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140126735A1 (en) * | 2012-11-02 | 2014-05-08 | Daniel M. Gauger, Jr. | Reducing Occlusion Effect in ANR Headphones |
US20160295325A1 (en) * | 2015-03-31 | 2016-10-06 | Sony Corporation | Method and device |
CN106131751A (en) * | 2016-08-31 | 2016-11-16 | 深圳市麦吉通科技有限公司 | Audio-frequency processing method and audio output device |
CN107533839A (en) * | 2015-12-17 | 2018-01-02 | 华为技术有限公司 | A kind of processing method and equipment to surrounding environment sound |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||