CN112992167A - Audio signal processing method and device and electronic equipment - Google Patents
Audio signal processing method and device and electronic equipment
- Publication number: CN112992167A
- Application number: CN202110171783.XA
- Authority: CN (China)
- Prior art keywords: audio signal, path, target, processed, low
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
Abstract
The embodiment of the disclosure discloses a method and a device for processing an audio signal and an electronic device, wherein the method for processing the audio signal comprises the following steps: acquiring an audio signal to be processed; dividing the audio signal to be processed into two paths to obtain a first path of audio signal and a second path of audio signal; acquiring a low-frequency processing mode, wherein the low-frequency processing mode comprises a target low-frequency type and a target processing mode; determining whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type, and if so, processing the second path of audio signal according to the target processing mode to obtain a processed second path of audio signal; and superimposing the first path of audio signal and the processed second path of audio signal to obtain a target audio signal.
Description
Technical Field
The present disclosure relates to the field of signal processing technologies, and in particular, to a method and an apparatus for processing an audio signal, and an electronic device.
Background
Demands on the performance of smart audio products keep rising: on the one hand, such products are expected to be compact and portable; on the other hand, they are expected to deliver good sound quality.
The smaller a smart audio product is, the harder it is to make its bass resonance frequency sufficiently low, so a good low-frequency sound effect cannot be achieved. In the related art, an equalizer is usually used to boost the output power of the low-frequency signal, but the low-frequency boost achievable in this way is limited, it causes severe distortion of the audio signal, and a better low-frequency sound effect still cannot be obtained. In addition, this approach can also damage the speaker.
Therefore, it is necessary to provide a new audio signal processing method to obtain better low-frequency sound effect.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a technical solution for processing an audio signal to obtain a better low-frequency sound effect.
According to a first aspect of embodiments of the present disclosure, there is provided a method for processing an audio signal, the method including:
acquiring an audio signal to be processed;
dividing the audio signal to be processed into two paths to obtain a first path of audio signal and a second path of audio signal;
acquiring a low-frequency processing mode, wherein the low-frequency processing mode comprises a target low-frequency type and a target processing mode;
determining whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type;
if yes, processing the second path of audio signal according to the target processing mode to obtain a processed second path of audio signal;
and overlapping the first path of audio signal and the processed second path of audio signal to obtain a target audio signal.
Optionally, the determining whether the second path of audio signal includes an audio signal corresponding to the target low-frequency type includes:
extracting characteristic information of the second path of audio signal;
and determining whether the second path of audio signals contains audio signals corresponding to the target low-frequency type or not according to the characteristic information.
Optionally, the determining, according to the feature information, whether the second path of audio signal includes an audio signal corresponding to the target low-frequency type includes:
identifying the feature information based on a preset identification function and a feature library corresponding to the target low-frequency type to obtain a probability value that the second path of audio signal contains the audio signal corresponding to the target low-frequency type;
determining that the second path of audio signals contains audio signals corresponding to the target low-frequency type under the condition that the probability value is larger than a set threshold value;
and determining that the audio signal corresponding to the target low-frequency type is not contained in the second path of audio signal under the condition that the probability value is smaller than or equal to a set threshold value.
Optionally, the determining, according to the feature information, whether the second path of audio signal includes an audio signal corresponding to the target low-frequency type includes:
and identifying the characteristic information based on a preset identification model, and determining whether the second path of audio signals contains audio signals corresponding to the target low-frequency type.
Optionally, the feature information includes mel-frequency cepstral coefficient features and auxiliary features.
Optionally, before extracting the feature information of the second path of audio signal, the method further includes:
and carrying out filtering processing on the second path of audio signals.
Optionally, the processing the second path of audio signal according to the target processing manner to obtain a processed second path of audio signal includes:
and performing enhancement or attenuation processing on the audio signal corresponding to the target low-frequency type in the second path of audio signal.
Optionally, the superimposing the first path of audio signal and the processed second path of audio signal to obtain a target audio signal includes:
selecting a reference signal from the processed second path of audio signal;
determining the time difference between the first path of audio signal and the processed second path of audio signal according to the position of the reference signal in the processed second path of audio signal and the position of the reference signal in the first path of audio signal;
according to the time difference, carrying out delay processing on the first path of audio signal to obtain a delayed first path of audio signal;
and superposing the delayed first path of audio signal and the processed second path of audio signal to obtain the target audio signal.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for processing an audio signal, the apparatus comprising:
the acquisition module is used for acquiring an audio signal to be processed;
the audio signal splitting module is used for splitting the audio signal to be processed into two paths to obtain a first path of audio signal and a second path of audio signal;
the low-frequency processing mode input module is used for acquiring a low-frequency processing mode, wherein the low-frequency processing mode comprises a target low-frequency type and a target processing mode;
a determining module, configured to determine whether the second path of audio signal includes an audio signal corresponding to the target low-frequency type;
the processing module is used for processing the second path of audio signal according to the target processing mode to obtain the processed second path of audio signal;
and the superposition module is used for carrying out superposition processing on the first path of audio signal and the processed second path of audio signal to obtain a target audio signal.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, comprising a processor and a memory, the memory storing computer instructions, which when executed by the processor, perform the method provided by the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the method provided by the first aspect of the embodiments of the present disclosure.
According to the embodiment of the disclosure, the audio signal to be processed is divided into the first path of audio signal and the second path of audio signal, and the audio signal corresponding to the low-frequency processing mode in the second path of audio signal is processed according to the obtained low-frequency processing mode, so that sound distortion caused by the integral pulling-up of the audio signal can be avoided. Furthermore, the first path of audio signal and the processed second path of audio signal are superposed, so that a better low-frequency sound effect can be obtained while the sound effect of a high-frequency part in the original audio signal is ensured. In addition, the method and the device can provide various low-frequency processing modes for the user, so that the audio signals are processed according to the low-frequency processing mode selected by the user, the requirements of different users on different low-frequency sound effects can be met, and the user experience is better.
Other features of, and advantages with, the disclosed embodiments will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope. For a person skilled in the art, it is possible to derive other relevant figures from these figures without inventive effort.
FIG. 1 is a diagram of a hardware configuration of an electronic device that may be used to implement an embodiment;
FIG. 2 is a flow diagram of a method of processing an audio signal according to one embodiment;
FIG. 3 is a schematic diagram of an interface for selecting a low frequency processing mode, according to one embodiment;
fig. 4 is a flow diagram of a method of processing an audio signal according to an example;
fig. 5 is a block diagram of the structure of an audio signal processing apparatus according to an embodiment;
FIG. 6 is a block diagram of the architecture of an electronic device according to one embodiment.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the embodiments of the present disclosure unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Fig. 1 is a hardware configuration diagram of an electronic device that can be used to implement the audio signal processing method of one embodiment.
In one embodiment, the electronic device 1000 may be as shown in fig. 1, including a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a microphone 1700, and a speaker 1800. The processor 1100 may include, but is not limited to, a central processing unit CPU, a microprocessor MCU, or the like. The memory 1200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 1300 includes, for example, various bus interfaces such as a serial bus interface (including a USB interface), a parallel bus interface, and the like. Communication device 1400 is capable of wired or wireless communication, for example. The display device 1500 is, for example, a liquid crystal display, an LED display, a touch display, or the like. The input device 1600 includes, for example, a touch screen, a keyboard, and the like. The microphone 1700 may be used for inputting voice information. The speaker 1800 may be used to output voice information.
In one embodiment, the electronic device 1000 may be an electronic device with communication functions and service processing capabilities. The electronic device 1000 may be, for example, a mobile terminal, a laptop, a tablet computer, a palmtop computer, a wearable device, an earphone, a smart speaker, or the like.
In this embodiment, the memory 1200 of the electronic device 1000 is configured to store instructions for controlling the processor 1100 to operate to implement or support the implementation of the audio signal processing method according to any of the embodiments. The skilled person can design the instructions according to the solution disclosed in the present specification. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
Those skilled in the art will appreciate that although a plurality of devices of the electronic apparatus 1000 are shown in fig. 1, the electronic apparatus 1000 of the embodiments of the present specification may refer to only some of the devices, for example, only the processor 1100 and the memory 1200.
The electronic device 1000 shown in fig. 1 is merely illustrative and is in no way intended to limit the description, its applications, or uses.
Various embodiments and examples according to the present disclosure are described below with reference to the drawings.
< method examples >
Fig. 2 illustrates a method of processing an audio signal according to an embodiment of the present disclosure, which may be implemented by the electronic device 1000 shown in fig. 1, for example.
The audio signal processing method provided by this embodiment may include the following steps S2100 to S2500.
In step S2100, an audio signal to be processed is acquired.
Step S2200 is to divide the audio signal to be processed into two paths to obtain a first path of audio signal and a second path of audio signal.
The audio signal to be processed is an audio signal input by a user. In this step, the audio signal to be processed is divided into two paths: a first path of audio signal and a second path of audio signal. The first path of audio signal is stored as the original audio signal, the second path of audio signal is processed, and the first path of audio signal is then superimposed with the processed second path of audio signal, so that a target audio signal with a better low-frequency sound effect can be obtained.
Step S2300, a low frequency processing mode is obtained, wherein the low frequency processing mode includes a target low frequency type and a target processing mode.
The target low-frequency type is used to distinguish the kind of low-frequency sound signal. A low-frequency sound signal may be a sound signal with a frequency below 1 kHz, and its frequency range may be set as needed, for example 50 Hz to 300 Hz. The target low-frequency type may cover one kind of low-frequency sound signal or several different kinds. For example, the target low-frequency types may include at least drumbeats, explosion sounds, earthquake rumbles, and archery sounds.
The target processing mode may include an enhancement mode and an attenuation mode.
The low-frequency processing modes may be, for example, drumbeat enhancement, explosion enhancement, earthquake enhancement, drumbeat-and-earthquake enhancement, drumbeat attenuation, and so on.
In one embodiment of the present disclosure, the step of acquiring the low frequency processing mode may further include: steps S3100 to S3200.
Step S3100, providing an interactive interface.
The interactive interface may be, for example, an interface capable of connecting an input device. The input device includes, for example, a touch screen, a keyboard, a mouse, and the like, which can be used by a user to input information.
Step S3200, acquiring a low frequency processing mode input through the interactive interface.
In particular implementations, as shown in FIG. 3, a selection interface for a low frequency processing mode may be presented on the electronic device, and the low frequency processing mode may be determined in response to a user input.
In this embodiment, the user may also select the low frequency processing mode through the terminal device, and transmit the low frequency processing mode input by the user to the electronic device, so that the electronic device processes the audio signal according to the low frequency processing mode.
Step S2400, it is determined whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type.
In one embodiment of the present disclosure, the step of determining whether the second path of audio signal contains an audio signal corresponding to the target low frequency type may further include: steps S4100 to S4200.
Step S4100 extracts feature information of the second channel of audio signal.
Before extracting the feature information of the second path of audio signal, the audio signal processing method may further include: and carrying out filtering processing on the second path of audio signal.
In a specific implementation, a band-pass filter is configured for the selected target low-frequency type; drum sounds, for example, generally lie in the range of 50 to 1000 Hz, so the pass band of the band-pass filter can be set to 50 to 1000 Hz. In this embodiment, the second path of audio signal is filtered before its feature information is extracted, which limits the signal to the band of interest, reduces redundancy, simplifies the feature recognition process, and increases the recognition speed. A minimal filtering sketch is given below.
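The following is a minimal sketch of such a band-pass pre-filter, assuming a Butterworth design and the 50–1000 Hz drum band mentioned above; the function name, parameters, and the use of SciPy are illustrative assumptions rather than part of the original disclosure.

```python
import numpy as np
from scipy.signal import butter, lfilter

def bandpass_filter(signal, sample_rate, low_hz=50.0, high_hz=1000.0, order=4):
    """Band-limit the second path of audio signal to the target low-frequency band."""
    nyquist = 0.5 * sample_rate
    # Normalized cut-off frequencies for the Butterworth band-pass design
    b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="bandpass")
    return lfilter(b, a, signal)

# Example: one second of audio sampled at 48 kHz (placeholder signal)
fs = 48000
second_path = np.random.randn(fs)
filtered_second_path = bandpass_filter(second_path, fs)
```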
Audio signals of different low-frequency types have their own characteristics. The feature information may include Mel-frequency cepstral coefficient (MFCC) features and auxiliary features. Illustratively, the auxiliary features may include at least one of a spectral centroid feature, a spectral skewness feature, a spectral kurtosis feature, a spectral roll-off feature, a time-domain centroid feature, an accent feature, and an energy feature.
The following describes the extraction process of mel-frequency cepstrum coefficient features by taking drumbeat as an example.
Step 1: divide the filtered second path of audio signal into several segments, each segment being one frame, with 1024 sample points per frame.
Step 2: apply pre-emphasis to each frame, and perform a discrete Fourier transform (FFT) on each pre-emphasized frame to obtain its discrete power spectrum S(n).
Step 3: pass the power spectrum through M triangular window functions (a Mel filter bank) and compute the power within each filter to obtain M parameter powers Pm.
Step 4: take the natural logarithm of the M parameter powers Pm, apply a discrete cosine transform, and remove the DC component to obtain the Mel-frequency cepstral coefficient features.
In this example, the Mel cepstral coefficient features, together with their first-order and second-order difference signals, are used as the feature information for identifying whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type.
In this embodiment, the MFCC features and the auxiliary features of the second path of audio signal may be extracted and used jointly to determine whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type. For example, for drumbeats, the spectral centroid feature, spectral skewness feature, accent signal feature, and spectral roll-off feature may be selected as the auxiliary features. Using one or more auxiliary features together with the MFCC features for this identification improves the recognition accuracy. A feature-extraction sketch is given below.
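A minimal sketch of the feature extraction described above, assuming the librosa library is available; the 1024-point frame length follows step 1, while the per-frame spectral skewness is an illustrative stand-in for the auxiliary-feature definitions, which the original does not spell out.

```python
import numpy as np
import librosa
from scipy.stats import skew

def extract_features(y, sr, n_fft=1024, hop=512):
    """Frame-wise MFCCs, their first/second-order differences, and a few auxiliary features."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12, n_fft=n_fft, hop_length=hop)
    d1 = librosa.feature.delta(mfcc)                    # first-order difference
    d2 = librosa.feature.delta(mfcc, order=2)           # second-order difference

    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, n_fft=n_fft, hop_length=hop)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, n_fft=n_fft, hop_length=hop)
    spec = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
    spec_skew = skew(spec, axis=0)[np.newaxis, :]       # per-frame spectral skewness

    # One row per feature dimension, one column per frame
    return np.vstack([mfcc, d1, d2, centroid, rolloff, spec_skew])
```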
Step S4200, determining whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type according to the characteristic information.
In this embodiment, whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type may be determined either based on a preset identification function or based on a preset identification model. These two approaches are described below.
In a more specific example, the step of determining whether the second path of audio signal contains the audio signal corresponding to the target low-frequency type according to the characteristic information may further include: steps S5100 to S5300.
Step S5100, based on the preset identification function and the feature library corresponding to the target low-frequency type, identifies the feature information to obtain a probability value that the second channel of audio signal includes an audio signal corresponding to the target low-frequency type.
The feature library corresponding to the target low-frequency type is used for storing feature information of the target low-frequency type. In specific implementation, a feature library can be established by using the extracted feature information of the target low-frequency type. For example, for drum sound, mel cepstrum coefficient features, spectral centroid features, spectral skewness features, accent signal features and spectral roll-off features of the drum sound are collected, and a feature library of the drum sound is established based on the features.
The predetermined recognition function may be, for example, a likelihood function.
For example, the probability value a that the second path of audio signal contains an audio signal corresponding to the target low-frequency type can be obtained by the following formulas:
P(Wi|O) = [P(O|Wi) * P(Wi)] / P(O)
a = argmax{ P(Wi|O) }
where O denotes the extracted feature information, Wi denotes an entry of the feature library, P(Wi) is the prior probability of that entry, P(O|Wi) is the likelihood of the extracted feature information given that entry, P(O) is the probability of the extracted feature information, P(Wi|O) is the posterior probability obtained by comparing the extracted feature information with the feature library, and a is the probability value that the second path of audio signal contains an audio signal corresponding to the target low-frequency type.
In step S5200, when the probability value is greater than the set threshold, it is determined that the second channel of audio signals includes an audio signal corresponding to the target low-frequency type.
And step S5300, under the condition that the probability value is less than or equal to the set threshold value, determining that the audio signal corresponding to the target low-frequency type is not contained in the second path of audio signal.
The set threshold is used to screen out second-path audio signals that do not contain an audio signal corresponding to the target low-frequency type. In some embodiments, the set threshold may be preset according to experimental or simulation results.
Taking drumbeat recognition as an example, the extracted feature information is compared with the drumbeat feature library to obtain the probability value that the second path of audio signal contains a drumbeat audio signal, and this probability value is compared with the set threshold: if it is greater than the set threshold, the second path of audio signal contains drumbeats; if it is less than or equal to the set threshold, it does not. A recognition sketch is given below.
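A minimal sketch of this library-based recognition, assuming the feature library is summarized by one Gaussian for the target type and one for everything else (the background model stands in for the expansion of P(O)); the Gaussian likelihood plays the role of the preset identification function, and the prior, threshold, and parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def contains_target_type(features, target_mean, target_cov,
                         background_mean, background_cov,
                         prior_target=0.5, threshold=0.6):
    """Return (decision, probability) that the second path contains the target low-frequency type.

    features: (n_frames, n_dims) feature vectors of the second path of audio signal.
    The target/background Gaussians summarize the feature library and everything else.
    """
    # Frame-wise likelihoods P(O|W_target) and P(O|W_background)
    lik_t = multivariate_normal.pdf(features, mean=target_mean, cov=target_cov)
    lik_b = multivariate_normal.pdf(features, mean=background_mean, cov=background_cov)

    # Posterior P(W_target|O) per frame via Bayes' rule, averaged over frames
    post = (lik_t * prior_target) / (lik_t * prior_target
                                     + lik_b * (1.0 - prior_target) + 1e-12)
    probability = float(np.mean(post))
    return probability > threshold, probability
```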
In another embodiment of the present disclosure, the step of determining whether the second path of audio signal contains an audio signal corresponding to the target low frequency type according to the characteristic information may further include: step S6100.
Step S6100, recognizing the characteristic information based on the preset recognition model, and determining whether the second path of audio signal includes an audio signal corresponding to the target low frequency type.
The identification model may be, for example, a pre-trained neural network model. Identifying the feature information with a preset identification model to determine whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type can improve both the recognition accuracy and the recognition speed. A sketch of this model-based approach is given below.
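As a sketch of the model-based alternative, a small scikit-learn classifier trained offline on labelled feature vectors is assumed; the original does not specify a network architecture, and the placeholder data, model choice, and threshold are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Offline: train a small classifier on labelled feature vectors
# (label 1 = contains the target low-frequency type, 0 = does not)
rng = np.random.default_rng(0)
train_features = rng.normal(size=(200, 40))      # placeholder training features
train_labels = rng.integers(0, 2, size=200)      # placeholder labels

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
model.fit(train_features, train_labels)

# Online: average the per-frame probabilities for the second path of audio signal
second_path_features = rng.normal(size=(50, 40)) # placeholder frame-wise features
frame_probs = model.predict_proba(second_path_features)[:, 1]
contains_target = float(np.mean(frame_probs)) > 0.6
```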
After determining that the second path of audio signals contains the audio signals corresponding to the target low-frequency type, processing can be performed on the characteristic parts corresponding to the target low-frequency type in the second path of audio signals, so that sound distortion caused by integral pulling-up of the audio signals can be avoided, and a better low-frequency sound effect can be obtained.
Step S2500, if so, the second path of audio signal is processed according to the target processing mode to obtain a processed second path of audio signal.
In this embodiment, after the second path of audio signal is obtained and the low-frequency processing mode is determined, it is determined whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type, and the second path of audio signal is processed according to the target processing mode when it does.
In an embodiment of the present disclosure, the step of processing the second channel of audio signal according to the target processing manner to obtain a processed second channel of audio signal may further include: and performing enhancement or attenuation processing on the audio signal corresponding to the target low-frequency type in the second path of audio signal.
In a more specific example, the step of processing the second channel of audio signal according to the target processing manner to obtain a processed second channel of audio signal may further include: and S7100 to S7200.
And S7100, under the condition that the target processing mode is the first mode, performing enhancement processing on the audio signal corresponding to the target low-frequency type in the second path of audio signal.
The first way may be, for example, enhancement. The enhancement processing may be, for example, enhancing the low-frequency sound effect by using a harmonic enhancement method. In specific implementation, harmonic enhancement processing may be performed on all extracted feature information, or harmonic enhancement processing may be performed on part of the feature information.
For example, taking drum-sound processing with the harmonic enhancement method: after the 12-dimensional feature information of the drum sound is extracted, the first 5 frequency points, ordered from low to high frequency, are selected for harmonic enhancement. Denoting the 5 frequency points as f1, f2, f3, f4 and f5, the second path of audio signal after harmonic enhancement can be obtained by the following formula:
Sx = [a1*P(f1) + b1*P(2*f1) + c1*P(3*f1)]
   + [a2*P(f2) + b2*P(2*f2) + c2*P(3*f2)]
   + [a3*P(f3) + b3*P(2*f3) + c3*P(3*f3)]
   + [a4*P(f4) + b4*P(2*f4) + c4*P(3*f4)]
   + [a5*P(f5) + b5*P(2*f5) + c5*P(3*f5)]
where, for each frequency point fi (i = 1, ..., 5), the corresponding second- and third-harmonic enhancement frequencies are 2*fi and 3*fi, the powers of the original signal at the corresponding harmonic frequency points are P(fi), P(2*fi) and P(3*fi), and ai, bi and ci are the corresponding enhancement weights.
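A minimal frequency-domain sketch of this harmonic enhancement, assuming a single shared weight triple in place of the per-point weights ai, bi, ci, picking the five lowest bins of the 50–1000 Hz drum band as f1..f5, and operating on spectrum amplitudes rather than powers; all of these simplifications are illustrative assumptions.

```python
import numpy as np

def harmonic_enhance(frame, sample_rate, n_points=5, weights=(1.0, 0.5, 0.25)):
    """Boost the 2nd and 3rd harmonics of the lowest frequency points in one frame."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)

    # Take the n_points lowest bins inside the drum band (50-1000 Hz) as f1..f5
    band = np.where((freqs >= 50.0) & (freqs <= 1000.0))[0]
    fundamentals = band[:n_points]

    a, b, c = weights
    for k in fundamentals:
        for mult, w in ((2, b), (3, c)):
            h = k * mult
            if h < len(spectrum):
                # Add a weighted copy of the fundamental at its harmonic bin
                spectrum[h] += w * spectrum[k]
        spectrum[k] *= a                      # weight on the fundamental itself

    return np.fft.irfft(spectrum, n=len(frame))
```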
And step S7200, under the condition that the target processing mode is the second mode, performing attenuation processing on the audio signal corresponding to the target low-frequency type in the second path of audio signal.
The second mode may be, for example, attenuation. The attenuation processing may, for example, use a harmonic attenuation method. For instance, when the electronic device is used at night, the drumbeat attenuation mode may be selected to avoid disturbing people nearby.
And step S2600, performing superposition processing on the first path of audio signal and the processed second path of audio signal to obtain a target audio signal.
In this embodiment, after obtaining the processed second channel of audio signal, the first channel of audio signal and the processed second channel of audio signal are superimposed, that is, the pre-stored unprocessed original audio signal and the processed audio signal are superimposed to obtain the target audio signal.
In an embodiment of the present disclosure, the step of performing superposition processing on the first path of audio signal and the processed second path of audio signal to obtain the target audio signal may further include: and S8100-S8400.
And S8100, selecting a reference signal from the processed second path of audio signal.
And S8200, determining the time difference between the first path of audio signal and the processed second path of audio signal according to the position of the reference signal in the processed second path of audio signal and the position of the reference signal in the first path of audio signal.
And S8300, according to the time difference, delaying the first path of audio signal to obtain a delayed first path of audio signal.
And step S8400, overlapping the delayed first path of audio signal with the processed second path of audio signal to obtain a target audio signal.
Illustratively, the position of the signal corresponding to f1 in the processed second path of audio signal may be determined first, that is, the position of the lowest-frequency component in the processed second path of audio signal. The position of the signal corresponding to f1 in the first path of audio signal, i.e. in the original audio signal, is then found through correlation calculation and periodic-signal retrieval. The time difference between the first path of audio signal and the processed second path of audio signal is calculated from these two positions, the first path of audio signal is delayed accordingly, and the delayed first path of audio signal is superimposed with the processed second path of audio signal to obtain the target audio signal. A minimal alignment sketch is given below.
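A minimal sketch of this delay-and-superimpose step, assuming the time difference is estimated by cross-correlating the two paths instead of tracking the f1 reference component explicitly; the function and parameter names are illustrative.

```python
import numpy as np
from scipy.signal import correlate

def align_and_mix(first_path, processed_second_path):
    """Delay the first path by the estimated lag, then superimpose the two paths."""
    # Cross-correlate to estimate the lag of the processed path relative to the original
    corr = correlate(processed_second_path, first_path, mode="full")
    lag = int(np.argmax(corr)) - (len(first_path) - 1)

    # Delay the first path by the (non-negative) lag so the reference components line up
    delay = max(lag, 0)
    delayed_first = np.concatenate([np.zeros(delay), first_path])

    n = len(processed_second_path)
    if len(delayed_first) < n:
        delayed_first = np.pad(delayed_first, (0, n - len(delayed_first)))
    return delayed_first[:n] + processed_second_path
```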
Illustratively, corresponding frequency bands of the first path of audio signal and the processed second path of audio signal may also be extracted, Fourier-transformed, and superimposed in the frequency domain to obtain the target audio signal.
According to the embodiment of the disclosure, the audio signal to be processed is divided into the first path of audio signal and the second path of audio signal, and the audio signal corresponding to the low-frequency processing mode in the second path of audio signal is processed according to the obtained low-frequency processing mode, so that sound distortion caused by the integral pulling-up of the audio signal can be avoided. Furthermore, the first path of audio signal and the processed second path of audio signal are superposed, so that a better low-frequency sound effect can be obtained while the sound effect of a high-frequency part in the original audio signal is ensured. In addition, the method and the device can provide various low-frequency processing modes for the user, so that the audio signals are processed according to the low-frequency processing mode selected by the user, the requirements of different users on different low-frequency sound effects can be met, and the user experience is better.
In one embodiment of the present disclosure, after the audio signal to be processed and the low-frequency processing mode input by the user are acquired, the audio signal processing method further includes: outputting the audio signal to be processed and the user-selected low-frequency processing mode to a cloud server, so that the cloud server identifies whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type, processes the corresponding part of the second path of audio signal according to the target processing mode when it does, obtains the processed second path of audio signal, and superimposes the first path of audio signal with the processed second path of audio signal to obtain the target audio signal.
By having the cloud server perform the signal processing, the power consumption of the electronic device can be reduced and the processing speed can be increased.
The following describes a method for processing an audio signal by using a specific example. Referring to fig. 4, the audio signal processing method includes the following steps.
Step S401, obtaining an audio signal to be processed, and dividing the audio signal to be processed into two paths to obtain a first path of audio signal and a second path of audio signal.
And step S402, carrying out filtering processing on the second path of audio signal.
And step S403, extracting characteristic information of the second path of audio signal.
Step S404, according to the characteristic information, determining whether the second path of audio signal contains the audio signal corresponding to the selected low-frequency processing mode, if so, executing step S405, otherwise, ending the process.
Step S405, according to the selected low-frequency processing mode, processing the selected part contained in the second path of audio signal to obtain the processed second path of audio signal.
Step S406, superimpose the first path of audio signal and the processed second path of audio signal to obtain the target audio signal.
In step S407, the target audio signal is output.
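Putting the steps of fig. 4 together, a minimal end-to-end sketch is shown below; it wires up the illustrative helpers sketched earlier in this description (bandpass_filter, extract_features, contains_target_type, harmonic_enhance, align_and_mix), and the mode dictionary holding the library statistics is likewise an assumption, not something named in the original disclosure.

```python
import numpy as np

def process_audio(audio, sample_rate, mode):
    """End-to-end sketch: split, detect the target low-frequency type, process, superimpose."""
    # Step S401: split the audio signal to be processed into two paths
    first_path = audio.copy()                 # kept as the original audio signal
    second_path = audio.copy()

    # Steps S402-S403: filter the second path and extract its feature information
    filtered = bandpass_filter(second_path, sample_rate)
    features = extract_features(filtered, sample_rate).T

    # Step S404: decide whether the selected low-frequency type is present
    present, _ = contains_target_type(features, mode["mean"], mode["cov"],
                                      mode["bg_mean"], mode["bg_cov"])
    if not present:
        return audio                          # nothing to process, output unchanged

    # Step S405: enhance (or attenuate) the target low-frequency components
    processed_second = harmonic_enhance(second_path, sample_rate)

    # Steps S406-S407: superimpose the two paths and output the target audio signal
    return align_and_mix(first_path, processed_second)
```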
According to the specific example, the audio signal to be processed is divided into the first path of audio signal and the second path of audio signal, and the audio signal corresponding to the low-frequency processing mode in the second path of audio signal is processed according to the obtained low-frequency processing mode, so that sound distortion caused by pulling up the whole audio signal can be avoided. Furthermore, the first path of audio signal and the processed second path of audio signal are superposed, so that a better low-frequency sound effect can be obtained while the sound effect of a high-frequency part in the original audio signal is ensured. In addition, the method and the device can provide various low-frequency processing modes for the user, so that the audio signals are processed according to the low-frequency processing mode selected by the user, the requirements of different users on different low-frequency sound effects can be met, and the user experience is better.
< apparatus embodiment >
Referring to fig. 5, an embodiment of the present disclosure provides an apparatus 500 for processing an audio signal, where the apparatus 500 for processing an audio signal includes an obtaining module 510, an audio signal splitting module 520, a low frequency processing mode input module 530, a determining module 540, a processing module 550, and a superimposing module 560.
The obtaining module 510 is configured to obtain an audio signal to be processed.
The audio signal splitting module 520 is configured to split the audio signal to be processed into two paths to obtain a first path of audio signal and a second path of audio signal.
The low frequency processing mode input module 530 is configured to obtain a low frequency processing mode input by a user, where the low frequency processing mode includes a target low frequency type and a target processing mode.
The determining module 540 is configured to determine whether the second path of audio signal includes an audio signal corresponding to the target low-frequency type.
In one embodiment of the present disclosure, the determining module 540 may further include a feature extracting unit and a determining unit.
The feature extraction unit is used for extracting feature information of the second path of audio signal.
The determining unit is used for determining whether the second path of audio signals contains audio signals corresponding to the target low-frequency type according to the characteristic information.
In a more specific example, the determining unit is specifically configured to identify the feature information based on a preset identification function and a feature library corresponding to the target low-frequency type, and obtain a probability value that the second channel of audio signal includes an audio signal corresponding to the target low-frequency type.
The determining unit is further specifically configured to determine that the second channel of audio signals includes an audio signal corresponding to the target low-frequency type when the probability value is greater than a set threshold.
The determining unit is further specifically configured to determine that the second channel of audio signal does not include an audio signal corresponding to the target low-frequency type when the probability value is less than or equal to a set threshold.
In a more specific example, the determining unit is further specifically configured to identify the characteristic information based on a preset identification model, and determine whether the second path of audio signal includes an audio signal corresponding to the target low-frequency type.
The feature information includes Mel-frequency cepstral coefficient features and auxiliary features. The auxiliary features may include, for example, at least one of a spectral centroid feature, a spectral skewness feature, a spectral kurtosis feature, a spectral roll-off feature, a time-domain centroid feature, and an accent feature.
In one embodiment of the present disclosure, the determining module 540 may further include a filtering unit.
The filtering unit is used for filtering the second path of audio signals before extracting the characteristic information of the second path of audio signals.
The processing module 550 is configured to process the second channel of audio signal according to the target processing manner, so as to obtain a processed second channel of audio signal.
In an embodiment of the present disclosure, the processing module 550 is specifically configured to perform enhancement or attenuation processing on the audio signal of the second path corresponding to the target low-frequency type.
The superposition module 560 is configured to perform superposition processing on the first channel of audio signal and the processed second channel of audio signal to obtain a target audio signal.
In one embodiment of the present disclosure, the overlay module 560 includes a selecting unit, a time difference determining unit, a delay processing unit, and an overlay processing unit.
The selecting unit is used for selecting a reference signal from the processed second path of audio signals.
The time difference determining unit is used for determining the time difference between the first path of audio signal and the processed second path of audio signal according to the position of the reference signal in the processed second path of audio signal and the position of the reference signal in the first path of audio signal.
The delay processing unit is used for carrying out delay processing on the first path of audio signal according to the time difference to obtain a delayed first path of audio signal.
The superposition processing unit is used for carrying out superposition processing on the delayed first path of audio signal and the processed second path of audio signal to obtain the target audio signal.
According to the embodiment of the disclosure, the audio signal to be processed is divided into the first path of audio signal and the second path of audio signal, and the audio signal corresponding to the low-frequency processing mode in the second path of audio signal is processed according to the obtained low-frequency processing mode, so that sound distortion caused by the integral pulling-up of the audio signal can be avoided. Furthermore, the first path of audio signal and the processed second path of audio signal are superposed, so that a better low-frequency sound effect can be obtained while the sound effect of a high-frequency part in the original audio signal is ensured. In addition, the method and the device can provide various low-frequency processing modes for the user, so that the audio signals are processed according to the low-frequency processing mode selected by the user, the requirements of different users on different low-frequency sound effects can be met, and the user experience is better.
< apparatus embodiment >
Referring to fig. 6, an embodiment of the present disclosure further provides an electronic device 600. The electronic device 600 may be, for example, the electronic device 1000 shown in fig. 1.
The electronic device 600 includes a processor 610 and a memory 620.
The memory 620 is used to store executable computer programs.
The processor 610 is configured to execute the method of processing an audio signal according to any of the preceding method embodiments, under control of the executable computer program.
In one embodiment, the electronic device 600 may be an electronic device with communication functions and service processing capabilities. The electronic device 600 may be, for example, a mobile terminal, a laptop, a tablet computer, a palmtop computer, a wearable device, an earphone, a smart speaker, or the like.
According to the electronic device provided by the embodiment of the disclosure, the audio signal to be processed is divided into the first path of audio signal and the second path of audio signal, and the audio signal corresponding to the low-frequency processing mode in the second path of audio signal is processed according to the obtained low-frequency processing mode, so that sound distortion caused by the integral pulling-up of the audio signal can be avoided. Furthermore, the first path of audio signal and the processed second path of audio signal are superposed, so that a better low-frequency sound effect can be obtained while the sound effect of a high-frequency part in the original audio signal is ensured. In addition, the method and the device can provide various low-frequency processing modes for the user, so that the audio signals are processed according to the low-frequency processing mode selected by the user, the requirements of different users on different low-frequency sound effects can be met, and the user experience is better.
< computer-readable storage Medium >
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer instructions, which, when executed by a processor, perform the audio signal processing method provided by the disclosed embodiments.
The disclosed embodiments may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement aspects of embodiments of the disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations for embodiments of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) can be personalized with state information of the computer-readable program instructions to implement aspects of the disclosed embodiments.
Various aspects of embodiments of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Having described embodiments of the present disclosure, it should be noted that the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the embodiments of the present disclosure is defined by the appended claims.
Claims (11)
1. A method of processing an audio signal, the method comprising:
acquiring an audio signal to be processed;
dividing the audio signal to be processed into two paths to obtain a first path of audio signal and a second path of audio signal;
acquiring a low-frequency processing mode, wherein the low-frequency processing mode comprises a target low-frequency type and a target processing mode;
determining whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type;
if so, processing the second path of audio signal according to the target processing mode to obtain a processed second path of audio signal;
and superimposing the first path of audio signal and the processed second path of audio signal to obtain a target audio signal.
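By way of illustration only, the following Python sketch outlines one possible arrangement of the two-path flow of claim 1. The helpers `contains_target_low_freq` and `apply_target_processing` are hypothetical placeholders for the detection and processing steps detailed in the dependent claims.

```python
import numpy as np

def process_audio(signal, contains_target_low_freq, apply_target_processing):
    """signal: 1-D array of samples to be processed.
    contains_target_low_freq / apply_target_processing: hypothetical callables
    standing in for the detection and target-processing steps."""
    # Divide the audio signal to be processed into two paths.
    first_path = signal.copy()
    second_path = signal.copy()

    # Process the second path only if it contains the target low-frequency type.
    if contains_target_low_freq(second_path):
        second_path = apply_target_processing(second_path)

    # Superimpose the first path and the (possibly processed) second path.
    # Scaling/normalization of the sum is omitted for brevity.
    return first_path + second_path
```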
2. The method of claim 1, wherein the determining whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type comprises:
extracting feature information of the second path of audio signal;
and determining, according to the feature information, whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type.
3. The method according to claim 2, wherein the determining, according to the feature information, whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type comprises:
identifying the feature information based on a preset identification function and a feature library corresponding to the target low-frequency type, to obtain a probability value that the second path of audio signal contains an audio signal corresponding to the target low-frequency type;
determining that the second path of audio signal contains an audio signal corresponding to the target low-frequency type if the probability value is greater than a set threshold;
and determining that the second path of audio signal does not contain an audio signal corresponding to the target low-frequency type if the probability value is less than or equal to the set threshold.
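A minimal sketch of the threshold decision in claim 3, assuming a toy similarity-based identification function; the claim does not fix how the probability value is computed, so the scoring below is purely illustrative.

```python
import numpy as np

def contains_target_low_freq(features, feature_library, threshold=0.5):
    """features: 1-D feature vector extracted from the second path.
    feature_library: 2-D array of reference feature vectors for the target
    low-frequency type. Returns True if the estimated probability exceeds
    the set threshold."""
    # Toy probability: distance to the nearest library entry mapped into (0, 1].
    distances = np.linalg.norm(feature_library - features, axis=1)
    probability = float(np.exp(-distances.min()))
    # Greater than the threshold -> the target low-frequency type is present.
    return probability > threshold
```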
4. The method according to claim 2, wherein the determining, according to the feature information, whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type comprises:
identifying the feature information based on a preset identification model to determine whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type.
5. The method of claim 2, wherein the feature information comprises Mel-frequency cepstral coefficient (MFCC) features and auxiliary features.
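For example, the feature information of claim 5 could be assembled with librosa as sketched below; the particular auxiliary features (zero-crossing rate and spectral centroid) are assumptions, since the claim does not name them.

```python
import numpy as np
import librosa

def extract_features(y, sr):
    """y: audio samples of the second path; sr: sample rate in Hz."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # MFCC features
    zcr = librosa.feature.zero_crossing_rate(y)               # auxiliary feature
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # auxiliary feature
    # Frame-wise feature matrix of shape (n_features, n_frames).
    return np.vstack([mfcc, zcr, centroid])
```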
6. The method according to claim 2, wherein, before the extracting of the feature information of the second path of audio signal, the method further comprises:
filtering the second path of audio signal.
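The filtering of claim 6 could, for instance, be a low-pass step that isolates the band relevant to the target low-frequency type before feature extraction; the cutoff frequency and filter order below are assumptions.

```python
from scipy.signal import butter, filtfilt

def filter_second_path(x, sr, cutoff_hz=200.0, order=4):
    """Zero-phase Butterworth low-pass of the second path (illustrative values)."""
    b, a = butter(order, cutoff_hz / (sr / 2.0), btype="low")
    # filtfilt applies the filter forward and backward, so the path is not delayed.
    return filtfilt(b, a, x)
```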
7. The method according to claim 1, wherein the processing of the second path of audio signal according to the target processing mode to obtain a processed second path of audio signal comprises:
enhancing or attenuating the audio signal corresponding to the target low-frequency type in the second path of audio signal.
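One way to realize the enhancement or attenuation of claim 7 is to scale only the band associated with the target low-frequency type, as in the sketch below; the band edges and gain value are illustrative assumptions.

```python
from scipy.signal import butter, sosfiltfilt

def adjust_target_band(x, sr, gain=2.0, band_hz=(20.0, 150.0)):
    """gain > 1 enhances the target band, 0 <= gain < 1 attenuates it."""
    low, high = band_hz
    sos = butter(4, [low / (sr / 2.0), high / (sr / 2.0)],
                 btype="bandpass", output="sos")
    band_component = sosfiltfilt(sos, x)
    # Add (gain - 1) copies of the band component: gain = 1 leaves x unchanged,
    # gain = 0 removes the band entirely.
    return x + (gain - 1.0) * band_component
```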
8. The method of claim 1, wherein the superimposing of the first path of audio signal and the processed second path of audio signal to obtain the target audio signal comprises:
selecting a reference signal from the processed second path of audio signal;
determining the time difference between the first path of audio signal and the processed second path of audio signal according to the position of the reference signal in the processed second path of audio signal and the position of the reference signal in the first path of audio signal;
delaying the first path of audio signal according to the time difference to obtain a delayed first path of audio signal;
and superimposing the delayed first path of audio signal and the processed second path of audio signal to obtain the target audio signal.
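A minimal sketch of the alignment and superposition of claim 8, assuming the position of the reference signal in the first path is located by cross-correlation (the claim does not specify how the positions are found); `ref_start` and `ref_len` are hypothetical parameters selecting the reference segment.

```python
import numpy as np

def align_and_superimpose(first_path, processed_second_path,
                          ref_start=0, ref_len=1024):
    """first_path must be at least ref_len samples long."""
    # Reference signal: a short segment taken from the processed second path.
    reference = processed_second_path[ref_start:ref_start + ref_len]

    # Position of the reference signal in the first path (cross-correlation peak).
    corr = np.correlate(first_path, reference, mode="valid")
    pos_in_first = int(np.argmax(corr))

    # Time difference between the two paths, in samples.
    delay = pos_in_first - ref_start

    # Delay the first path by the time difference so the two paths line up.
    if delay > 0:
        delayed_first = np.concatenate([first_path[delay:], np.zeros(delay)])
    elif delay < 0:
        delayed_first = np.concatenate([np.zeros(-delay), first_path[:delay]])
    else:
        delayed_first = first_path

    # Superimpose the delayed first path and the processed second path.
    return delayed_first + processed_second_path
```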
9. An apparatus for processing an audio signal, the apparatus comprising:
the acquisition module is used for acquiring an audio signal to be processed;
the audio signal splitting module is used for splitting the audio signal to be processed into two paths to obtain a first path of audio signal and a second path of audio signal;
the low-frequency processing mode input module is used for acquiring a low-frequency processing mode, wherein the low-frequency processing mode comprises a target low-frequency type and a target processing mode;
a determining module, configured to determine whether the second path of audio signal contains an audio signal corresponding to the target low-frequency type;
the processing module is used for processing the second path of audio signal according to the target processing mode to obtain a processed second path of audio signal;
and the superposition module is used for superimposing the first path of audio signal and the processed second path of audio signal to obtain a target audio signal.
10. An electronic device comprising a processor and a memory, the memory storing computer instructions that, when executed by the processor, perform the method of any of claims 1-8.
11. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110171783.XA CN112992167A (en) | 2021-02-08 | 2021-02-08 | Audio signal processing method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112992167A true CN112992167A (en) | 2021-06-18 |
Family
ID=76347493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110171783.XA Pending CN112992167A (en) | 2021-02-08 | 2021-02-08 | Audio signal processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112992167A (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101188877A (en) * | 2006-11-22 | 2008-05-28 | 三星电子株式会社 | Method and apparatus to enhance low frequency component of audio signal by calculating fundamental frequency of audio signal |
WO2012163144A1 (en) * | 2011-10-08 | 2012-12-06 | 华为技术有限公司 | Audio signal encoding method and device |
US20130163784A1 (en) * | 2011-12-27 | 2013-06-27 | Dts Llc | Bass enhancement system |
CN105405448A (en) * | 2014-09-16 | 2016-03-16 | 科大讯飞股份有限公司 | Sound effect processing method and apparatus |
WO2016150085A1 (en) * | 2015-03-23 | 2016-09-29 | 深圳市冠旭电子有限公司 | Dynamic low-frequency enhancement method and system based on equal loudness contour |
CN105516860A (en) * | 2016-01-19 | 2016-04-20 | 青岛海信电器股份有限公司 | Virtual bass generation method and device, and terminal |
CN205610868U (en) * | 2016-03-23 | 2016-09-28 | 中名(东莞)电子有限公司 | Portable flat-panel earphone with active bass enhancement |
CN106504760A (en) * | 2016-10-26 | 2017-03-15 | 成都启英泰伦科技有限公司 | Broadband background noise and speech separation detection system and method |
CN108235184A (en) * | 2018-01-17 | 2018-06-29 | 潍坊歌尔电子有限公司 | A bass boosting circuit and audio playback device |
US20200084567A1 (en) * | 2018-03-21 | 2020-03-12 | Sonos, Inc. | Systems and Methods of Adjusting Bass Levels of Multi-Channel Audio Signals |
CN112005300A (en) * | 2018-05-11 | 2020-11-27 | 华为技术有限公司 | Voice signal processing method and mobile equipment |
CN109243485A (en) * | 2018-09-13 | 2019-01-18 | 广州酷狗计算机科技有限公司 | Method and apparatus for restoring a high-frequency signal |
EP3696814A1 (en) * | 2019-02-15 | 2020-08-19 | Shenzhen Goodix Technology Co., Ltd. | Speech enhancement method and apparatus, device and storage medium |
Non-Patent Citations (2)
Title |
---|
岳宏等 (Yue Hong et al.): "Research on the Application of Optical Wavelet Transform in Vision Systems", 《光子学报》 (Acta Photonica Sinica) *
边世勇 (Bian Shiyong): "Principle and Application of an Aural Exciter with Low-Frequency Enhancement", 《广播与电视技术》 (Radio & TV Broadcast Engineering) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114040317A (en) * | 2021-09-22 | 2022-02-11 | 北京车和家信息技术有限公司 | Sound channel compensation method and device, electronic equipment and storage medium |
CN114040317B (en) * | 2021-09-22 | 2024-04-12 | 北京车和家信息技术有限公司 | Sound channel compensation method and device for sound, electronic equipment and storage medium |
CN114067817A (en) * | 2021-11-08 | 2022-02-18 | 易兆微电子(杭州)股份有限公司 | Bass enhancement method, bass enhancement device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10679612B2 (en) | Speech recognizing method and apparatus | |
CN106658284B (en) | Addition of virtual bass in the frequency domain | |
US9536540B2 (en) | Speech signal separation and synthesis based on auditory scene analysis and speech modeling | |
WO2020034779A1 (en) | Audio processing method, storage medium and electronic device | |
CN112992167A (en) | Audio signal processing method and device and electronic equipment | |
CN110688518A (en) | Rhythm point determining method, device, equipment and storage medium | |
CN113921022B (en) | Audio signal separation method, device, storage medium and electronic equipment | |
JP2022185114A (en) | echo detection | |
CN113327586B (en) | Voice recognition method, device, electronic equipment and storage medium | |
CN106653049A (en) | Addition of virtual bass in time domain | |
CN113170260A (en) | Audio processing method and device, storage medium and electronic equipment | |
CN110853606A (en) | Sound effect configuration method and device and computer readable storage medium | |
CN108234793A (en) | Communication method and device, electronic equipment and storage medium | |
CN116472579A (en) | Machine learning for microphone style transfer | |
KR20200028852A (en) | Method and apparatus for blind signal separation, and electronic device | |
CN112466328B (en) | Breath sound detection method and device and electronic equipment | |
CN106910494B (en) | Audio identification method and device | |
US20230081543A1 (en) | Method for synthetizing speech and electronic device | |
Close et al. | Non intrusive intelligibility predictor for hearing impaired individuals using self supervised speech representations | |
Srinivas et al. | A classification-based non-local means adaptive filtering for speech enhancement and its FPGA prototype | |
US10540990B2 (en) | Processing of speech signals | |
CN114220430A (en) | Multi-sound-zone voice interaction method, device, equipment and storage medium | |
CN112634921B (en) | Voice processing method, device and storage medium | |
CN114360572A (en) | Voice denoising method and device, electronic equipment and storage medium | |
CN113470686B (en) | Voice enhancement method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210618 |