CN110830884B - Audio processing method and audio equalizer - Google Patents


Info

Publication number
CN110830884B
CN110830884B (application CN201810897147.3A)
Authority
CN
China
Prior art keywords
sampling
audio
generate
frequency
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810897147.3A
Other languages
Chinese (zh)
Other versions
CN110830884A (en
Inventor
赵盈盈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Realtek Semiconductor Corp
Original Assignee
Realtek Semiconductor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Realtek Semiconductor Corp filed Critical Realtek Semiconductor Corp
Priority to CN201810897147.3A priority Critical patent/CN110830884B/en
Publication of CN110830884A publication Critical patent/CN110830884A/en
Application granted granted Critical
Publication of CN110830884B publication Critical patent/CN110830884B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00: Circuits for transducers, loudspeakers or microphones
    • H04R 2430/00: Signal processing covered by H04R, not provided for in its groups

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Tone Control, Compression And Expansion, Limiting Amplitude (AREA)

Abstract

The invention provides an audio processing method and an audio equalizer that can be implemented in an embedded system without incurring an excessive amount of computation. More specifically, the audio processing method and audio equalizer use a Kaiser-Bessel derived (KBD) window and an overlap-and-add method to eliminate distortion introduced when the time domain audio is converted between domains, and the user can flexibly set the type and number of filters to achieve the desired audio effect. Accordingly, the audio processing method and audio equalizer provided by the embodiments of the invention can be realized in an embedded system with a small amount of computation and can generate the required filters according to the audio effect the user wants to achieve.

Description

Audio processing method and audio equalizer
Technical Field
The present invention relates to an audio processing method and an audio equalizer, and more particularly, to an audio processing method and an audio equalizer for equalizing a sound signal.
Background
Equalizers for sound signals were originally used to compensate for the deficiencies of digital-to-analog converters (DACs), power amplifiers (AMPs), or speaker units. In recent years, the multimedia industry has often used equalizers to beautify sound and make it more pleasing to the ear. A typical equalizer design employs cascaded infinite impulse response (IIR) filters.
However, in an embedded system, an equalizer built from cascaded IIR filters requires a large amount of computation, so the equalizer must be designed in another way.
Disclosure of Invention
The invention provides an audio processing method and an audio equalizer that can be implemented in an embedded system without incurring an excessive amount of computation. In addition, the invention employs a Kaiser-Bessel derived (KBD) window and an overlap-and-add (OLA) method to eliminate distortion introduced when the audio signal is converted between domains. The audio processing method and audio equalizer also give the user the flexibility to set the types and number of filters in the equalizer to achieve the desired audio effect.
The embodiment of the invention provides an audio processing method suitable for an audio equalizer. The audio processing method comprises the following steps: (A) reading a time domain audio; (B) windowing the time domain audio to generate a plurality of sampling blocks, wherein each sampling block has a plurality of sampling points (samples) and adjacent sampling blocks overlap by a predetermined proportion; (C) applying a Kaiser-Bessel derived (KBD) window to each sampling block to generate a result value corresponding to each sampling point in each sampling block; (D) converting the sampling blocks into a plurality of frequency bands of a frequency domain by a modified discrete cosine transform (MDCT), wherein each frequency band has a frequency point and the frequency point corresponds to a frequency value; (E) equalizing the frequency bands to generate a plurality of adjusted frequency bands, wherein the frequency point of each adjusted frequency band corresponds to an adjusted frequency value; (F) converting the adjusted frequency bands into a plurality of new sampling blocks in a time domain by an inverse MDCT (IMDCT), wherein each sampling point in each new sampling block corresponds to a new result value; (G) applying a KBD restoring window to each new sampling block to compensate the new result value corresponding to each sampling point in each new sampling block; and (H) overlap-adding the new sampling blocks according to the overlapping portions by an overlap-and-add (OLA) method to generate a new time domain audio.
An embodiment of the invention provides an audio equalizer, which includes a receiver and a processor. The receiver receives a sound signal and converts the sound signal into a time domain audio. The processor is coupled to the receiver and is configured to perform the following steps: (A) reading the time domain audio; (B) windowing the time domain audio to generate a plurality of sampling blocks, wherein each sampling block has a plurality of sampling points (samples) and adjacent sampling blocks overlap by a predetermined proportion; (C) applying a Kaiser-Bessel derived (KBD) window to each sampling block to generate a result value corresponding to each sampling point in each sampling block; (D) converting the sampling blocks into a plurality of frequency bands of a frequency domain by a modified discrete cosine transform (MDCT), wherein each frequency band has a frequency point and the frequency point corresponds to a frequency value; (E) equalizing the frequency bands to generate a plurality of adjusted frequency bands, wherein the frequency point of each adjusted frequency band corresponds to an adjusted frequency value; (F) converting the adjusted frequency bands into a plurality of new sampling blocks in a time domain by an inverse MDCT (IMDCT), wherein each sampling point in each new sampling block corresponds to a new result value; (G) applying a KBD restoring window to each new sampling block to compensate the new result value corresponding to each sampling point in each new sampling block; and (H) overlap-adding the new sampling blocks according to the overlapping portions by an overlap-and-add (OLA) method to generate a new time domain audio.
For a better understanding of the nature and technical content of the present invention, reference should be made to the following detailed description of the invention and to the accompanying drawings, which are provided for purposes of illustration only and are not intended to limit the scope of the invention as defined by the appended claims.
Drawings
Fig. 1 is a schematic diagram of an audio equalizer according to an embodiment of the present invention.
Fig. 2 is a flowchart of an audio processing method according to an embodiment of the invention.
FIG. 3 is a block diagram of a plurality of sampling blocks according to an embodiment of the invention.
FIG. 4 is a diagram illustrating the application of a KBD window to each sample block according to an embodiment of the present invention.
Fig. 5 is a diagram illustrating an embodiment of converting a plurality of sampling blocks into a plurality of frequency bands of a frequency domain.
Fig. 6A is a detailed flowchart of equalizing the frequency bands according to an embodiment of the present invention.
FIG. 6B is a diagram of reference waveforms according to an embodiment of the present invention.
Fig. 6C is a schematic diagram of an adjustment waveform and a synthesized waveform according to an embodiment of the invention.
Fig. 6D is a schematic diagram of an adjusted frequency band according to an embodiment of the invention.
Fig. 7 is a diagram illustrating a plurality of new sampling blocks for converting a plurality of adjusted frequency bands into a time domain according to an embodiment of the invention.
FIG. 8 is a diagram illustrating the application of a KBD restoring window to each new sampling block according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of new time-domain audio according to an embodiment of the invention.
Description of the symbols
100: audio frequency equalizer
110: receiver with a plurality of receivers
120: processor with a memory having a plurality of memory cells
Sa: sound signal
x (t): time domain audio
y (t): new time domain audio
S210, S220, S230, S240, S250, S260, S270, S280: step (ii) of
BLK1, BLK2, BLK3, BLKn: sampling block
S0: numerical value
P50: sampling point
S0': result value
FB1, FB2, FB3, FBn: Frequency bands
X(f1), X(f2), X(f3), X(fn): Frequency values
fc1, fc2, fc3, fcn: Frequency points
S610, S620, S630, S640: Steps
WAVELr: Reference waveform
W1: Adjustment waveform
W2: Adjustment waveform
W3: Adjustment waveform
Wad: Superimposed waveform
Wcom: Synthesized waveform
FB1', FB2', FB3', FBn': Adjusted frequency bands
X'(f1), X'(f2), X'(f3), X'(fn): Adjusted frequency values
BLK1', BLK2', BLK3', BLKn': New sampling blocks
S1: Value
S1': New result value
Detailed Description
Hereinafter, the present invention will be described in detail by illustrating various exemplary embodiments thereof through the accompanying drawings. The inventive concept may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Moreover, in the figures, like reference numerals may be used to indicate like elements.
The audio processing method and audio equalizer provided by the embodiments of the invention can be implemented in an embedded system without incurring an excessive amount of computation. More specifically, they use a KBD window and an OLA method to eliminate distortion introduced when the time domain audio is converted between domains, and the user can flexibly set the type and number of filters to achieve the desired audio effect. Accordingly, the audio processing method and audio equalizer can be realized in an embedded system with a small amount of computation and can generate the required filters according to the audio effect the user wants to achieve. The audio processing method and audio equalizer disclosed in the present invention are further described below.
First, please refer to fig. 1, which shows a schematic diagram of an audio equalizer according to an embodiment of the present invention. As shown in fig. 1, the audio equalizer 100 is disposed in an embedded system (not shown) and receives a sound signal Sa. The user may adjust the sound signal Sa through the audio equalizer 100 to output a desired sound effect. In this embodiment, the audio equalizer 100 may be a single component (e.g., a microprocessor) or a combination of components (e.g., a microprocessor and a sound signal receiver) in the embedded system; the invention is not limited in this respect.
The audio equalizer 100 includes a receiver 110 and a processor 120. The receiver 110 receives the sound signal Sa and converts it into a time domain audio x(t). In the present embodiment, the time domain audio x(t) is the representation of the sound signal Sa on the time axis. The conversion of the sound signal Sa into the time domain audio x(t) is well known to those skilled in the art and is not described here.
The processor 120 is coupled to the receiver 110 and configured to perform the following steps, which eliminate signal distortion of the time domain audio during conversion while giving the user the flexibility to set the filters without excessive computation. Please refer to fig. 2, which shows a flowchart of an audio processing method according to an embodiment of the invention. First, the processor 120 reads the time domain audio x(t) for further processing in the time domain (step S210). In this embodiment, the sampling frequency of the time domain audio x(t) is 48000 Hz.
Next, the processor 120 windows the time domain audio x(t) to generate a plurality of sampling blocks. Each sampling block has a plurality of sampling points (samples), and adjacent sampling blocks overlap by a predetermined proportion (step S220). As shown in fig. 3, the processor 120 windows the time domain audio x(t) to generate a plurality of sampling blocks BLK1, BLK2, BLK3 … BLKn. Each sampling block BLK1-BLKn contains multiple sampling points (not shown in the figure), and adjacent sampling blocks BLK1-BLKn overlap by 50%. In the present embodiment, each sampling block BLK1-BLKn has 1024 sampling points, and each sampling point corresponds to a value. For example, the 50th sampling point P50 of the sampling block BLK1 corresponds to the value S0.
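The windowing of step S220 amounts to slicing the signal into fixed-length, half-overlapping segments. The following is a minimal pure-Python sketch of that segmentation; the function name and the plain-list representation are illustrative assumptions, not part of the patent:

```python
def split_into_blocks(x, block_len=1024, overlap=0.5):
    """Window a time-domain signal into fixed-size blocks with the given
    fractional overlap (50% in the embodiment, i.e. hop = 512 samples)."""
    hop = int(block_len * (1 - overlap))
    return [x[s:s + block_len] for s in range(0, len(x) - block_len + 1, hop)]
```

With the embodiment's parameters (1024-point blocks, 50% overlap), consecutive blocks share 512 samples; that shared region is what allows the overlap-add of step S280 to reassemble the signal later.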
In a later step, the processor 120 converts the signal from the time domain to the frequency domain, and the signal is distorted during this conversion. Therefore, in step S230, the processor 120 applies a KBD window to each sampling block, so that the distortion can be compensated when the signal is converted from the frequency domain back to the time domain.
Therefore, after the processor 120 obtains the sampling blocks (step S220), the processor 120 applies a KBD window to each sampling block to generate a result value corresponding to each sampling point in each sampling block (step S230). Continuing the above embodiment, as shown in FIG. 4, the processor 120 sets the range of the KBD window to indices 0-1023 and multiplies the 1024 window values by the 1024 sampling points in each of the sampling blocks BLK1-BLKn, respectively, to generate a result value corresponding to each sampling point in each sampling block BLK1-BLKn. For example, the 50th sampling point P50 of the sampling block BLK1 corresponds to the value S0; the processor 120 multiplies the 50th value of the KBD window by the 50th sampling point P50 of the sampling block BLK1 to generate the result value S0' of the sampling point P50.
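A KBD window can be built from the cumulative sum of a Kaiser window. The sketch below is a pure-Python illustration (the shape parameter β = 4 is an arbitrary assumption, since the patent does not specify it); it also exhibits the Princen-Bradley property w[n]² + w[n+N/2]² = 1, which is what later makes the overlap-add reconstruction of step S280 exact:

```python
import math

def bessel_i0(x):
    """Zeroth-order modified Bessel function, power-series approximation."""
    s, t = 1.0, 1.0
    for m in range(1, 50):
        t *= (x / (2 * m)) ** 2
        s += t
    return s

def kbd_window(n_block, beta=4.0):
    """Kaiser-Bessel derived window of even length n_block."""
    m = n_block // 2
    kaiser = [bessel_i0(beta * math.sqrt(max(0.0, 1 - (2 * i / m - 1) ** 2)))
              / bessel_i0(beta) for i in range(m + 1)]
    total, cum, half = sum(kaiser), 0.0, []
    for i in range(m):
        cum += kaiser[i]
        half.append(math.sqrt(cum / total))
    return half + half[::-1]          # symmetric: w[n] == w[n_block-1-n]

def apply_window(block, window):
    """Step S230: multiply each sampling point by its window value."""
    return [s * w for s, w in zip(block, window)]
```

Because each half of the window is a square root of a normalized cumulative sum, the two halves of overlapping windows sum to exactly 1 in power, regardless of β.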
Next, the processor 120 converts the sampling blocks into a plurality of frequency bands of a frequency domain by an MDCT (step S240). Each frequency band has a frequency point, and each frequency point corresponds to a frequency value. Continuing the above embodiment, as shown in fig. 5, the processor 120 converts the sampling blocks BLK1-BLKn into a plurality of frequency bands FB1, FB2, FB3 … FBn of a frequency domain by MDCT. The frequency bands FB1, FB2, FB3 … FBn have frequency points fc1, fc2, fc3 … fcn, respectively, and the frequency points fc1, fc2, fc3 … fcn correspond to the frequency values X(f1), X(f2), X(f3) … X(fn).
Since the sampling frequency of the time domain audio x(t) is 48000 Hz, adjacent sampling blocks BLK1-BLKn overlap by 50%, and each sampling block BLK1-BLKn has 1024 sampling points, the frequency points fc1, fc2, fc3 … fcn of the frequency bands FB1-FBn are set to 0 Hz, 46.875 Hz, 93.75 Hz … 23953 Hz. The process by which the processor 120 converts the sampling blocks BLK1-BLKn in the time domain into the frequency bands FB1, FB2, FB3 … FBn in the frequency domain by MDCT is well known in the art and is not described here. In addition, generating the frequency bands FB1-FBn through the MDCT does not require complex operations, so the audio equalizer 100 can be implemented in an embedded system without causing an excessive amount of computation.
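The quoted frequency points follow directly from the numbers above: a 1024-point MDCT yields 512 coefficients spaced fs/1024 = 46.875 Hz apart, the last at 511 × 46.875 ≈ 23953 Hz. A small sketch (the helper name is an illustrative assumption):

```python
def mdct_bin_frequencies(sample_rate=48000, block_len=1024):
    """A 1024-point MDCT yields 512 coefficients spaced fs/1024 apart."""
    n_bins = block_len // 2
    spacing = sample_rate / block_len      # 46.875 Hz in this embodiment
    return [k * spacing for k in range(n_bins)]
```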
Next, the processor 120 equalizes the frequency bands to generate a plurality of adjusted frequency bands (step S250). The processor 120 can equalize the frequency bands according to the user's requirements, so that the adjusted frequency bands achieve the user's desired sound effect. More specifically, as shown in fig. 6A, when equalizing the frequency bands, the processor 120 first generates a reference waveform (step S610). For example, the processor 120 generates a sine wave as a template for adjusting the frequency bands. As shown in FIG. 6B, the processor 120 maps 0-2π of the sine waveform to a bandwidth of 0-2000 and maps the sine value range 0-1 to a gain value range 0-1, thereby forming the reference waveform WAVEL. The reference waveform may also be a sawtooth waveform or another suitable waveform; the invention is not limited thereto.
Next, the processor 120 applies at least one parameter set to the reference waveform to generate a corresponding number of adjustment waveforms (step S620). Each parameter set has a preset frequency point, a preset frequency band, and a preset gain value, and the processor 120 generates the corresponding adjustment waveforms according to the data in the parameter sets. Continuing the example, as shown in fig. 6C, there are three parameter sets: a first parameter set PF1 for low-pass filtering, a second parameter set PF2 for band-pass filtering, and a third parameter set PF3 for another band-pass filtering.
The preset frequency point Fc of the first parameter set PF1 is 50 Hz and its preset Gain is 6 dB. The preset frequency point Fc of the second parameter set PF2 is 1000 Hz, its preset frequency band BW is 1000 Hz, and its preset Gain is 6 dB. The preset frequency point Fc of the third parameter set PF3 is 3000 Hz, its preset frequency band BW is 3000 Hz, and its preset Gain is 6 dB. The processor 120 therefore generates the adjustment waveforms W1, W2, and W3 according to the first parameter set PF1, the second parameter set PF2, and the third parameter set PF3, respectively. The number of parameter sets can be chosen according to the number and effect of the adjustment waveforms the user wants to generate; the invention is not limited in this respect.
Then, the processor 120 superimposes the adjustment waveforms to generate a superimposed waveform, and limits the gain of the superimposed waveform according to a preset maximum gain value to generate a synthesized waveform (step S630). As also shown in fig. 6C, the processor 120 superimposes the three adjustment waveforms W1, W2, and W3 to generate the superimposed waveform Wad, and limits the gain of the superimposed waveform Wad according to a preset maximum gain value to generate the synthesized waveform Wcom (the solid line portion in fig. 6C). In this example, the maximum gain value is preset to 6 dB to avoid pop noise at the output of the processor 120; the processor 120 therefore limits the maximum gain of the superimposed waveform Wad to 6 dB to generate the synthesized waveform Wcom. If pop noise is not a concern, the superimposed waveform Wad may be used directly as the synthesized waveform Wcom; the invention is not limited in this respect.
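One plausible reading of steps S620-S630 is that each parameter set produces a sine-shaped gain lobe, at full strength at the preset frequency point Fc and falling to zero at the edges of the preset band BW, after which the lobes are summed (Wad) and clamped to the maximum gain (Wcom). The sketch below implements that reading for the band-type parameter sets; the lobe shape and function names are assumptions (the patent only fixes Fc, BW, and Gain), and the bandwidth-less low-pass set PF1 would need a different lobe shape not covered here:

```python
import math

def adjustment_gain_db(f, fc, bw, gain_db):
    """One sine-shaped adjustment lobe: gain_db at fc, zero at fc ± bw/2."""
    if bw <= 0 or abs(f - fc) > bw / 2:
        return 0.0
    return gain_db * math.cos(math.pi * (f - fc) / bw)

def combined_gain_db(f, parameter_sets, max_gain_db=6.0):
    """Superimpose all lobes (Wad) and clamp to the preset maximum (Wcom)."""
    total = sum(adjustment_gain_db(f, fc, bw, g) for fc, bw, g in parameter_sets)
    return min(total, max_gain_db)
```

With the second and third parameter sets of the example (1000 Hz / 1000 Hz / 6 dB and 3000 Hz / 3000 Hz / 6 dB), this reading gives 6 dB at 1000 Hz and 3 dB at 2000 Hz, and the clamp caps any overlapping region at 6 dB.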
Finally, the processor 120 applies the synthesized waveform to the plurality of frequency bands generated in step S240 to produce the adjusted frequency bands (step S640). Continuing the example, the processor 120 applies the synthesized waveform Wcom to the frequency bands FB1-FBn shown in fig. 5 to generate the adjusted frequency bands FB1', FB2', FB3' … FBn'. The adjusted frequency bands FB1', FB2', FB3' … FBn' have frequency points fc1-fcn, respectively, and the frequency points fc1-fcn correspond to the adjusted frequency values X'(f1), X'(f2), X'(f3) … X'(fn).
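Step S640 then scales each frequency value by the synthesized gain at its frequency point. A minimal sketch, with the dB-to-linear conversion X'(f) = X(f) · 10^(gain/20) assumed since the patent expresses gains in dB (the function name is illustrative):

```python
def equalize_bands(freq_values, bin_freqs, gain_db_at):
    """Apply the synthesized waveform per bin: X'(f) = X(f) * 10**(gain_dB/20)."""
    return [x * 10 ** (gain_db_at(f) / 20.0)
            for x, f in zip(freq_values, bin_freqs)]
```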
After the processor 120 equalizes the frequency bands to obtain the adjusted frequency bands (step S250), the frequency bands have been adjusted to the sound effect desired by the user. The processor 120 then converts the adjusted frequency bands into a plurality of new sampling blocks in the time domain by an inverse MDCT (IMDCT) (step S260). Each sampling point in each new sampling block corresponds to a new result value. As shown in fig. 7, the processor 120 converts the adjusted frequency bands FB1'-FBn' into a plurality of new sampling blocks BLK1', BLK2', BLK3' … BLKn' in the time domain by IMDCT. Each new sampling block BLK1'-BLKn' has a plurality of sampling points (not shown), and each sampling point corresponds to a new result value (not shown). For example, each new sampling block BLK1'-BLKn' has 1024 sampling points, and the 50th sampling point P50 of the new sampling block BLK1' corresponds to the value S1.
In step S260, the processor 120 converts the equalized frequency bands back into time domain signals; the next step compensates these signals so that they correspond to the original time domain audio x(t).
Therefore, in step S270, the processor 120 applies a KBD restoring window to each new sampling block to compensate the new result value corresponding to each sampling point in each new sampling block. As shown in fig. 8, the processor 120 sets the range of the KBD restoring window to indices 0-1023 and multiplies the 1024 window values by the 1024 sampling points in each new sampling block BLK1'-BLKn', respectively, to generate the new result value corresponding to each sampling point in each new sampling block BLK1'-BLKn'. For example, referring to figs. 7-8, the 50th sampling point P50 of the new sampling block BLK1' corresponds to the value S1; the processor 120 multiplies the 50th value of the KBD restoring window by the 50th sampling point P50 of the new sampling block BLK1' to generate the new result value S1' of the sampling point P50. In this embodiment, the KBD restoring window is the same as the KBD window of step S230, so that the processor 120 compensates the new result value corresponding to each sampling point under the same conditions. Of course, the KBD restoring window and the KBD window of step S230 may also be designed differently according to the actual situation; the invention is not limited in this respect.
At this point, the new sampling blocks BLK1'-BLKn' represent segments of the equalized time domain audio x(t), and adjacent new sampling blocks BLK1'-BLKn' overlap by the predetermined proportion (50% in this embodiment).
Therefore, the processor 120 overlap-adds the new sampling blocks according to the overlapping portions by an overlap-and-add (OLA) method to generate a new time domain audio (step S280). As shown in fig. 9, the processor 120 overlap-adds each new sampling block BLK1'-BLKn' according to the overlapping portion described in step S220 (50% in this example) to generate the new time domain audio y(t). The new time domain audio y(t) is the time domain audio produced after the processor 120 equalizes the time domain audio x(t). The processor 120 can transmit the new time domain audio y(t) to a playback element (e.g., a speaker) for output or to other electronic elements for subsequent processing.
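Putting steps S220-S280 together, the analysis-equalize-synthesis chain can be demonstrated end to end. The sketch below uses a direct O(N²) MDCT/IMDCT (fine for illustration; a real embedded implementation would use a fast transform), a small 64-point block instead of 1024, and an identity equalizer; it relies on the KBD window's Princen-Bradley property so that, wherever two blocks overlap, the time-domain aliasing cancels and the output reproduces the input exactly. All function and variable names are illustrative assumptions:

```python
import math

def bessel_i0(x):
    s, t = 1.0, 1.0
    for m in range(1, 50):
        t *= (x / (2 * m)) ** 2
        s += t
    return s

def kbd_window(n, beta=4.0):
    m = n // 2
    k = [bessel_i0(beta * math.sqrt(max(0.0, 1 - (2 * i / m - 1) ** 2)))
         / bessel_i0(beta) for i in range(m + 1)]
    total, cum, half = sum(k), 0.0, []
    for i in range(m):
        cum += k[i]
        half.append(math.sqrt(cum / total))
    return half + half[::-1]

def mdct(x):
    """N time samples -> N/2 frequency coefficients (direct form)."""
    n = len(x); m = n // 2
    return [sum(x[j] * math.cos(math.pi / m * (j + 0.5 + m / 2) * (k + 0.5))
                for j in range(n)) for k in range(m)]

def imdct(c):
    """N/2 coefficients -> N time samples; the 2/m scale yields TDAC."""
    m = len(c); n = 2 * m
    return [(2.0 / m) * sum(c[k] * math.cos(math.pi / m * (j + 0.5 + m / 2) * (k + 0.5))
                            for k in range(m)) for j in range(n)]

N, HOP = 64, 32                                       # 50% overlap
w = kbd_window(N)
x = [math.sin(0.37 * t) + 0.5 * math.sin(1.1 * t) for t in range(192)]
y = [0.0] * len(x)
for p in range((len(x) - N) // HOP + 1):
    blk = [w[j] * x[p * HOP + j] for j in range(N)]   # S220 + S230: window
    spec = mdct(blk)                                  # S240
    # S250 would adjust `spec` here; identity equalizer for this demo
    new_blk = imdct(spec)                             # S260
    for j in range(N):
        y[p * HOP + j] += w[j] * new_blk[j]           # S270 + S280 (OLA)
# time-domain aliasing cancels wherever two blocks overlap
err = max(abs(y[s] - x[s]) for s in range(HOP, len(x) - HOP))
```

Inserting a non-identity equalizer at the marked line turns the same pipeline into the filter chain of steps S610-S640; only the first and last half-blocks, which are covered by a single window, are not reconstructed exactly.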
In summary, the audio processing method and audio equalizer provided by the embodiments of the present invention can be implemented in an embedded system without incurring an excessive amount of computation. More specifically, they use a KBD window and an OLA method to eliminate distortion introduced when the time domain audio is converted between domains, and the user can flexibly set the type and number of filters to achieve the desired audio effect. Accordingly, the audio processing method and audio equalizer can be realized in an embedded system with a small amount of computation and can generate the required filters according to the audio effect the user wants to achieve.
The above description is only an embodiment of the present invention, and is not intended to limit the claims of the present invention.

Claims (10)

1. An audio processing method applied to an audio equalizer, comprising:
reading a time domain audio;
windowing the time domain audio to generate a plurality of sampling blocks, wherein each sampling block has a plurality of sampling points, and each adjacent sampling block has a predetermined proportion of overlapping parts;
applying a Kaiser-Bessel derived window to each of the sampling blocks to generate a result value corresponding to each of the sampling points in each of the sampling blocks;
converting the sampling blocks into a plurality of frequency bands of a frequency domain by a modified discrete cosine transform, wherein each frequency band has a frequency point and the frequency point corresponds to a frequency value;
equalizing the frequency bands to generate a plurality of adjusted frequency bands, wherein the frequency point of each adjusted frequency band corresponds to an adjusted frequency value;
converting the adjusted frequency bands into a plurality of new sampling blocks in a time domain by an inverse modified discrete cosine transform (IMDCT), wherein each sampling point in each new sampling block corresponds to a new result value;
applying a Kaiser-Bessel derived restoring window to each new sampling block to compensate the new result value corresponding to each sampling point in each new sampling block; and
overlap-adding each of the new sampling blocks according to the overlapping portions by an overlap-and-add method to generate a new time domain audio.
2. The audio processing method of claim 1, wherein the predetermined ratio is 50% in the step of windowing the time domain audio.
3. The audio processing method of claim 1, wherein applying the Kaiser-Bessel derived window to each of the sampling blocks further comprises:
multiplying the Kaiser-Bessel derived window by the sampling points in each of the sampling blocks to generate the result value corresponding to each of the sampling points.
4. The audio processing method of claim 1, wherein in the step of equalizing the frequency bands, further comprising:
generating a reference waveform;
applying at least one parameter set to the reference waveform to generate a corresponding number of adjustment waveforms, wherein each parameter set has one or a combination of a preset frequency point, a preset frequency band, and a preset gain value;
superposing the at least one adjusting waveform to generate a superposed waveform, and limiting the gain value of the superposed waveform according to a preset maximum gain value to generate a synthesized waveform; and
applying the synthesized waveform to the frequency band to generate the adjusted frequency band.
5. The audio processing method of claim 1, wherein the step of applying the Kaiser-Bessel derived restoring window to each new sampling block further comprises:
multiplying the Kaiser-Bessel derived restoring window by the sampling points in each new sampling block to generate the new result value corresponding to each sampling point.
6. The audio processing method of claim 1, wherein the Kaiser-Bessel derived window is the same as the Kaiser-Bessel derived restoring window.
7. An audio equalizer comprising:
a receiver for receiving a sound signal and converting the sound signal into a time domain audio; and
a processor coupled to the receiver and configured to perform the following steps:
reading the time domain audio;
windowing the time domain audio to generate a plurality of sampling blocks, wherein each sampling block has a plurality of sampling points, and each adjacent sampling block has a predetermined proportion of overlapping parts;
applying a Kaiser-Bessel derived window to each of the sampling blocks to generate a result value corresponding to each of the sampling points in each of the sampling blocks;
converting the sampling blocks into a plurality of frequency bands of a frequency domain by a modified discrete cosine transform, wherein each frequency band has a frequency point and the frequency point corresponds to a frequency value;
equalizing the frequency bands to generate a plurality of adjusted frequency bands, wherein the frequency point of each adjusted frequency band corresponds to an adjusted frequency value;
converting the adjusted frequency bands into a plurality of new sampling blocks in a time domain by an inverse modified discrete cosine transform (IMDCT), wherein each sampling point in each new sampling block corresponds to a new result value;
applying a Kaiser-Bessel derived restoring window to each new sampling block to compensate the new result value corresponding to each sampling point in each new sampling block; and
overlap-adding each of the new sampling blocks according to the overlapping portions by an overlap-and-add method to generate a new time domain audio.
8. The audio equalizer of claim 7, wherein the predetermined proportion is 50%.
9. The audio equalizer of claim 7, wherein, when applying the Kaiser-Bessel derived window to each of the sampling blocks, the processor multiplies the Kaiser-Bessel derived window by the sampling points in each of the sampling blocks to generate the result value corresponding to each of the sampling points.
10. The audio equalizer of claim 7, wherein, when equalizing the frequency bands, the processor generates a reference waveform, applies at least one parameter set to the reference waveform to generate a corresponding number of adjustment waveforms, superimposes the at least one adjustment waveform to generate a superimposed waveform, limits a gain of the superimposed waveform according to a predetermined maximum gain to generate a synthesized waveform, and applies the synthesized waveform to the frequency bands to generate the adjusted frequency bands, wherein each parameter set has one or a combination of a predetermined frequency point, a predetermined frequency band, and a predetermined gain value.
CN201810897147.3A 2018-08-08 2018-08-08 Audio processing method and audio equalizer Active CN110830884B (en)

Publications (2)

Publication Number Publication Date
CN110830884A (en) 2020-02-21
CN110830884B true CN110830884B (en) 2021-06-25

Family

ID=69540684

Country Status (1)

Country Link
CN (1) CN110830884B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103282958A * 2010-10-15 2013-09-04 Huawei Technologies Co., Ltd. Signal analyzer, signal analyzing method, signal synthesizer, signal synthesizing method, windower, transformer and inverse transformer
CN104718572A * 2012-06-04 2015-06-17 Samsung Electronics Co., Ltd. Audio encoding method and device, audio decoding method and device, and multimedia device employing the same
CN106373587A * 2016-08-31 2017-02-01 Beijing Ronglian Yitong Information Technology Co., Ltd. Automatic acoustic feedback detection and elimination method for a real-time communication system
CN107592938A * 2015-03-09 2018-01-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Decoder for decoding an encoded audio signal and encoder for encoding an audio signal


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant