CN112233686B - Voice data processing method of NVOCPLUS high-speed broadband vocoder - Google Patents

Voice data processing method of NVOCPLUS high-speed broadband vocoder

Info

Publication number
CN112233686B
CN112233686B (application CN202011047245.1A)
Authority
CN
China
Prior art keywords: voice, signal, value, parameter, voice data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011047245.1A
Other languages
Chinese (zh)
Other versions
CN112233686A (en)
Inventor
Xiao Wenxiong (肖文雄)
Zhu Zhenrong (朱振荣)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Liansheng Software Development Co ltd
Original Assignee
Tianjin Liansheng Software Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Liansheng Software Development Co ltd filed Critical Tianjin Liansheng Software Development Co ltd
Priority to CN202011047245.1A
Publication of CN112233686A
Application granted
Publication of CN112233686B
Legal status: Active

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/16 - Vocoder architecture (analysis-synthesis for redundancy reduction using predictive techniques)
    • G10L 19/0212 - Coding using spectral analysis, e.g. transform or subband vocoders, using orthogonal transformation
    • G10L 19/26 - Pre-filtering or post-filtering
    • G10L 21/0232 - Noise filtering with processing in the frequency domain
    • G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
    • G10L 25/93 - Discriminating between voiced and unvoiced parts of speech signals


Abstract

The invention relates to a voice data processing method for the NVOCPLUS high-speed wideband vocoder, comprising the following steps: step 1, the encoding end performs initialization configuration and analysis of the original digital speech signal, judges whether the current signal is speech and, if so, extracts the pitch in the speech and then calculates the pitch period and the voiced/unvoiced value parameters of each sub-band; step 2, the line spectrum pair, pitch value, gain parameter, residual compensation gain and codebook vector parameters are extracted and quantized to obtain the speech quantization parameters; and step 3, after the speech quantization parameters of step 2 are extracted, they are synthesized into speech, speech quality is improved through noise suppression, and speech reconstruction is performed when parameter recovery or speech synthesis fails. The invention provides good voice quality at low rates, including in applications where speech frequencies below 300 Hz are lost, and adapts well to dialects.

Description

Voice data processing method of NVOCPLUS high-speed broadband vocoder
Technical Field
The invention belongs to the technical field of vocoder digital voice compression, and particularly relates to a voice data processing method for the NVOCPLUS high-speed wideband vocoder.
Background
With the rapid development of communication technology, spectrum resources are precious. Compared with analog voice communication systems, digital voice communication systems offer strong interference resistance, good confidentiality and easy integration, and low-rate vocoders play an important role in them.
At present, most speech coding algorithms are built on an acoustic model of the human vocal organs, which consist of the glottis, the vocal tract and other auxiliary organs. In the actual speech production process, the vibration generated at the glottis is modulated by the vocal tract filter and radiated through the mouth and nose, which can be expressed by the following formula:
s(n) = h(n) * e(n)
where s(n) is the speech signal, h(n) is the unit impulse response of the vocal tract filter, and e(n) is the glottal excitation signal.
To represent speech signals compactly, the glottis and the vocal tract can each be described by their spectral characteristics; efficiently quantizing these characteristic parameters is the goal of parametric coding algorithms.
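By way of illustration, a minimal Python sketch of this source-filter model follows; the pitch period and the vocal tract coefficients are invented for the example and are not taken from the patent:

    # Source-filter model s(n) = h(n) * e(n): impulse-train excitation
    # passed through an all-pole vocal tract filter (toy coefficients).
    import numpy as np
    from scipy.signal import lfilter

    fs = 8000                 # sampling rate used by the vocoder (Hz)
    pitch = 80                # assumed pitch period in samples (~100 Hz)
    e = np.zeros(fs // 5)     # 200 ms of excitation e(n)
    e[::pitch] = 1.0          # glottal pulses for voiced speech

    a = [1.0, -1.3, 0.8]      # A(z) = 1 - 1.3 z^-1 + 0.8 z^-2 (illustrative)
    s = lfilter([1.0], a, e)  # s(n) = h(n) * e(n), with H(z) = 1/A(z)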
Vocoders belong to the class of parametric coders: they compress the digital representation of a speech signal so that speech as close as possible to the original can be recovered from fewer bits. With the rapid increase in the efficiency of digital signal processing hardware, research on vocoders has accelerated and they have been deployed in large numbers.
The NVOCPLUS wideband vocoder differs from the existing NVOC narrowband vocoder. The narrowband vocoder supports two rates, 2.4 kbps and 2.2 kbps (used for encryption), with a channel FEC rate of 1.2 kbps; both the voice codec and the FEC encode and decode 20-millisecond frames sampled at 8 kHz. The NVOCPLUS wideband vocoder achieves a high rate of 12.2 kbps: each compressed frame carries 200+ bits instead of the narrowband 40+ bits, so the encoded data carries more information that helps restore the sound.
In the field of existing wideband vocoders, because the compression ratio of speech coding is not high, the following problems remain even when good sound quality and accuracy are obtained: (1) pitch parameters are extracted using only time-domain correlation, so the calculation is error-prone; (2) because the sound is not denoised, the extracted sound parameters are inaccurate in the presence of noise; and (3) compatibility with low-rate narrowband vocoders is neglected.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a voice data processing method for the NVOCPLUS high-speed wideband vocoder which is reasonably designed, provides high voice quality and adapts well to dialects.
The invention solves the practical problem by adopting the following technical scheme:
A voice data processing method of the NVOCPLUS high-speed wideband vocoder comprises the following steps:
step 1, the encoding end performs initialization configuration and analysis of the original digital speech signal: it first performs noise suppression on the signal, then judges whether the current signal is speech; if so, it extracts the pitch in the speech and then calculates the pitch period and the voiced/unvoiced value parameters of each sub-band;
step 2, on the basis of the pitch period and voiced/unvoiced value parameters calculated in step 1, the line spectrum pair, pitch value, gain parameter, residual compensation gain and codebook vector parameters are extracted and quantized to obtain the speech quantization parameters;
and step 3, after the speech quantization parameters of step 2 are extracted, they are synthesized into speech, speech quality is improved through noise suppression, and speech reconstruction is performed when parameter recovery or speech synthesis fails.
The step 1 specifically includes:
(1) Noise suppression is performed on the original digital speech signal S(n) to obtain the noise-suppressed voice data S_1(n) and the 0-4000 Hz spectral characteristics of the original data S(n);
(2) A VAD (voice activity detection) technique is used to judge whether the noise-suppressed signal is speech, yielding voice data S_2(n);
(3) The pitch of the voice data S_2(n) is extracted;
(4) The pitch period and the voiced/unvoiced value parameters of each sub-band are calculated.
Moreover, the specific steps in step (1) of step 1 include:
(1) a high-pass filter is used to remove the DC component from the voice data, boost the high-frequency components and attenuate the low frequencies;
(2) the signal is windowed with a Hamming window of length N, and the energy distribution over the spectrum is obtained through overlapped Fourier transforms, yielding the noise-suppressed voice data S_1(n), the noise suppression result parameters, and the 0-4000 Hz spectral characteristics of the original digital speech signal S(n).
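A minimal Python sketch of this pre-processing; the pre-emphasis coefficient 0.95, the 256-point window and the 50% overlap are assumptions, as the patent does not fix these values:

    # Steps (1)-(2) sketch: DC removal / pre-emphasis, then overlapped
    # Hamming-windowed FFTs giving the per-frame energy distribution.
    import numpy as np
    from scipy.signal import lfilter

    def spectrum_frames(s, N=256, hop=128):
        x = lfilter([1.0, -0.95], [1.0], s)          # high-pass pre-emphasis (assumed)
        w = np.hamming(N)                            # Hamming window of length N
        frames = []
        for start in range(0, len(x) - N + 1, hop):  # overlapped analysis
            frames.append(np.abs(np.fft.rfft(x[start:start + N] * w)))
        return np.array(frames)                      # 0-4000 Hz spectral characteristics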
Moreover, the specific method in step (2) of step 1 is as follows:
according to the auditory characteristics of the human ear, the noise-suppressed voice data S_1(n) is sub-band filtered and the level of each sub-band signal is calculated; the signal-to-noise ratio is estimated according to the formula below and compared with a preset threshold to judge whether the current speech signal is speech:
[SNR estimation formula shown only as an image in the original]
wherein, a is the signal level value of the current frame, and b is the current signal level value estimated from the previous frames;
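A minimal sketch of such a threshold decision; since the patent's SNR formula appears only as an image, the decibel form, the 6 dB threshold and the noise-smoothing constant below are assumptions:

    # VAD sketch: compare an SNR estimate (current level a vs. level b
    # estimated from previous frames) against a preset threshold.
    import numpy as np

    def vad(a, b, threshold_db=6.0, alpha=0.95):
        snr_db = 20.0 * np.log10(a / max(b, 1e-9))  # assumed SNR form
        is_speech = snr_db > threshold_db
        if not is_speech:                           # update b on noise frames
            b = alpha * b + (1.0 - alpha) * a
        return is_speech, b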
Moreover, the specific method in step (3) of step 1 is as follows:
the voice data S_2(n) is low-pass filtered with a low-pass filter whose cutoff frequency is B Hz; after the low-pass-filtered voice data is inverse-filtered with a second-order inverse filter, the autocorrelation function of the inverse-filtered output signal is calculated according to the following formula and the pitch is extracted:
R(τ) = Σ_{i=0}^{N-1-τ} S_w(i)·S_w(i+τ)   (short-time autocorrelation; the original renders this formula only as an image)
where N is the window length of the window function mentioned in step (1) of step 1, and S_w(i) is the output signal of the second-order inverse filter in step (3) of step 1.
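A minimal Python sketch of this first-stage pitch extraction; the cutoff B = 900 Hz and the 20-160 sample lag range are invented for the example, and the second-order inverse filter is obtained here from a second-order LPC fit:

    # Low-pass filter, second-order inverse (whitening) filter, then pick
    # the autocorrelation peak of the inverse-filtered signal S_w.
    import numpy as np
    from scipy.signal import butter, lfilter

    def pitch_lag(s2, fs=8000, B=900.0, lag_min=20, lag_max=160):
        b, a = butter(4, B / (fs / 2))               # low-pass at B Hz (assumed)
        x = lfilter(b, a, s2)
        r = [float(np.dot(x[:len(x) - k], x[k:])) for k in range(3)]
        a1, a2 = np.linalg.solve([[r[0], r[1]], [r[1], r[0]]], [r[1], r[2]])
        sw = lfilter([1.0, -a1, -a2], [1.0], x)      # second-order inverse filter
        ac = np.correlate(sw, sw, mode="full")[len(sw) - 1:]
        return lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))  # pitch period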
Moreover, the specific steps of step (4) of step 1 include:
(1) the 0-4000 Hz frequency domain is divided into 5 bands, namely [0-500] Hz, [500-1000] Hz, [1000-2000] Hz, [2000-3000] Hz and [3000-4000] Hz, and the autocorrelation function of the band-pass signal in each interval is calculated using the following formula:
R_f(τ) = ∫ f*(t)·f(t+τ) dt   (the original renders this formula only as an image)
where t is the continuous time argument, τ is the input signal delay, and f*(·) denotes complex conjugation;
(2) the average of the product of two values of the same time function at times t and t+τ is taken as a function of the delay; this function measures the similarity between the signal and its delayed version. When the delay is zero it becomes the mean-square value of the signal, which is then at its maximum; this maximum of the function is used as the voiced intensity to calculate the voiced/unvoiced value of each sub-band;
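A minimal sketch of this per-band voicing measure, using the normalized autocorrelation at the pitch lag as the voiced intensity; the band-pass filter order is an assumption:

    # Band-pass the signal into the five bands above; the normalized
    # autocorrelation at the pitch lag serves as the band's voicing value.
    import numpy as np
    from scipy.signal import butter, lfilter

    BANDS = [(1, 500), (500, 1000), (1000, 2000), (2000, 3000), (3000, 3999)]

    def band_voicing(s, pitch, fs=8000):
        strengths = []
        for lo, hi in BANDS:
            b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            x = lfilter(b, a, s)
            num = float(np.dot(x[:-pitch], x[pitch:]))
            den = float(np.sqrt(np.dot(x[:-pitch], x[:-pitch]) *
                                np.dot(x[pitch:], x[pitch:]))) + 1e-9
            strengths.append(max(0.0, num / den))    # voiced intensity in [0, 1]
        return strengths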
Further, the specific steps of step 2 include:
(1) The noise-suppressed voice data is filtered with a high-pass filter with cutoff frequency A Hz to obtain S_3(n); the signal is windowed, the autocorrelation coefficients are calculated, the line spectrum pair parameters are solved with the Levinson-Durbin recursive algorithm (see the sketch after this list), and the resulting line spectrum pair parameters are quantized with a three-stage vector quantization scheme;
(2) The pitch value calculated in step (3) of step 1 is quantized: the integer interval containing the pitch values is linearly mapped onto [0, z], and the value of z is represented with m_1 bits;
(3) The voice data S_2(n) detected in step (2) of step 1 is passed through a second-order inverse filter to obtain a prediction error signal r(n) free of formant influence, the coefficients of the second-order filter being a_1, a_2 and 1; the gain parameter is expressed as the RMS of r(n), and quantization is completed in the logarithmic domain;
(4) The maximum values obtained from the correlation functions of the band-pass signal values after the frequency-domain division of step (4) of step 1 are quantized to m_2 bits;
(5) The residual compensation gain is calculated: linear prediction coefficients are computed from the quantized LSF parameters to form a prediction error filter, and the input speech S_2(n) is filtered to obtain a residual signal of length 160 points;
(6) The prediction residual is windowed with a 160-point Hamming window, the windowed signal is zero-padded to 512 points, a 512-point complex FFT is performed, and the Fourier transform values corresponding to the first x harmonics are found with a spectral peak detection algorithm;
(7) Let P be the quantized pitch. Given that the initial position of the i-th harmonic is 512i/P, peak detection finds the maximum peak within 512/P frequency samples centered on the initial position of each harmonic, this width being truncated to an integer; the number of harmonics searched is limited to the smaller of x and P/4; the coefficients corresponding to the harmonics are then normalized, and the x-dimensional vector is quantized with an m_3 ∈ [0,48]-bit vector codebook to obtain m_3 ∈ [0,48] bits.
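As a reference for step (1) above, a minimal Python sketch of the Levinson-Durbin recursion; the prediction order 10 is an assumption, since the patent does not state the order:

    # Solve for the LPC polynomial A(z) from autocorrelation values r[0..order].
    import numpy as np

    def levinson_durbin(r, order=10):
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coeff.
            a[1:i] = a[1:i] + k * a[i - 1:0:-1]                # update a_1..a_{i-1}
            a[i] = k
            err *= (1.0 - k * k)                               # prediction error
        return a, err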
Moreover, the specific method for synthesizing speech from the voice quantization parameters in step 3 is as follows:
excitations are formed in several frequency bands, summed, and passed through a synthesis filter to obtain synthesized speech, which is then post-filtered to obtain the decoded synthesized speech data, where the z-transform transfer functions of the synthesis filter H(z) and the post-filter H_pf(z) are as follows:
H(z) = 1/A(z)
H_pf(z) = [A(z/γ)/A(z/β)]·(1 - μ·z^(-1))   (conventional adaptive post-filter form; the original renders this formula only as an image)
where A(z) = 1 - a·z^(-1), a is the filter coefficient, z in all the above equations is a complex variable with real and imaginary parts, z = e^(jw), γ = 0.56 and β = 0.75; μ is determined by the reflection coefficient, the value of μ depending on
[expression for μ shown only as an image in the original]
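A minimal sketch of this synthesis path; because the post-filter transfer function appears in the source only as an image, the conventional A(z/γ)/A(z/β) form with a first-order tilt term and μ = 0.2 used here are assumptions:

    # Excitation -> synthesis filter 1/A(z) -> adaptive post-filter.
    import numpy as np
    from scipy.signal import lfilter

    def synthesize(excitation, a_coefs, gamma=0.56, beta=0.75, mu=0.2):
        a = np.asarray(a_coefs, dtype=float)    # A(z) coefficients, a[0] = 1
        speech = lfilter([1.0], a, excitation)  # H(z) = 1/A(z)
        k = np.arange(len(a))
        post = lfilter(a * gamma ** k, a * beta ** k, speech)  # A(z/gamma)/A(z/beta)
        return lfilter([1.0, -mu], [1.0], post) # tilt compensation (assumed)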
Furthermore, the method further comprises the following step before step 3:
the decoding end is initialized and configured, including rate selection and the parameters and filter coefficients of the decoding-end algorithm.
Before the step of initializing and configuring the decoding end, the method further comprises the following step:
the linear prediction coefficients, the excitation gain parameter and the pitch period parameter of step 3 are extended to obtain the respective extended parameters;
the method comprises the following specific steps:
(1) The gain-value quantization interval obtained in step 3 is enlarged and the gain is calculated per subframe, yielding the extended excitation gain parameter (see the sketch after these steps);
(2) The LSP parameter mean is subtracted from the current-frame LSP parameters and from the quantized previous-frame LSP parameters of step 3 to obtain the mean-removed vectors (denoted by symbols that appear only as images in the original); these are used as the input of the hierarchical vector quantization, yielding the extended LSP linear prediction parameters;
(3) The quantization bits of the pitch value obtained in step 3 are enlarged and the pitch is computed per subframe once every two subframes; that is, the interval set in step (2) of step 3 is divided into the two corresponding subframe parts, the maximum value and its index i are found from the autocorrelation function of step (3) of step 2, and each is normalized (by an expression shown only as an image in the original) to obtain the extended pitch period parameter.
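A minimal sketch of the per-subframe gain idea in step (1); the quantization range and the 5-bit width are assumptions:

    # RMS gain per 40-sample (5 ms) subframe, quantized in the log domain
    # over an enlarged interval.
    import numpy as np

    def subframe_gains(residual, sub_len=40, lo_db=-10.0, hi_db=77.0, bits=5):
        r = np.asarray(residual, dtype=float)
        step = (hi_db - lo_db) / (2 ** bits - 1)  # enlarged quantization interval
        gains = []
        for i in range(0, len(r) - sub_len + 1, sub_len):
            rms = np.sqrt(np.mean(r[i:i + sub_len] ** 2)) + 1e-9
            g_db = np.clip(20.0 * np.log10(rms), lo_db, hi_db)
            gains.append(int(round((g_db - lo_db) / step)))  # quantized index
        return gains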
The advantages and beneficial effects of the invention are:
1. By analyzing the continuity of speech in the time domain and its correlation in the frequency domain, the invention provides excellent voice quality at low rates, good voice quality in applications where speech frequencies below 300 Hz are lost, and strong adaptability to dialects.
2. The invention extracts the actual parameters in two stages, which makes the extraction more accurate, improves the sound quality and saves computing resources.
3. Unlike low-rate vocoders, the invention extends the linear prediction coefficients, the excitation gain parameter and the pitch period parameter; because the coded result carries more information, the degree of sound reconstruction remains much higher than in the narrowband case even when poor channel quality introduces bit errors.
4. The invention suppresses noise through a noise suppression function, which improves the accuracy of the extracted sound parameters in the presence of noise and guarantees sound quality.
5. The invention uses codebooks trained on speech from various regions and therefore adapts well to dialects.
6. The invention is developed on the basis of standard code, is standardized and maintainable, and is easy to port to various hardware platforms.
Drawings
Fig. 1 is a working principle diagram of the present invention.
Detailed Description
The embodiments of the invention are described in further detail below with reference to the accompanying drawings:
The input of the voice data processing method of the NVOCPLUS high-speed wideband vocoder is a linear PCM digital speech signal with a sampling rate of 8000 Hz (the number of speech samples collected per second) and a resolution of 16 bits. In the time domain, analysis is performed every 20 milliseconds; in the frequency domain, the 0-4000 Hz range is divided into several bands for analysis.
A voice data processing method of the NVOCPLUS high-speed wideband vocoder, as shown in fig. 1, comprises the following steps:
step 1, the encoding end is initialized and configured, including rate selection and the parameters and filter coefficients used by the encoding-end algorithm;
step 2, the encoding end analyzes the original digital speech signal: it first performs noise suppression on the signal, then judges whether the current signal is speech; if so, it extracts the pitch in the speech and then calculates the pitch period and the voiced/unvoiced value parameters of each sub-band;
the step 2 specifically comprises the following steps:
(1) Noise suppression: noise suppression is performed on the original digital speech signal S(n) to obtain the noise-suppressed voice data S_1(n) and the 0-4000 Hz spectral characteristics of the original data S(n);
The specific steps of step (1) of step 2 are:
(1) a high-pass filter is used to remove the DC component from the voice data, boost the high-frequency components and attenuate the low frequencies;
(2) the signal is windowed with a Hamming window of length N, and the energy distribution over the spectrum is obtained through overlapped Fourier transforms, yielding the noise-suppressed voice data S_1(n), the noise suppression result parameters, and the 0-4000 Hz spectral characteristics of the original digital speech signal S(n).
(2) Voice detection: a VAD (voice activity detection) technique is used to judge whether the noise-suppressed signal is speech, yielding voice data S_2(n);
The specific method of step (2) of step 2 is as follows:
according to the auditory characteristics of the human ear, the noise-suppressed voice data S_1(n) is sub-band filtered and the level of each sub-band signal is calculated; the signal-to-noise ratio is estimated according to the formula below and compared with a preset threshold to judge whether the current speech signal is speech:
[SNR estimation formula shown only as an image in the original]
wherein, a is the signal level value of the current frame, and b is the current signal level value estimated from the previous frames;
(3) First stage of pitch estimation: the pitch of the voice data S_2(n) is extracted;
The specific method of step (3) of step 2 is as follows:
the voice data S_2(n) is low-pass filtered with a low-pass filter whose cutoff frequency is B Hz; after the low-pass-filtered voice data is inverse-filtered with a second-order inverse filter, the autocorrelation function of the inverse-filtered output signal is calculated according to the following formula and the pitch is extracted:
R(τ) = Σ_{i=0}^{N-1-τ} S_w(i)·S_w(i+τ)   (short-time autocorrelation; the original renders this formula only as an image)
where N is the window length of the window function mentioned in step (1), and S_w(i) is the output signal of the second-order inverse filter in step (3) of step 2.
In the present embodiment, in the frequency domain the speech signal has peaks whose frequencies stand in multiple relations to the pitch, from which possible pitch values or pitch ranges are preliminarily calculated. In the time domain, speech has short-term autocorrelation: if the original signal is periodic, its autocorrelation function is also periodic with the same period, and peaks occur at integer multiples of the period. An unvoiced signal is aperiodic and its autocorrelation function decays as the frame length increases, while voiced speech is periodic and its autocorrelation function has peaks at integer multiples of the pitch period. A low-pass filter with cutoff frequency B Hz is used to low-pass filter the voice data S_2(n) to remove the influence of high-frequency signals on pitch extraction; a second-order inverse filter is then used to inverse-filter the low-pass-filtered voice data to remove the influence of formants, the autocorrelation function of the inverse-filtered output is calculated, and the pitch is extracted:
R(τ) = Σ_{i=0}^{N-1-τ} S_w(i)·S_w(i+τ)
In the autocorrelation function of the frame, after the first maximum (at zero lag) is excluded, the pitch value of the frame is the sampling rate divided by the lag at which the maximum appears.
(4) First stage of the multi-sub-band voiced/unvoiced decision: the voiced/unvoiced value of each sub-band is calculated.
The specific steps of step (4) of step 2 are:
(1) the 0-4000 Hz frequency domain is divided into 5 bands, namely [0-500] Hz, [500-1000] Hz, [1000-2000] Hz, [2000-3000] Hz and [3000-4000] Hz, and the autocorrelation function of the band-pass signal in each interval is calculated using the following formula:
R_f(τ) = ∫ f*(t)·f(t+τ) dt   (the original renders this formula only as an image)
where t is the continuous time argument, τ is the input signal delay, and f*(·) denotes complex conjugation;
(2) the average of the product of two values of the same time function at times t and t+τ is taken as a function of the delay; this function measures the similarity between the signal and its delayed version. When the delay is zero it becomes the mean-square value of the signal, which is then at its maximum; this maximum of the function is taken as the voiced intensity to calculate the voiced/unvoiced value of each sub-band;
step 3, on the basis of the pitch period and voiced/unvoiced value parameters calculated in step 2, the line spectrum pair, pitch value, gain parameter, residual compensation gain and codebook vector parameters are extracted and quantized to obtain the speech quantization parameters;
the specific steps of the step 3 comprise:
(1) Filtering the voice data subjected to noise suppression by adopting a high-pass filter with cut-off frequency of A Hz to obtain S 3 (N), adding a Hamming window with the window length of N2, calculating an autocorrelation coefficient, solving a line spectrum pair parameter (namely a prediction parameter (namely an LSF parameter)) by using a Levinson-Durbin recursive algorithm, and performing parameter quantization on the obtained line spectrum pair parameter by using a three-level vector quantization scheme to obtain m 1 A bit;
(2) Quantizing the pitch value calculated in the step (3) in the step 2: linearly mapping integer intervals containing pitch values to [ 0-z ]]In the above, the number of z is m 2 Bit representation;
(3) Voice data S detected by voice in step 2 (2) 2 (n) obtaining a prediction error signal r (n) without the influence of formants through a second order inverse filter, wherein the coefficient of the second order filter is a 1 、a 2 1, the excitation gain parameter is expressed by RMS (mean value Ping Fangen of the square) of r (n), and the quantization is completed in a logarithmic domain;
(4) Quantizing the maximum value (namely the unvoiced and voiced state value) obtained by the correlation function of the band-pass signal value after the frequency domain segmentation of the step (4) in the step 2 into m 3 A bit;
(5) Calculating spectral compensation gain, calculating linear prediction coefficient by using quantized LSF parameter to form prediction error filter for input speech S 2 (n) filtering to obtain a residual signal, wherein the length of the residual signal is 160 points;
(6) Windowing the prediction residual error by using a Hamming window with the window length of 160 points, supplementing 0 to 512 points to a windowed signal, performing 512-point complex FFT on the windowed signal, and finding a Fourier transform value corresponding to the first x-order harmonic by using a spectrum peak point detection algorithm;
(7) Let P be the quantized pitch, given an initial position of the ith harmonic of 512i/P, peak detection looks for the largest peak within 512/P frequency samples centered around the initial position of each subharmonic, this width being truncated to an integer. The number of harmonics to be searched is limited to the smaller of x and P/4. The coefficients corresponding to these harmonics are then normalized, using an m for this x-dimensional vector 4 ∈[0,48]Quantizing the vector codebook of bits to obtain m 4 ∈[0,48]A bit.
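A minimal sketch of the harmonic peak search in steps (6)-(7); the number of harmonics x = 10 is an assumption:

    # Window the 160-point residual, zero-pad to 512, FFT, then take the
    # largest magnitude near each harmonic of the quantized pitch P.
    import numpy as np

    def harmonic_magnitudes(residual, P, x=10):
        w = np.asarray(residual, dtype=float)[:160] * np.hamming(160)
        spec = np.abs(np.fft.fft(w, 512))           # 512-point complex FFT
        width = int(512 / P)                        # search width, truncated
        mags = []
        for i in range(1, min(x, int(P / 4)) + 1):  # at most min(x, P/4) harmonics
            center = int(512 * i / P)               # initial harmonic position
            lo = max(0, center - width // 2)
            hi = min(256, center + width // 2 + 1)
            mags.append(spec[lo:hi].max())
        m = np.array(mags)
        return m / (np.linalg.norm(m) + 1e-9)       # normalized coefficients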
Step 4, the linear prediction coefficients, the excitation gain parameter and the pitch period parameter of step 3 are extended to obtain the respective extended parameters;
Because the high-rate vocoder has more bandwidth, more bits can be carried; to improve the accuracy of pitch detection, the resolution and reliability of pitch detection are improved and the calculation is performed per subframe.
In the present embodiment, taking 12.2 kbps as an example, a subframe here means every 40 sampling points (5 ms of data).
The specific steps of step 4 comprise:
(1) The gain-value quantization interval obtained in step 3 is enlarged and the gain is calculated per subframe, yielding the extended excitation gain parameter;
(2) The vectors obtained by subtracting the LSP parameter mean from the current-frame LSP parameters and from the quantized previous-frame LSP parameters of step 3 (denoted by symbols that appear only as images in the original) are used as the input of the hierarchical vector quantization, yielding the extended LSP linear prediction parameters;
(3) The quantization bits of the pitch value obtained in step 3 are enlarged and the pitch is computed per subframe once every two subframes; that is, the interval set in step (2) of step 3 is divided into the two corresponding subframe parts, the maximum value and its index i are found from the autocorrelation function of step (3) of step 2, and each is normalized (by an expression shown only as an image in the original) to obtain the extended pitch period parameter.
In this embodiment, the extended parameters are the result of adding extended information bits on top of the original parameters; they are not entirely new parameters derived from the encoding.
Step 5, the decoding end is initialized and configured, including rate selection (2.2 kbps or 2.4 kbps) and the parameters, filter coefficients, etc. of the decoding-end algorithm;
Step 6, after the speech quantization parameters of steps 3 and 4 are extracted, they are synthesized into speech, speech quality is enhanced through noise suppression, and speech reconstruction is performed when parameter recovery or speech synthesis fails.
The specific method of step 6 is as follows:
The result of encoding each frame of signal is a value formed by converting the line spectrum pair, gain, pitch period, voiced/unvoiced values and vector codebook equivalently into bits. Among these parameters, the noise suppression result parameter determines whether an audio segment with excessive environmental noise is replaced by silence or by comfort noise; the pitch period and the voiced/unvoiced values determine the excitation source used to synthesize the speech signal at the decoding end. According to step 2 (4) at the encoding end, the voiced/unvoiced information covers 5 frequency bands, so the excitation is formed per band, then summed and passed through the synthesis filter and post-filter to obtain the decoded synthesized speech data. If the frame is unvoiced, i.e. all voiced/unvoiced bits are 0, a random number sequence is used as the excitation source; if the frame is voiced, a periodic pulse sequence passed through an all-pass filter is selected to generate the excitation source. The amplitude of the excitation source is weighted by the gain parameter, and its length in sampling points depends on the pitch period. The z-transform transfer functions of the all-pass filter H_1(z), the synthesis filter H_2(z) and the post-filter H_pf(z) are as follows:
H_1(z): [shown only as an image in the original]
H_2(z) = 1/A(z)
H_pf(z) = [A(z/γ)/A(z/β)]·(1 - μ·z^(-1))   (conventional adaptive post-filter form; the original renders this formula only as an image)
where A(z) = 1 - a·z^(-1), a being the filter coefficient obtained from the linear prediction parameters of step 4 by a transformation P (a higher mathematical transformation); z in all the above formulas is a complex variable with real and imaginary parts, z = e^(jw), γ = 0.56 and β = 0.75; μ is determined by the reflection coefficient, the value of μ depending on
[expression for μ shown only as an image in the original]
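A minimal sketch of this excitation construction; the band-pass filter order and the noise scaling are invented for the example:

    # Voiced bands get a pitch-spaced pulse train, unvoiced bands white
    # noise; the bands are summed and scaled by the frame gain.
    import numpy as np
    from scipy.signal import butter, lfilter

    BANDS = [(1, 500), (500, 1000), (1000, 2000), (2000, 3000), (3000, 3999)]

    def make_excitation(voiced_bits, pitch, gain, n=160, fs=8000):
        exc = np.zeros(n)
        for (lo, hi), v in zip(BANDS, voiced_bits):
            if v:                                  # voiced band: pulse train
                src = np.zeros(n)
                src[::max(int(pitch), 1)] = 1.0
            else:                                  # unvoiced band: noise
                src = 0.1 * np.random.randn(n)
            b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            exc += lfilter(b, a, src)
        return gain * exc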
It should be understood that the encoding and decoding algorithms correspond to one another, as do the input parameter format of the decoding end and the output parameter format of the encoding end; the decoder decodes one frame and outputs 160 sample values, and callers must use the same rate setting as the encoder.
It should be emphasized that the embodiments described herein are illustrative rather than restrictive; the invention therefore includes, but is not limited to, the embodiments described in this detailed description, and other embodiments derived therefrom by those skilled in the art likewise fall within the scope of the invention.

Claims (8)

1. A voice data processing method of an NVOCPLUS high-speed wideband vocoder, characterized in that the method comprises the following steps:
step 1, the encoding end performs initialization configuration and analysis of the original digital speech signal: it first performs noise suppression on the signal, then judges whether the current signal is speech; if so, it extracts the pitch in the speech and then calculates the pitch period and the voiced/unvoiced value parameters of each sub-band;
step 2, on the basis of the pitch period and voiced/unvoiced value parameters calculated in step 1, the line spectrum pair, pitch value, gain parameter, residual compensation gain and codebook vector parameters are extracted and quantized to obtain the speech quantization parameters;
step 3, after the speech quantization parameters of step 2 are extracted, they are synthesized into speech, speech quality is improved through noise suppression, and speech reconstruction is performed when parameter recovery or speech synthesis fails;
the method further comprises the following step before step 3:
initializing and configuring the decoding end, including rate selection and the initialization and configuration of the parameters and filter coefficients of the decoding-end algorithm;
before the step of initializing and configuring the decoding end, the method further comprises the following step:
extending the linear prediction coefficients, the excitation gain parameter and the pitch period parameter of step 3 to obtain the respective extended parameters;
the specific steps being as follows:
(1) the gain-value quantization interval obtained in step 3 is enlarged and the gain is calculated per subframe, yielding the extended excitation gain parameter;
(2) the vectors obtained by subtracting the LSP parameter mean from the current-frame LSP parameters and from the quantized previous-frame LSP parameters of step 3 (denoted by symbols that appear only as images in the original) are used as the input of the hierarchical vector quantization, yielding the extended LSP linear prediction parameters;
(3) the quantization bits of the pitch value obtained in step 3 are enlarged and the pitch is computed per subframe once every two subframes; that is, the interval set in step (2) of step 3 is divided into the two corresponding subframe parts, the maximum value and its index i are found from the autocorrelation function of step (3) of step 2, and each is normalized (by an expression shown only as an image in the original) to obtain the extended pitch period parameters.
2. The voice data processing method of an NVOCPLUS high-speed wideband vocoder as claimed in claim 1, characterized in that step 1 specifically comprises:
(1) performing noise suppression on the original digital speech signal S(n) to obtain the noise-suppressed voice data S_1(n) and the 0-4000 Hz spectral characteristics of the original data S(n);
(2) using a VAD (voice activity detection) technique to judge whether the noise-suppressed signal is speech, yielding voice data S_2(n);
(3) extracting the pitch of the voice data S_2(n);
(4) calculating the pitch period and the voiced/unvoiced value parameters of each sub-band.
3. The voice data processing method of an NVOCPLUS high-speed wideband vocoder as claimed in claim 2, characterized in that step (1) of step 1 specifically comprises:
(1) using a high-pass filter to remove the DC component from the voice data, boost the high-frequency components and attenuate the low frequencies;
(2) windowing the signal with a Hamming window of length N and obtaining the energy distribution over the spectrum through overlapped Fourier transforms, yielding the noise-suppressed voice data S_1(n), the noise suppression result parameters, and the 0-4000 Hz spectral characteristics of the original digital speech signal S(n).
4. The voice data processing method of an NVOCPLUS high-speed wideband vocoder as claimed in claim 2, characterized in that the specific method of step (2) of step 1 is as follows:
according to the auditory characteristics of the human ear, the noise-suppressed voice data S_1(n) is sub-band filtered and the level of each sub-band signal is calculated; the signal-to-noise ratio is estimated according to the formula below and compared with a preset threshold to judge whether the current speech signal is speech:
[SNR estimation formula shown only as an image in the original]
where a is the signal level value of the current frame and b is the current signal level value estimated from the previous frames.
5. The voice data processing method of an NVOCPLUS high-speed wideband vocoder as claimed in claim 3, characterized in that the specific method of step (3) of step 1 is as follows:
the voice data S_2(n) is low-pass filtered with a low-pass filter whose cutoff frequency is B Hz; after the low-pass-filtered voice data is inverse-filtered with a second-order inverse filter, the autocorrelation function of the inverse-filtered output signal is calculated according to the following formula and the pitch is extracted:
R(τ) = Σ_{i=0}^{N-1-τ} S_w(i)·S_w(i+τ)   (short-time autocorrelation; the original renders this formula only as an image)
where N is the window length of the window function mentioned in step 1, and S_w(i) is the output signal of the second-order inverse filter in step (3) of step 1.
6. The voice data processing method of an NVOCPLUS high-speed wideband vocoder as claimed in claim 2, characterized in that step (4) of step 1 specifically comprises:
(1) dividing the 0-4000 Hz frequency domain into 5 bands, namely [0-500] Hz, [500-1000] Hz, [1000-2000] Hz, [2000-3000] Hz and [3000-4000] Hz, and calculating the autocorrelation function of the band-pass signal in each interval using the following formula:
R_f(τ) = ∫ f*(t)·f(t+τ) dt   (the original renders this formula only as an image)
where t is the continuous time argument, τ is the input signal delay, and f*(·) denotes complex conjugation;
(2) taking the average of the product of two values of the same time function at times t and t+τ as a function of the delay, which measures the similarity between the signal and its delayed version; when the delay is zero it becomes the mean-square value of the signal, which is then at its maximum, and this maximum of the function is taken as the voiced intensity to calculate the voiced/unvoiced value of each sub-band.
7. The voice data processing method of an NVOCPLUS high-speed wideband vocoder as claimed in claim 1, characterized in that the specific steps of step 2 comprise:
(1) filtering the noise-suppressed voice data with a high-pass filter with cutoff frequency A Hz to obtain S_3(n); windowing, calculating the autocorrelation coefficients, solving the line spectrum pair parameters with the Levinson-Durbin recursive algorithm, and quantizing the resulting line spectrum pair parameters with a three-stage vector quantization scheme;
(2) quantizing the pitch value calculated in step (3) of step 1: the integer interval containing the pitch values is linearly mapped onto [0, z], and the value of z is represented with m_1 bits;
(3) passing the voice data S_2(n) detected in step (2) of step 1 through a second-order inverse filter to obtain a prediction error signal r(n) free of formant influence, the coefficients of the second-order inverse filter being a_1, a_2 and 1; the gain parameter is expressed as the RMS of r(n), and quantization is completed in the logarithmic domain;
(4) quantizing the maximum values obtained from the correlation functions of the band-pass signal values after the frequency-domain division of step (4) of step 1 to m_2 bits;
(5) calculating the residual compensation gain: linear prediction coefficients are computed from the quantized LSF parameters to form a prediction error filter, and the input speech S_2(n) is filtered to obtain a residual signal of length 160 points;
(6) windowing the prediction residual with a 160-point Hamming window, zero-padding the windowed signal to 512 points, performing a 512-point complex FFT, and finding the Fourier transform values corresponding to the first x harmonics with a spectral peak detection algorithm;
(7) letting P be the quantized pitch: given that the initial position of the i-th harmonic is 512i/P, peak detection finds the maximum peak within 512/P frequency samples centered on the initial position of each harmonic, this width being truncated to an integer; the number of harmonics searched is limited to the smaller of x and P/4; the coefficients corresponding to the harmonics are then normalized, and the x-dimensional vector is quantized with an m_3 ∈ [0,48]-bit vector codebook to obtain m_3 ∈ [0,48] bits.
8. The voice data processing method of an NVOCPLUS high-speed wideband vocoder as claimed in claim 1, characterized in that the specific method for synthesizing speech from the voice quantization parameters in step 3 is as follows:
excitations are formed in several frequency bands, summed, and passed through a synthesis filter to obtain synthesized speech, which is then post-filtered to obtain the decoded synthesized speech data, where the z-transform transfer functions of the synthesis filter H(z) and the post-filter H_pf(z) are as follows:
H(z) = 1/A(z)
H_pf(z) = [A(z/γ)/A(z/β)]·(1 - μ·z^(-1))   (conventional adaptive post-filter form; the original renders this formula only as an image)
where A(z) = 1 - a·z^(-1), a is the filter coefficient, z in all the above equations is a complex variable with real and imaginary parts, z = e^(jw), γ = 0.56 and β = 0.75; μ is determined by the reflection coefficient, the value of μ depending on
[expression for μ shown only as an image in the original]
CN202011047245.1A 2020-09-29 2020-09-29 Voice data processing method of NVOCPLUS high-speed broadband vocoder Active CN112233686B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011047245.1A CN112233686B (en) 2020-09-29 2020-09-29 Voice data processing method of NVOCPLUS high-speed broadband vocoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011047245.1A CN112233686B (en) 2020-09-29 2020-09-29 Voice data processing method of NVOCPLUS high-speed broadband vocoder

Publications (2)

Publication Number Publication Date
CN112233686A CN112233686A (en) 2021-01-15
CN112233686B (en) 2022-10-14

Family

ID=74120236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011047245.1A Active CN112233686B (en) 2020-09-29 2020-09-29 Voice data processing method of NVOCPLUS high-speed broadband vocoder

Country Status (1)

Country Link
CN (1) CN112233686B (en)


Also Published As

Publication number Publication date
CN112233686A (en) 2021-01-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant