CA1201533A - Speech pattern processing utilizing speech pattern compression - Google Patents

Speech pattern processing utilizing speech pattern compression

Info

Publication number
CA1201533A
CA1201533A CA000449947A
Authority
CA
Canada
Prior art keywords
speech
signals
signal
representative
event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000449947A
Other languages
French (fr)
Inventor
Bishnu S. Atal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by American Telephone and Telegraph Co Inc filed Critical American Telephone and Telegraph Co Inc
Application granted
Publication of CA1201533A
Expired

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/0018 Speech coding using phonetic or linguistical decoding of the source; Reconstruction using text-to-speech synthesis
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

SPEECH PATTERN PROCESSING
UTILIZING SPEECH PATTERN COMPRESSION

A speech pattern is compressed to a degree not previously attainable by analyzing the pattern to generate a sequence of signals representative of its acoustic features at a first rate. Responsive to the acoustic feature signals, a sequence of speech event representative signals is generated. A sequence of coded signals corresponding to the speech pattern is formed, at a rate less than said first rate, responsive to the speech event representative signals.

Description

SPEECH PATTERN PROCESSING
UTILIZING SPEECH PATTERN COMPRESSION

Background of the Invention

My invention relates to speech processing and, particularly, to the compression of speech patterns and to the synthesis of speech patterns from such compressed patterns.
It is generally accepted that a speech signal requires a bandwidth of at least 4 kHz for reasonable intelligibility. In digital speech processing systems such as speech synthesizers, recognizers, or coders, the channel capacity needed for transmission or the memory required for storage of the digital elements of the full 4 kHz bandwidth waveform is very large. Many techniques have been devised to reduce the number of digital codes needed to represent a speech signal. Waveform coding such as Pulse Code Modulation (PCM), Differential Pulse Code Modulation (DPCM), Delta Modulation or adaptive predictive coding results in natural sounding, high quality speech at bit rates between 16 and 64 kbps. The speech quality obtained from waveform coders, however, degrades as the bit rate is reduced below 16 kbps.
An alternative speech coding technique disclosed in U. S. Patent 3,624,302 utilizes a small number, e.g., 12-16, of slowly varying parameters which may be processed to produce a low distortion replica of a speech pattern.
Such parameters, e.g., Linear Prediction Coefficient (LPC) or log area parameters, generated by linear prediction analysis can be spectrum limited to 50 Hz without significant band limiting distortion. Encoding of the LPC or log area parameters generally requires sampling at a rate of twice the bandwidth and quantizing each resulting frame of log area parameters. Each frame of a log area parameter can be quantized using 48 bits. Consequently, 12 log area parameters each having a 50 Hz bandwidth results in a total bit rate of 4800 bits/sec.
Further reduction of bandwidth decreases the bit rate, but the resulting increase in distortion interferes with the intelligibility of speech synthesized from the lower bandwidth parameters. It is well known that sounds in speech patterns do not occur at a uniform rate, and techniques have been devised to take into account such nonuniform occurrences. U. S. Patent 4,349,700 discloses arrangements that permit recognition of speech patterns having diverse sound patterns utilizing dynamic programming. U. S. Patent 4,038,503 discloses a technique for nonlinear warping of time intervals of speech patterns so that the sound features are represented in a more uniform manner. These arrangements, however, require storing and processing acoustic feature signals that are sampled at a rate corresponding to the most rapidly changing feature in the pattern. It is an object of the invention to provide improved speech representation and/or speech synthesis arrangements having reduced digital storage and processing requirements.
Summary of the Invention

Sounds or events in human speech are produced at an average rate that varies between 10 and 20 events per second. It has been observed that such speech events generally occur at nonuniformly spaced time intervals and that articulatory movements differ widely for various speech sounds. Consequently, a significant degree of compression may be achieved by transforming acoustic feature parameters into short speech event related units located at nonuniformly spaced time intervals. The coding of such speech event units results in higher efficiency without degradation of the accuracy of the pattern representation.

According to one aspect of the invention there is provided a method for compressing speech patterns comprising the steps of: analyzing a speech pattern to generate a set of signals representative of acoustic features of the speech pattern at a first rate; characterized by the steps of generating a sequence of signals each representative of a speech event of said speech pattern responsive to said set of acoustic feature signals; and forming a sequence of coded signals corresponding to said speech pattern at a rate less than said first rate responsive to said speech event representative signals.
According to another aspect of the invention there is provided a method for generating a speech pattern comprising the steps of: storing a prescribed set of speech element signals; combining said speech element signals to form a set of signals representative of the acoustic features of a speech pattern; and producing said speech pattern responsive to said set of acoustic feature signals; said prescribed speech element signals are formed by: analyzing a speech pattern to generate a set of acoustic feature representative signals at a first rate;
producing a sequence of signals representative of the features of successive speech events in said speech pattern responsive to said acoustic feature signals; and forming a sequence of coded signals corresponding to said speech event representative signal at a rate less than said first rate.
Description of the Drawing

FIG. 1 depicts a flowchart illustrating the general method of the invention;
FIG. 2 depicts a block diagram of a speech pattern coding circuit illustrative of the invention;
FIGS. 3-8 depict detailed flowcharts illustrating the operation of the circuit of FIG. 2;

FIG. 9 depicts a speech synthesizer illustrative of the invention;
FIG. 10 depicts a flow chart illustrating the operation of the circuit of FIG. 9;
FIG. 11 shows a waveform illustrating a speech event timing signal obtained in the circuit of FIG. 2; and
FIG. 12 shows waveforms illustrative of a speech pattern and the speech event feature signals associated therewith.

General Description

It is well known in the art to represent a speech pattern by a sequence of acoustic feature signals derived from a linear prediction or other spectral analysis. Log area parameter signals sampled at closely spaced time intervals have been used in speech synthesis to obtain efficient representation of a speech pattern. In accordance with the invention, log area parameters are transformed into a sequence of individual speech event feature signals $\phi_k(n)$ such that the log area parameters satisfy

$$y_i(n) = \sum_{k=1}^{m} a_{ik}\,\phi_k(n), \qquad 1 \le n \le N,\quad 1 \le i \le p \tag{1}$$

The speech event feature signals $\phi_k(n)$ are sequential and occur at the speech event rate of the pattern, which is substantially lower than the log area parameter frame rate. In equation (1), $p$ is the total number of log area parameters $y_i(n)$ determined by linear prediction analysis, $m$ corresponds to the number of speech events in the pattern, $n$ is the index of samples in the speech pattern at the sampling rate of the log area parameters, $\phi_k(n)$ is the $k$th speech event signal at sampling instant $n$, and $a_{ik}$ is a combining coefficient corresponding to the contribution of the $k$th speech event function to the $i$th log area parameter. Equation (1) may be expressed in matrix form as

$$Y = A\Phi \tag{2}$$

where $Y$ is a $p \times N$ matrix whose $(i,n)$ element is $y_i(n)$, $A$ is a $p \times m$ matrix whose $(i,k)$ element is $a_{ik}$, and $\Phi$ is an $m \times N$ matrix whose $(k,n)$ element is $\phi_k(n)$. Since each speech event $k$ occupies only a small segment of the speech pattern, the signal $\phi_k(n)$ representative thereof should be non-zero over only a small range of the sampling intervals of the total pattern. Each log area parameter $y_i(n)$ in equation (1) is a linear combination of the speech event functions $\phi_k(n)$, and the bandwidth of each $y_i(n)$ parameter is the maximum bandwidth of any one of the speech event functions $\phi_k(n)$. It is therefore readily seen that the direct coding of the $y_i(n)$ signals will take more bits than the coding of the $\phi_k(n)$ speech event signals and the combining coefficient signals $a_{ik}$ in equation (1).
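The economy argued here can be made concrete with a small numerical sketch of equation (2); the sizes, Gaussian event shapes, and significance threshold below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Illustrative sizes: p log area parameters, N frames, m speech events.
p, N, m = 12, 200, 8
rng = np.random.default_rng(0)

# Phi: m x N matrix of event signals phi_k(n); each row is non-zero
# over only a small range of frames, as the text requires.
n = np.arange(N)
centers = np.linspace(20, 180, m)
Phi = np.exp(-0.5 * ((n - centers[:, None]) / 6.0) ** 2)   # m x N

# A: p x m matrix of combining coefficients a_ik.
A = rng.normal(size=(p, m))

# Equation (2): Y = A Phi gives the p x N log area trajectories.
Y = A @ Phi

# Direct coding of Y costs p*N samples; coding A plus the compact
# events costs p*m coefficients plus the few significant samples.
significant = int((Phi > 1e-3).sum())
print(Y.size, A.size + significant)
```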
FIG. 1 shows a flow chart illustrative of the general method of the invention. In accordance with the invention, a speech pattern is analyzed to form a sequence of signals representative of log area parameter acoustic feature signals. It is to be understood, however, that LPC, Partial Autocorrelation (PARCOR) or other speech features (see, e.g., U.S. Patent 3,624,302) may be used instead of log area parameters. The feature signals are then converted into a set of speech event representative signals that are encoded at a lower bit rate for transmission or storage.
With reference to FIG. 1, box 101 is entered in which an electrical signal corresponding to a speech pattern is low pass filtered to remove unwanted higher frequency noise and speech components and the filtered signal is sampled at twice the low pass filtering cutoff frequency. The speech pattern samples are then converted into a sequence of digitally coded signals corresponding to the pattern as per box 110. Since the storage required for the sample signals is too large for most practical applications, they are utilized to generate log area parameter signals as per box 120 by linear prediction techniques well known in the art. The log area parameter signals Yi(n) are produced at a constant sampling rate high enough to accurately represent the fastest expected event in the speech pattern. Typically, a sampling interval between two and five milliseconds is selected.
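A minimal front-end sketch of boxes 101 and 110, assuming a 16 kHz input waveform and an 8th-order Butterworth low-pass design; the patent itself fixes only the cutoff and sampled rate given in the detailed description:

```python
import numpy as np
from scipy.signal import butter, lfilter

def preprocess(speech, fs_in, cutoff=3500.0):
    """Low-pass filter a waveform and sample it near twice the cutoff
    frequency (boxes 101 and 110).  The filter order and design are
    illustrative assumptions."""
    b, a = butter(8, cutoff / (fs_in / 2.0))     # 8th-order Butterworth
    filtered = lfilter(b, a, speech)
    step = int(round(fs_in / (2.0 * cutoff)))    # decimate toward 2*cutoff
    return filtered[::step]

fs_in = 16000
t = np.arange(fs_in) / fs_in                     # one second of signal
x = np.sin(2 * np.pi * 440.0 * t)
samples = preprocess(x, fs_in)                   # 8 kHz sample sequence
print(len(samples))
```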
After the log area parameter signals are stored, the times of occurrence of the successive speech events in the pattern are detected and signals representative of the event timing are generated and stored as per box 130. This is done by partitioning the pattern into prescribed smaller segments, e.g., 0.25 second intervals. For each successive interval having a beginning frame $n_b$ and an ending frame $n_e$, a matrix of log area parameter signals is formed corresponding to the log area parameters $y_i(n)$ of the segment. The redundancy in the matrix is reduced by factoring out the first four principal components so that

$$y_i(n) \approx \sum_{m=1}^{4} c_{im}\,u_m(n) \tag{3}$$

and

$$u_m(n) = \sum_{i=1}^{p} c_{im}\,y_i(n) \tag{4}$$

The first four principal components may be obtained by methods well known in the art such as described in the article "An Efficient Linear Prediction Vocoder" by M. R. Sambur appearing in the Bell System Technical Journal, Vol. 54, No. 10, pp. 1693-1723, December 1975. The resulting $u_m(n)$ functions may be linearly combined to define the desired speech event signals as

$$\phi_k(n) = \sum_{m=1}^{4} b_{km}\,u_m(n) \tag{5}$$

by selecting coefficients $b_{km}$ such that each $\phi_k(n)$ is most compact in time. In this way, the speech pattern is represented by a sequence of successive compact (minimum spreading) speech event feature signals $\phi_k(n)$ each of which can be efficiently coded. In order to obtain the shapes and locations of the speech event signals, a distance measure

$$\sigma(L) = \left[\sum_n (n-L)^2\phi^2(n) \Big/ \sum_n \phi^2(n)\right]^{1/2} \tag{6}$$

is minimized to choose the optimum $\phi(n)$, and its location is obtained from a speech event timing signal

$$v(L) = \sum_n (n-L)\,\phi^2(n) \Big/ \sum_n \phi^2(n) \tag{7}$$

In terms of equations (5), (6), and (7), a speech event signal $\phi_k(n)$ with minimum spreading is centered at each negative-going zero crossing of $v(L)$.
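A minimal sketch of equations (6) and (7); the single synthetic event φ(n) and the helper names are assumptions for illustration:

```python
import numpy as np

def spread(phi, L):
    """sigma(L) of equation (6): rms time spread of phi about frame L."""
    n = np.arange(len(phi))
    w = phi ** 2
    return np.sqrt((((n - L) ** 2) * w).sum() / w.sum())

def timing_signal(phi):
    """v(L) of equation (7) evaluated at every candidate location L."""
    n = np.arange(len(phi))
    w = phi ** 2
    centroid = (n * w).sum() / w.sum()
    return centroid - n        # sum((n-L)*w)/sum(w) reduces to this

n = np.arange(120)
phi = np.exp(-0.5 * ((n - 50) / 5.0) ** 2)   # compact event near frame 50

v = timing_signal(phi)
# A speech event is centered at each negative-going zero crossing.
locs = np.flatnonzero((v[:-1] > 0) & (v[1:] <= 0)) + 1
print(locs, round(spread(phi, locs[0]), 2))
```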
Subsequent to the generation of the v(L) signals in box 130, box 140 is entered and the speech event signals $\phi_k(n)$ are accurately determined using the process of box 130 with the speech event occurrence signals from the negative-going zero crossings of v(L). Having generated the sequence of speech event representative signals, the combining coefficients $a_{ik}$ in equations (1) and (2) may be generated by minimizing the mean-squared error

$$E = \sum_n \left[y_i(n) - \sum_{k=1}^{M} a_{ik}\,\phi_k(n)\right]^2 \tag{8}$$

where $M$ is the total number of speech events within the range of index $n$ over which the sum is performed. The partial derivatives of $E$ with respect to the coefficients $a_{ik}$ are set equal to zero and the coefficients $a_{ik}$ are obtained from the set of simultaneous linear equations

$$\sum_{k=1}^{M} a_{ik}\sum_n \phi_k(n)\phi_r(n) = \sum_n y_i(n)\phi_r(n), \qquad 1 \le r \le M \tag{9}$$

Detailed Description

FIG. 2 shows a speech coding arrangement that includes electroacoustic transducer 201, filter and sampler circuit 203, analog to digital converter 205, and speech sample store 210 which cooperate to convert a speech pattern into a stored sequence of digital codes representative of the pattern. Central processor 275 may comprise a microprocessor such as the Motorola type MC68000 controlled by permanently stored instructions in read only memories (ROM) 215, 220, 225, 230 and 235. Processor 275 is adapted to direct the operations of arithmetic processor 280, and stores 210, 240, 245, 250, 255 and 260 so that the digital codes from store 210 are compressed into a compact set of speech event feature signals. The speech event feature signals are then supplied to utilization device 285 via input output interface 265. The utilization device may be a digital communication facility or a storage arrangement for delayed transmission or a store associated with a speech synthesizer. The Motorola MC68000 integrated circuit is described in the publication MC68000 16-Bit Microprocessor User's Manual, second edition, Motorola, Inc., 1980, and arithmetic processor 280 may comprise the TRW type MPY-16HJ integrated circuit.
Referring to FIG. 2, a speech pattern is applied to electroacoustic transducer 201 and the electrical signal therefrom is supplied to low pass filter and sampler circuit 203 which is operative to limit the upper end of the signal bandwidth to 3.5 kHz and to sample the filtered signal at an 8 kHz rate. Analog to digital converter 205 converts the sampled signal from filter and sampler 203 into a sequence of digital codes, each representative of the magnitude of a signal sample. The resulting digital codes are sequentially stored in speech sample store 210.
Subsequent to the storage of the sampled speech pattern codes in store 210, central processor 275 causes the instructions stored in log area parameter program store 215 to be transferred to the random access memory associated with the central processor. The flow chart of FIG. 3 illustrates the sequence of operations performed by the controller responsive to the instructions from store 215.
Referring to FIG. 3, box 305 is initially entered and frame count index n is reset to 1. The speech samples of the current frame are then transferred from store 210 to arithmetic processor 280 via central processor 275 as per box 310. The occurrence of an end of speech sample signal is checked in decision box 315. Until the detection of the end of speech pattern signal, control is passed to box 325 and an LPC analysis is performed for the frame in processors 275 and 280. The LPC parameter signals of the current frame are then converted to log area parameter signals $y_i(n)$ as per box 330 and the log area parameter signals are stored in log area parameter store 240 (box 335). The frame count is incremented by one in box 345 and the speech samples of the next frame are read (box 310). When the end of speech pattern signal occurs, control is passed to box 320 and a signal corresponding to the number of frames in the pattern is stored in processor 275.
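A per-frame sketch of boxes 325-330: autocorrelation LPC analysis by the Levinson-Durbin recursion, followed by conversion of the PARCOR (reflection) coefficients to log area parameters. The Hamming window, the order p = 12, and the log-area sign convention are assumptions, as conventions differ in the literature:

```python
import numpy as np

def log_area_parameters(frame, p=12):
    """One frame of autocorrelation LPC analysis followed by PARCOR to
    log area conversion (boxes 325-330 of FIG. 3).  A sketch only."""
    x = frame * np.hamming(len(frame))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + p]  # lags 0..p
    a = np.zeros(p + 1); a[0] = 1.0
    e = r[0]
    parcor = np.zeros(p)
    for i in range(1, p + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        ki = -acc / e                       # reflection (PARCOR) coefficient
        parcor[i - 1] = ki
        prev = a.copy()
        a[1:i + 1] = prev[1:i + 1] + ki * prev[i - 1::-1]
        e *= 1.0 - ki * ki                  # residual prediction error
    # Log area parameters from the area ratio (1 - k_i)/(1 + k_i).
    return np.log((1.0 - parcor) / (1.0 + parcor))

# 20 ms frame at 8 kHz, advanced every 2-5 ms as the text suggests.
frame = np.random.default_rng(1).normal(size=160)
print(log_area_parameters(frame).shape)
```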
Central processor 275 is operative after the log area parameter storing operation is completed to transfer the stored instructions of ROM 220 into its random access memory. The instruction codes from store 220 correspond to the operations illustrated in the flow chart of FIGS. 4 and 5. These instruction codes are effective to generate a signal v(L) from which the occurrences of the speech events in the speech pattern may be detected and located.
Referring to FIG. 4, the frame count of the log area parameters is initially reset in processor 275 as per box 403 and the log area parameters $y_i(n)$ for an initial time interval $n_1$ to $n_2$ of the speech pattern are transferred from log area parameter store 240 to processor 275 (box 410). After determining whether the end of the speech pattern has been reached in decision box 415, box 420 is entered and the redundancy of the log area parameter signals is removed by factoring out the first four principal components $u_i(n)$, $i = 1, \ldots, 4$, as aforementioned.
The log area parameters of the current time interval are then represented by

$$y_i(n) \approx \sum_{m=1}^{4} c_{im}\,u_m(n) \tag{10}$$

from which a set of signals

$$u_m(n) = \sum_{i=1}^{p} c_{im}\,y_i(n) \tag{11}$$

are to be obtained. The $u_m(n)$ signals over the interval may be combined through use of parameters $b_{km}$ in box 425 so that a set of signals

$$\phi_k(n) = \sum_{m=1}^{4} b_{km}\,u_m(n) \tag{12}$$

are produced such that $\phi_k$ is most compact over the range $n_1$ to $n_2$. This is accomplished through use of the $\sigma(L)$ function of equation (6). A signal $v(L)$ representative of the speech event timing of the speech pattern is then formed in accordance with equation (7) in box 430 and the $v(L)$ signal is stored in timing parameter store 245. Frame counter n is incremented by a constant value, e.g., 5, on the basis of how close adjacent speech event signals $\phi_k(n)$ are expected to occur (box 435) and box 410 is reentered to generate the $\phi_k(n)$ and $v(L)$ signals for the next time interval of the speech pattern.
When the end of the speech pattern is detected in decision box 415, the frame count of the speech pattern is stored (box 440) and the generation of the speech event timing parameter signal for the speech pattern is completed. FIG. 11 illustrates the speech event timing parameter signal for an exemplary utterance. Each negative-going zero crossing in FIG. 11 corresponds to the centroid of a speech event feature signal $\phi_k(n)$.
Referring to FIG. 5, box 501 is entered in which speech event index I is reset to zero and frame index n is again reset to one. After indices I and n are initialized,
the successive frames of speech event timing parameter signal are read from store 245 (box 505) and zero crossings therein are detected in processor 275 as per box 510.
Whenever a zero crossing is found, the speech event index I
is incremented (box 515) and the speech event location frame is stored in speech event location store 250 (box 520). The frame index n is then incremented in box 525 and a check is made for the end of the speech pattern frames in box 530. Until the end of speech pattern frames signal is detected, box 505 is reentered from box 530 after each iteration to detect the subsequent speech event location frames of the pattern.
Upon detection of the end of speech pattern signal in box 530, central processor 275 addresses speech event feature signal generation program store 225 and causes its contents to be transferred to the processor.
Central processor 275 and arithmetic processor 280 are thereby adapted to form a sequence of speech event feature signals $\phi_k(n)$ responsive to the log area parameter signals in store 240 and the speech event location signals in store 250. The speech event feature signal generation instructions are illustrated in the flow chart of FIG. 6.
Initially, location index I is set to one as per box 601 and the locations of the speech events in store 250 are transferred to central processor 275 (box 605). As per box 610, the limit frames for a prescribed number of speech event locations, e.g., 5, are determined. The log area parameters for the speech pattern interval defined by the limit frames are read from store 240 and are placed in a section of the memory of central processor 275 (box 615). The redundancy in the log area parameters is removed by factoring out the number of principal components therein corresponding to the prescribed number of events (box 620). Immediately thereafter, the speech event feature signal $\phi_L(n)$ for the current location L is generated.
The minimization of equation (6) to determine $\phi(n)$ is accomplished by forming the derivative

$$\frac{\partial\,\sigma^2(L)}{\partial b_r} = \frac{\partial}{\partial b_r}\left[\sum_{n=n_1}^{n_2}(n-L)^2\phi^2(n) \Big/ \sum_{n=n_1}^{n_2}\phi^2(n)\right] \tag{13}$$

where

$$\phi(n) = \sum_{i=1}^{m} b_i\,u_i(n) \tag{14}$$

$m$ is the prescribed number of speech events, and $r$ can be either $1, 2, \ldots,$ or $m$. The derivative of equation (13) is set equal to zero to determine the minimum, and

$$\sum_{n=n_1}^{n_2}(n-L)^2\phi(n)\frac{\partial\phi(n)}{\partial b_r} \Big/ \sum_{n=n_1}^{n_2}(n-L)^2\phi^2(n) = \sum_{n=n_1}^{n_2}\phi(n)\frac{\partial\phi(n)}{\partial b_r} \Big/ \sum_{n=n_1}^{n_2}\phi^2(n) \tag{15}$$

is obtained. From equation (14),

$$\frac{\partial\phi(n)}{\partial b_r} = u_r(n) \tag{16}$$

so that equation (15) can be changed to

$$\sum_{n=n_1}^{n_2}(n-L)^2\phi(n)u_r(n) \Big/ \sum_{n=n_1}^{n_2}(n-L)^2\phi^2(n) = \sum_{n=n_1}^{n_2}\phi(n)u_r(n) \Big/ \sum_{n=n_1}^{n_2}\phi^2(n) \tag{17}$$

$\phi(n)$ in equation (17) can be replaced by the right side of equation (14). Thus,

$$\sum_{n=n_1}^{n_2}(n-L)^2\sum_{i=1}^{m} b_i\,u_i(n)u_r(n) = \lambda\sum_{n=n_1}^{n_2}\sum_{i=1}^{m} b_i\,u_i(n)u_r(n) \tag{18}$$

where

$$\lambda = \sum_{n=n_1}^{n_2}(n-L)^2\phi^2(n) \Big/ \sum_{n=n_1}^{n_2}\phi^2(n) = \text{min. value of } \sigma^2(L) \tag{19}$$

Rearranging equation (18) yields

$$\sum_{i=1}^{m} b_i\sum_{n=n_1}^{n_2}(n-L)^2 u_i(n)u_r(n) = \lambda\sum_{i=1}^{m} b_i\sum_{n=n_1}^{n_2}u_i(n)u_r(n) \tag{20}$$

Since $u_i(n)$ is a principal component of matrix $Y$,

$$\sum_{n=n_1}^{n_2}u_i(n)u_r(n) = \delta_{ir} \tag{21}$$

equation (20) can be simplified to

$$\sum_{i=1}^{m} b_i R_{ir} = \lambda\,b_r \tag{22}$$

where

$$R_{ir} = \sum_{n=n_1}^{n_2}(n-L)^2 u_i(n)u_r(n) \tag{23}$$

Equation (22) can be expressed in matrix notation as

$$Rb = \lambda b \tag{24}$$

where

$$\lambda = \sigma^2(L) \tag{25}$$

Equation (24) has exactly $m$ solutions, and the solution which minimizes $\sigma^2(L)$ is the one for which $\lambda$ is minimum. The coefficients $b_1, b_2, \ldots, b_m$ for which $\lambda = \sigma^2(L)$ attains its minimum value result in the optimum speech event feature signal $\phi_L(n)$.

In FIG. 6, the speech event feature signal $\phi_L(n)$ is generated in box 625 and is stored in store 255. Until the end of the speech pattern is detected in decision box 635, the loop including boxes 605, 610, 615, 620, 625 and 630 is iterated so that the complete sequence of speech events for the speech pattern is formed.
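Equations (22)-(25) form a standard symmetric eigenvalue problem. The sketch below assumes orthonormal principal components, per equation (21); the function and variable names are illustrative:

```python
import numpy as np

def compact_event(U, L, n1=0):
    """Solve R b = lambda b (equations 22-24) for an m x Nseg matrix U
    of orthonormal principal components u_m(n) and a candidate event
    location L, returning the phi(n) of minimum time spread."""
    m, Nseg = U.shape
    n = np.arange(n1, n1 + Nseg)
    R = (U * (n - L) ** 2) @ U.T          # R_ir of equation (23)
    lam, B = np.linalg.eigh(R)            # eigenvalues in ascending order
    return B[:, 0] @ U                    # b of minimum lambda = sigma^2(L)

# Toy orthonormal components from a QR factorization.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(100, 4)))
phi = compact_event(Q.T, L=50)
print(phi.shape)                          # (100,)
```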
FIG. 12 shows waveforms illustrating a speech pattern and the speech event feature signals generated therefrom in accordance with the invention. Waveform 1201 corresponds to a portion of a speech pattern and waveforms 1205-1 through 1205-n correspond to the sequence of speech event feature signals $\phi(n)$ obtained from the waveform in the circuit of FIG. 2. Each feature signal is representative of the acoustic characteristics of a speech event of the pattern of waveform 1201. The speech event feature signals may be combined with coefficients $a_{ik}$ of equation (1) to re-form log area parameter signals that are representative of the acoustic features of the speech pattern.
Upon completion of the operations shown in FIG. 6, the sequence of speech event feature signals for the speech pattern is stored in store 255. Each speech event feature signal $\phi_I(n)$ is encoded and transferred to utilization device 285 as illustrated in the flow chart of FIG. 7. Central processor 275 is adapted to receive the speech event signal encoding program instruction set stored in ROM 235.
Referring to FIG. 7, the speech event index I is reset to one as per box 701 and the speech event feature signal $\phi_I(n)$ is read from store 255. The sampling rate $R_I$ for the current speech event feature signal is selected in box 710 by one of the many methods well known in the art.
For example, the instruction codes perform a Fourier analysis and generate a signal corresponding to the upper band limit of the feature signal from which a sampling rate signal RI is determined. As is well known in the art, the sampling rate need only be sufficient to adequately represent the feature signal. Thus, a slowly changing feature signal may utilize a lower sampling rate than a rapidly changing feature signal and the sampling rate for each feature signal may be different.
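A sketch of the box-710 rate selection just described; the 99.5% spectral-energy criterion standing in for the band-limit measurement is an assumption:

```python
import numpy as np

def select_sampling_rate(phi, fs, energy_frac=0.995):
    """Estimate the upper band limit of one speech event feature signal
    from its Fourier spectrum and return a sufficient sampling rate R_I
    (box 710 of FIG. 7).  The energy criterion is illustrative; the
    patent only requires the rate to suffice for the band limit."""
    N = len(phi)
    spec = np.abs(np.fft.rfft(phi)) ** 2
    k = int(np.searchsorted(np.cumsum(spec) / spec.sum(), energy_frac))
    band = k * fs / N                     # band limit in Hz
    return max(2.0 * band, fs / N)        # Nyquist rate, at least 1 sample

fs = 400.0                                # 2.5 ms log-area frame rate
n = np.arange(100)
slow = np.exp(-0.5 * ((n - 50) / 15.0) ** 2)
fast = np.exp(-0.5 * ((n - 50) / 3.0) ** 2)
# The slowly varying event needs far fewer samples than the rapid one.
print(select_sampling_rate(slow, fs), select_sampling_rate(fast, fs))
```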

Once a sampling rate signal has been determined for speech event feature signal $\phi_I(n)$, it is encoded at rate $R_I$ as per box 715. Any of the well-known encoding schemes can be used. For example, each sample may be converted into a PCM, ADPCM or delta modulated signal and concatenated with a signal indicative of the feature signal location in the speech pattern and a signal representative of the sampling rate $R_I$. The coded speech event feature signal is then transferred to utilization device 285 via input output interface 265. Speech event index I is then incremented (box 720) and decision box 725 is entered to determine if the last speech event signal has been coded.
The loop including boxes 705 through 725 is iterated until the last speech event signal has been encoded (I > I_F), at which time the coding of the speech event feature signals is completed.
The speech event feature signals must be combined in accordance with equation (1) to form replicas of the log area feature signals therein. Accordingly, the combining coefficients for the speech pattern are generated and encoded as shown in the flow chart of FIG. 8. After the speech event feature signal encoding, central processor 275 is conditioned to read the contents of ROM 230. The instruction codes permanently stored in the ROM control the formation and encoding of the combining coefficients.
The combining coefficients are produced for the entire speech pattern by matrix processing in central processor 275 and arithmetic processor 280. Referring to FIG. 8, the log area parameters of the speech pattern are transferred to processor 275 as per box 801. A speech event feature signal coefficient matrix $G$ is generated (box 805) in accordance with

$$g_{kr} = \sum_n \phi_k(n)\phi_r(n) \tag{26}$$

and a $Y$-$\Phi$ correlation matrix $C$ is formed (box 810) in accordance with

$$c_{ir} = \sum_n y_i(n)\phi_r(n) \tag{27}$$

The combining coefficient matrix is then produced as per box 815 according to the relationship

$$A = CG^{-1} \tag{28}$$

The elements of matrix $A$ are the combining coefficients $a_{ik}$ of equation (1). These combining coefficients are encoded, as is well known in the art, in box 820 and the encoded coefficients are transferred to utilization device 285.
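Equations (26)-(28) are the normal equations of the least-squares fit of equations (8) and (9). A sketch using a linear solve rather than an explicit inverse; the round-trip test data are synthetic:

```python
import numpy as np

def combining_coefficients(Y, Phi):
    """Solve for the combining coefficients a_ik of equation (1) from
    G = Phi Phi^T (eq. 26), C = Y Phi^T (eq. 27), and A G = C (eq. 28).
    Array layouts follow equation (2); a sketch, not the patent code."""
    G = Phi @ Phi.T                  # g_kr = sum_n phi_k(n) phi_r(n)
    C = Y @ Phi.T                    # c_ir = sum_n y_i(n) phi_r(n)
    return np.linalg.solve(G, C.T).T # A = C G^{-1}, G symmetric

# Round trip: recover A exactly from Y = A Phi.
rng = np.random.default_rng(3)
Phi = rng.normal(size=(8, 200))
A_true = rng.normal(size=(12, 8))
Y = A_true @ Phi
print(np.allclose(combining_coefficients(Y, Phi), A_true))
```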
In accordance with the invention, the linear predictive parameters sampled at a rate corresponding to the most rapid change therein are converted into a sequence of speech event feature signals that are encoded at the much lower speech event occurrence rate and the speech pattern is further compressed to reduce transmission and storage requirements without adversely affecting intelligibility. Utilization device 285 may be a communication facility connected to one of the many speech synthesizer circuits using an LPC all pole filter known in the art.
The circuit of FIG. 2 is adapted to compress a spoken message into a sequence of coded speech event feature signals which are transmitted via utilization device 285 to a synthesizer. In the synthesizer, the speech event feature signals and the combining coefficients of the message are decoded and recombined to form the message log area parameter signals. These log area parameter signals are then utilized to produce a replica of the original message.
FIG. 9 depicts a block diagram of a speech synthesizer circuit illustrative of the invention and FIG. 10 shows a flow chart illustrating its operation.
Store 915 of FIG. 9 is adapted to store the successive coded speech event feature signals and combining signals received from utilization device 285 of FIG. 2 via line 901 and interface circuit 904. Store 920 receives the sequence of excitation signals required for synthesis via line 903.
The excitation signals may comprise a succession of pitch period and voiced/unvoiced signals generated responsive to the voice message by methods well known in the art.
Microprocessor 910 is adapted to control the operation of the synthesizer and may be the aforementioned Motorola type MC68000 integrated circuit. LPC feature signal store 925 is utilized to store the successive log area parameter signals of the spoken message which are formed from the speech event feature signals and combining signals of store 915. Formation of a replica of the spoken message is accomplished in LPC synthesizer 930 responsive to the LPC
feature signals from store 925 and the excitation signals from store 920 under control of microprocessor 910.
The synthesizer operation is directed by microprocessor 910 under control of permanently stored instruction codes resident in a read only memory associated therewith. The operation of the synthesizer is described in the flow chart of FIG. 10. Referring to FIG. 10, the coded speech event feature signals, the corresponding combining signals, and the excitation signals of the spoken message are received by interface 904 and are transferred to speech event feature signal and combining coefficient signal store 915 and to excitation signal store 920 as per box 1010. The log area parameter signal index I is then reset to one in processor 910 (box 1020) so that the reconstruction of the first log area feature signal $y_1(n)$ is initiated.
The formation of the log area signal requires combining the speech event feature signals with the combining coefficients of index I in accordance with equation 1. Speech event feature signal location counter L
is reset to one by processor 910 as per box 1025 and the current speech event feature signal samples are read from store 915 (box 1030). The signal sample sequence is filtered to smooth the speech event feature signal as per box 1035 and the current log area parameter signal is partially formed in box 1040. Speech event location counter L is incremented to address the next speech event feature signal in store 915 (box 1045) and the occurrence of the last feature signal is tested in decision box 1050.
Until the last speech event feature signal has been processed, the loop including boxes 1030 through 1050 is iterated so that the current log area parameter signal is generated and stored in LPC feature signal store 925 under control of processor 910.
Upon storage of a log area feature signal in store 925, box 1055 is entered from box 1050 and the log area index signal I is incremented (box 1055) to initiate the formation of the next log area parameter signal. The loop from box 1030 through box 1050 is reentered via decision box 1060. After the last log area parameter signal is stored, processor 910 causes a replica of the spoken message to be formed in LPC synthesizer 930.
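The loop of boxes 1025 through 1060 amounts to evaluating equation (1) with each decoded event placed at its stored location. A sketch that omits the box-1035 smoothing filter; the (location, samples) event format is an assumed decoded representation:

```python
import numpy as np

def rebuild_log_area(A, events, N):
    """Synthesizer-side reconstruction of the log area parameter
    trajectories per equation (1): each decoded speech event feature
    signal is weighted by its column of combining coefficients and
    accumulated at its stored frame location."""
    p, m = A.shape
    Y = np.zeros((p, N))
    for k, (loc, phi) in enumerate(events):
        span = slice(loc, loc + len(phi))        # frames covered by event k
        Y[:, span] += np.outer(A[:, k], phi)     # a_ik * phi_k(n)
    return Y

# Two toy decoded events feeding a 12-parameter, 100-frame trajectory.
rng = np.random.default_rng(4)
A = rng.normal(size=(12, 2))
events = [(10, np.hanning(30)), (55, np.hanning(40))]
Y = rebuild_log_area(A, events, N=100)
print(Y.shape)                                   # (12, 100)
```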
The synthesizer circuit of FIG. 9 may be readily modified to store the speech event feature signal sequences corresponding to a plurality of spoken messages and to selectively generate replicas of these messages by techniques well known in the art. For such an arrangement, the speech event feature signal generating circuit of FIG. 2 may receive a sequence of predetermined spoken messages and utilization device 285 may comprise an arrangement to permanently store the speech event feature signals and corresponding combining coefficients for the messages and to generate a read only memory containing said spoken message speech event and combining signals. The read only memory containing the coded speech event and combining signals can be inserted as store 915 in the synthesizer circuit of FIG. 9.

Claims (20)

Claims
1. A method for compressing speech patterns comprising the steps of:
analyzing a speech pattern to generate a set of signals representative of acoustic features of the speech pattern at a first rate;
CHARACTERIZED BY
the steps of generating a sequence of signals each representative of a speech event of said speech pattern responsive to said set of acoustic feature signals; and forming a sequence of coded signals corresponding to said speech pattern at a rate less than said first rate responsive to said speech event representative signals.
2. A method for compressing speech patterns according to claim 1 wherein each speech event representative signal comprises a prescribed linear combination of said acoustic feature signals.
3. A method for compressing speech patterns according to claim 2 further comprising the step of generating a set of speech event representative signal combining coefficient signals jointly responsive to said speech event representative signals and said acoustic feature signals; and said coded signal forming step further comprises producing a set of digitally coded signals corresponding to said combining coefficient signals.
4. A method for compressing speech patterns according to claim 3 wherein the said generation of the sequence of speech event signals comprises:
producing a sequence of signals representative of the times of occurrence of speech events in said speech pattern responsive to said acoustic feature signals and generating a sequence of speech event feature signals jointly responsive to said acoustic feature signals and said speech event time of occurrence signals.
5. A method for compressing speech patterns according to claim 4 wherein each speech event signal is a controlled time spreading linear combination of said acoustic feature signals having its centroid at the time of occurrence of the corresponding speech event.
6. A method for compressing speech patterns according to claim 5 wherein said speech event time of occurrence signal generation comprises:
producing a signal representative of speech event timing in said speech pattern responsive to said acoustic feature signals and detecting the times of occurrence of negative going zero crossings in said speech event timing signal.
7. A method for compressing speech patterns according to claim 6 wherein said coded signal forming step comprises:
generating a signal representative of the bandwidth of each speech representative signal;
sampling said speech event feature signal at a rate corresponding to its bandwidth representative signal;
coding each sampled speech event feature signal;
and producing a sequence of encoded speech event coded signals at a rate corresponding to the rate of occurrence of speech events in said speech pattern.
8. A method for compressing speech patterns according to claim 1, wherein:
said acoustic feature signals are linear predictive parameter signals representative of the speech pattern.
9. A method for compressing speech patterns according to claim 8 wherein:
said linear predictive parameter signals are log area parameter signals representative of the speech pattern.
10. A method for compressing speech patterns according to claim 8 wherein:
said linear predictive parameter signals are PARCOR signals representative of the speech pattern.
11. A method for generating a speech pattern comprising the steps of:
storing a prescribed set of speech element signals; combining said speech element signals to form a set of signals representative of the acoustic features of a speech pattern;
and producing said speech pattern responsive to said set of acoustic feature signals;
said prescribed speech element signals are formed by:
analyzing a speech pattern to generate a set of acoustic feature representative signals at a first rate;
producing a sequence of signals representative of the features of successive speech events in said speech pattern responsive to said acoustic feature signals; and forming a sequence of coded signals corresponding to said speech event representative signal at a rate less than said first rate.
12. A method for generating a speech pattern according to claim 11 wherein each speech event representative signal comprises a prescribed linear combination of said acoustic feature signals.
13. A method for generating a speech pattern according to claim 12 further comprising the step of generating a set of speech event feature signal combining coefficient signals jointly responsive to said speech event feature signals and said acoustic feature signals; and said coded signal forming step further comprises producing a set of digitally coded signals corresponding to said combining coefficient signals.
14. A method for generating a speech pattern according to claim 13 wherein the said generation of successive speech event signals comprises:
producing a sequence of signals representative of the times of occurrence of speech events in said speech pattern responsive to said acoustic feature signals; and generating a sequence of speech event feature signals jointly responsive to said acoustic feature signals and said speech event time of occurrence signals.
15. A method for generating a speech pattern according to claim 14 wherein each speech event signal is a controlled time spreading linear combination of said acoustic feature signals having its centroid at the time of occurrence of the corresponding speech event.
16. A method for generating a speech pattern according to claim 15 wherein said speech event time of occurrence signal generation comprises:
producing a signal representative of speech event timing in said speech pattern responsive to said acoustic feature signals; and detecting the times of occurrence of negative going zero crossings in said speech event timing signal.
17. A method for generating a speech pattern according to claim 16 wherein said coded signal forming step comprises:
generating a signal representative of the bandwidth of each speech representative signal;
sampling said speech event feature signal at a rate corresponding to its bandwidth representative signal;
coding each sampled speech event feature signal; and producing a sequence of encoded speech event coded signals at a rate corresponding to the rate of occurrence of speech events in said speech pattern.
18. A method for generating a speech pattern according to claim 11, wherein:
said acoustic feature signals are linear predictive parameter signals representative of the speech pattern.
19. A method for generating a speech pattern according to claim 18 wherein:
said linear predictive parameter signals are log area parameter signals representative of the speech pattern.
20. A method for generating a speech pattern according to claim 18 wherein:
said linear predictive parameter signals are PARCOR signals representative of the speech pattern.
CA000449947A 1983-04-12 1984-03-19 Speech pattern processing utilizing speech pattern compression Expired CA1201533A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US48423183A 1983-04-12 1983-04-12
US484,231 1983-04-12

Publications (1)

Publication Number Publication Date
CA1201533A true CA1201533A (en) 1986-03-04

Family

ID=23923295

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000449947A Expired CA1201533A (en) 1983-04-12 1984-03-19 Speech pattern processing utilizing speech pattern compression

Country Status (5)

Country Link
EP (1) EP0138954B1 (en)
JP (1) JP2648138B2 (en)
CA (1) CA1201533A (en)
DE (1) DE3474873D1 (en)
WO (1) WO1984004194A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL8503304A (en) * 1985-11-29 1987-06-16 Philips Nv METHOD AND APPARATUS FOR SEGMENTING AN ELECTRIC SIGNAL FROM AN ACOUSTIC SIGNAL, FOR EXAMPLE, A VOICE SIGNAL.

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3598921A (en) * 1969-04-04 1971-08-10 Nasa Method and apparatus for data compression by a decreasing slope threshold test
US3715512A (en) * 1971-12-20 1973-02-06 Bell Telephone Labor Inc Adaptive predictive speech signal coding system
JPS595916B2 (en) * 1975-02-13 1984-02-07 日本電気株式会社 Speech splitting/synthesizing device
JPS5326761A (en) * 1976-08-26 1978-03-13 Babcock Hitachi Kk Injecting device for reducing agent for nox
US4280192A (en) * 1977-01-07 1981-07-21 Moll Edward W Minimum space digital storage of analog information
FR2412987A1 (en) * 1977-12-23 1979-07-20 Ibm France PROCESS FOR COMPRESSION OF DATA RELATING TO THE VOICE SIGNAL AND DEVICE IMPLEMENTING THIS PROCEDURE

Also Published As

Publication number Publication date
EP0138954B1 (en) 1988-10-26
WO1984004194A1 (en) 1984-10-25
EP0138954A1 (en) 1985-05-02
EP0138954A4 (en) 1985-11-07
JP2648138B2 (en) 1997-08-27
DE3474873D1 (en) 1988-12-01
JPS60501076A (en) 1985-07-11

Similar Documents

Publication Publication Date Title
KR100427753B1 (en) Method and apparatus for reproducing voice signal, method and apparatus for voice decoding, method and apparatus for voice synthesis and portable wireless terminal apparatus
US4868867A (en) Vector excitation speech or audio coder for transmission or storage
CA1222568A (en) Multipulse lpc speech processing arrangement
US4852179A (en) Variable frame rate, fixed bit rate vocoding method
US5018200A (en) Communication system capable of improving a speech quality by classifying speech signals
US4704730A (en) Multi-state speech encoder and decoder
US6006174A (en) Multiple impulse excitation speech encoder and decoder
EP0342687B1 (en) Coded speech communication system having code books for synthesizing small-amplitude components
JPS6046440B2 (en) Audio processing method and device
USRE43099E1 (en) Speech coder methods and systems
WO1991013432A1 (en) Dynamic codebook for efficient speech coding based on algebraic codes
EP0232456A1 (en) Digital speech processor using arbitrary excitation coding
KR100204740B1 (en) Information coding method
AU669788B2 (en) Method for generating a spectral noise weighting filter for use in a speech coder
US4764963A (en) Speech pattern compression arrangement utilizing speech event identification
Robinson Speech analysis
CA1201533A (en) Speech pattern processing utilizing speech pattern compression
Schroeder et al. Stochastic coding of speech signals at very low bit rates: The importance of speech perception
US5235670A (en) Multiple impulse excitation speech encoder and decoder
JP2000132193A (en) Signal encoding device and method therefor, and signal decoding device and method therefor
Rebolledo et al. A multirate voice digitizer based upon vector quantization
WO2004088634A1 (en) Speech signal compression device, speech signal compression method, and program
JP3166673B2 (en) Vocoder encoding / decoding device
JPH0480400B2 (en)
JP3271966B2 (en) Encoding device and encoding method

Legal Events

Date Code Title Description
MKEX Expiry