WO1984004194A1 - Speech pattern processing utilizing speech pattern compression - Google Patents

Speech pattern processing utilizing speech pattern compression

Info

Publication number
WO1984004194A1
WO1984004194A1 PCT/US1984/000367
Authority
WO
WIPO (PCT)
Prior art keywords
speech
signals
signal
event
representative
Prior art date
Application number
PCT/US1984/000367
Other languages
English (en)
Inventor
Bishnu Saroop Atal
Original Assignee
American Telephone & Telegraph
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by American Telephone & Telegraph filed Critical American Telephone & Telegraph
Priority to DE8484901491T priority Critical patent/DE3474873D1/de
Publication of WO1984004194A1 publication Critical patent/WO1984004194A1/fr

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/0018Speech coding using phonetic or linguistical decoding of the source; Reconstruction using text-to-speech synthesis

Definitions

  • My invention relates to speech processing and, particularly, to the compression of speech patterns and to the synthesis of speech patterns from such compressed patterns.
  • A speech signal requires a bandwidth of at least 4 kHz for reasonable intelligibility. In digital speech processing systems such as speech synthesizers, recognizers, or coders, the channel capacity needed for transmission or the memory required for storage of the digital elements of the full 4 kHz bandwidth waveform is therefore very large.
  • Waveform coding techniques such as Pulse Code Modulation (PCM), Differential Pulse Code Modulation (DPCM), Delta Modulation, or adaptive predictive coding result in natural sounding, high quality speech at bit rates between 16 and 64 kbps.
  • An alternative speech coding technique disclosed in U. S. Patent 3,624,302 utilizes a small number, e.g., 12-16, of slowly varying parameters which may be processed to produce a low distortion replica of a speech pattern.
  • Such parameters are, e.g., Linear Prediction Coefficient (LPC) or log area parameters.
  • Encoding of the LPC or log area parameters generally requires sampling at a rate of twice the bandwidth and quantizing each resulting frame of log area parameters.
  • Each frame of the log area parameters can be quantized using 48 bits. Consequently, 12 log area parameters each having a 50 Hz bandwidth result in a total bit rate of 4800 bits/sec.
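The bit-rate figure above follows directly from Nyquist sampling of the parameter trajectories. A short plain-Python check (figures taken from the text; the 4-bits-per-parameter split is an inference from 48 bits covering 12 parameters):

```python
# Each log area parameter has a 50 Hz bandwidth, so the frame of all 12
# parameters must be produced at twice that bandwidth: 100 frames/s.
bandwidth_hz = 50
frame_rate = 2 * bandwidth_hz       # Nyquist rate -> 100 frames/s

bits_per_frame = 48                 # one quantized frame of 12 parameters
total_bit_rate = frame_rate * bits_per_frame

print(total_bit_rate)               # 4800 bits/s, matching the text
```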
  • Events in human speech are produced at an average rate that varies between 10 and 20 events per second. It has been observed that such speech events generally occur at nonuniformly spaced time intervals and that articulatory movements differ widely for various speech sounds. Consequently, a significant degree of compression may be achieved by transforming acoustic feature parameters into short speech event related units located at nonuniformly spaced time intervals. The coding of such speech event units results in higher efficiency without degradation of the accuracy of the pattern representation.
  • the invention is directed to an arrangement for compressing speech in which a speech pattern is analyzed to generate a set of signals representative of acoustic features of a speech pattern at a first rate.
  • a sequence of signals representative of acoustic features of successive speech events in the speech pattern is produced responsive to said acoustic feature signals and a digitally coded signal corresponding to each speech event representative signal is produced at a rate less than said first rate.
  • a speech pattern is synthesized by storing a prescribed set of speech element signals, combining said speech element signals to form a signal representative of the acoustic features of a speech pattern, and producing said speech pattern responsive to the set of acoustic feature signals.
  • the prescribed speech element signals are formed by analyzing a speech pattern to generate a set of acoustic feature representative signals at a first rate.
  • a sequence of signals representative of acoustic features of successive speech events in said speech pattern is produced responsive to said sampled acoustic feature signals and a sequence of digitally coded signals corresponding to the speech event representative signal is formed at a rate less than said first rate.
  • FIG. 1 depicts a flowchart illustrating the general method of the invention
  • FIG. 2 depicts a block diagram of a speech pattern coding circuit illustrative of the invention
  • FIGS. 3-8 depict detailed flowcharts illustrating the operation of the circuit of FIG. 2;
  • FIG. 9 depicts a speech synthesizer illustrative of the invention.
  • FIG. 10 depicts a flow chart illustrating the operation of the circuit of FIG. 9;
  • FIG. 11 shows a waveform illustrating a speech event timing signal obtained in the circuit of FIG. 2; and FIG. 12 shows waveforms illustrative of a speech pattern and the speech event feature signals associated therewith.
  • log area parameter signals sampled at closely spaced time intervals have been used in speech synthesis to obtain efficient representation of a speech pattern.
  • log area parameters are transformed into a sequence of individual speech event feature signals φ_k(n) such that the log area parameters are given by y_i(n) = Σ_{k=1}^{m} a_ik φ_k(n), 1 ≤ i ≤ p (1)
  • the speech event feature signals φ_k(n) are sequential and occur at the speech event rate of the pattern, which is substantially lower than the log area parameter frame rate.
  • p is the total number of log area parameters y_i(n) determined by linear prediction analysis, m corresponds to the number of speech events in the pattern, n is the index of samples in the speech pattern at the sampling rate of the log area parameters, φ_k(n) is the kth speech event signal at sampling instant n, and a_ik is a combining coefficient corresponding to the contribution of the kth speech event function to the ith log area parameter. Equation (1) may be expressed in matrix form as Y = A Φ (2), where
  • Y is a pxN matrix whose (i,n) element is y i (n)
  • A is a pxm matrix whose (i,k) element is a ik
  • Φ is an mxN matrix whose (k,n) element is φ_k(n). Since each speech event k occupies only a small segment of the speech pattern, the signal φ_k(n) representative thereof should be non-zero over only a small range of the sampling intervals of the total pattern.
  • Each log area parameter y_i(n) in equation (1) is a linear combination of the speech event functions φ_k(n), and the bandwidth of each y_i(n) parameter is the maximum bandwidth of any one of the speech event functions φ_k(n). It is therefore readily seen that the direct coding of the y_i(n) signals will take more bits than the coding of the φ_k(n) speech event signals and the combining coefficient signals a_ik in equation (1).
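The decomposition of equations (1) and (2) can be made concrete with a toy example; the sizes (p=2 parameters, m=3 events, N=8 frames) and values below are illustrative only, not taken from the patent:

```python
# Toy instance of equation (1): y_i(n) = sum_k a_ik * phi_k(n).
p, m, N = 2, 3, 8

# Each speech event signal phi_k(n) is non-zero over only a short span
# of the N frames, as the "minimum spreading" requirement demands.
Phi = [
    [1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # event 0: frames 0-1
    [0.0, 0.0, 0.7, 1.0, 0.3, 0.0, 0.0, 0.0],  # event 1: frames 2-4
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.4, 1.0, 0.6],  # event 2: frames 5-7
]

# Combining coefficients a_ik: contribution of event k to parameter i.
A = [
    [2.0, -1.0, 0.5],
    [0.3,  1.2, -0.8],
]

# Matrix form Y = A * Phi, a p x N matrix whose (i, n) element is y_i(n).
Y = [[sum(A[i][k] * Phi[k][n] for k in range(m)) for n in range(N)]
     for i in range(p)]

print(Y[0])  # trajectory of the first log area parameter
```

Because each row of Phi is compact, each y_i(n) trajectory is fully described by a handful of event samples plus the p·m coefficients, which is the source of the compression.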
  • FIG. 1 shows a flow chart illustrative of the general method of the invention.
  • a speech pattern is analyzed to form a sequence of signals representative of log area parameter acoustic feature signals.
  • Other speech features, e.g., LPC or Partial Autocorrelation (PARCOR) parameters, may also be used (see, e.g., U.S. Patent 3,624,302).
  • the feature signals are then converted into a set of speech event representative signals that are encoded at a lower bit rate for transmission or storage.
  • box 101 is entered in which an electrical signal corresponding to a speech pattern is low pass filtered to remove unwanted higher frequency noise and speech components and the filtered signal is sampled at twice the low pass filtering cutoff frequency.
  • the speech pattern samples are then converted into a sequence of digitally coded signals corresponding to the pattern as per box 110. Since the storage required for the sample signals is too large for most practical applications, they are utilized to generate log area parameter signals as per box 120 by linear prediction techniques well known in the art.
  • the log area parameter signals y i (n) are produced at a constant sampling rate high enough to accurately represent the fastest expected event in the speech pattern. Typically, a sampling interval between two and five milliseconds is selected.
  • the times of occurrence of the successive speech events in the pattern are detected and signals representative of the event timing are generated and stored as per box 130. This is done by partitioning the pattern into prescribed smaller segments, e.g., 0.25 second intervals. For each successive interval having a beginning frame n_b and an ending frame n_e, a matrix of log area parameter signals is formed corresponding to the log area parameters y_i(n) of the segment. The redundancy in the matrix is reduced by factoring out the first four principal components so that y_i(n) = Σ_{m=1}^{4} b_im u_m(n) for n_b ≤ n ≤ n_e, where the u_m(n) are the principal component functions.
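The per-segment principal component step can be sketched as follows. Plain power iteration on Y·Yᵀ stands in here for the method of the Sambur article; the matrix sizes and data are illustrative assumptions, not the patent's:

```python
def gram(Y):
    """Compute the p x p matrix Y * Y^T for a p x n segment matrix Y."""
    p, n = len(Y), len(Y[0])
    return [[sum(Y[i][t] * Y[j][t] for t in range(n)) for j in range(p)]
            for i in range(p)]

def dominant_component(Y, iters=200):
    """First principal direction of the rows of Y, by power iteration."""
    C = gram(Y)
    p = len(C)
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Segment of 4 log area parameters over 6 frames, all proportional to one
# underlying articulatory movement -> one principal component suffices.
base = [0.0, 0.3, 1.0, 0.8, 0.2, 0.0]
Y_seg = [[c * x for x in base] for c in (1.0, 0.5, -0.4, 0.9)]
v1 = dominant_component(Y_seg)
print(v1)
```

In the patent the first four such components u_m(n) are retained per segment; this sketch extracts only the first to keep the iteration explicit.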
  • the first four principal components may be obtained by methods well known in the art such as described in the article "An Efficient Linear Prediction Vocoder" by M. R. Sambur appearing in the Bell System Technical Journal Vol. 54, No. 10, pp. 1693-1723, December 1975.
  • the resulting u_m(n) functions may be linearly combined to define the desired speech event signals φ_k(n).
  • the speech pattern is represented by a sequence of successive compact (minimum spreading) speech event feature signals ⁇ k (n) each of which can be efficiently coded.
  • In order to obtain the shapes and locations of the speech event signals, a distance measure ψ(L) is formed.
  • a speech event signal ⁇ k (n) with minimum spreading is centered at each negative zero crossing of v(L).
  • box 140 is entered and the speech event signals ⁇ k (n) are accurately determined using the process of box 130 with the speech event occurrence signals from the negative going zero crossings of v(L).
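The event-location rule above reduces to finding the negative-going zero crossings of the timing signal v(L); a minimal sketch (the sample values of v are illustrative):

```python
def negative_zero_crossings(v):
    """Return indices L where v(L) crosses from positive to non-positive;
    each such crossing marks the centroid of a speech event signal."""
    return [L for L in range(1, len(v)) if v[L - 1] > 0.0 and v[L] <= 0.0]

# Illustrative timing-signal samples with two negative-going crossings.
v = [0.5, 0.9, 0.2, -0.4, -0.8, -0.1, 0.6, 0.3, -0.2, -0.7]
print(negative_zero_crossings(v))  # event centroids at frames 3 and 8
```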
  • the combining coefficients a_ik in equations (1) and (2) may be generated by minimizing the mean-squared error E = Σ_i Σ_n [y_i(n) − Σ_k a_ik φ_k(n)]².
  • FIG. 2 shows a speech coding arrangement that includes electroacoustic transducer 201, filter and sampler circuit 203, analog to digital converter 205, and speech sample store 210 which cooperate to convert a speech pattern into a stored sequence of digital codes representative of the pattern.
  • Central processor 275 may comprise a microprocessor such as the Motorola type MC68000 controlled by permanently stored instructions in read only memories (ROM) 215, 220, 225, 230 and 235.
  • Processor 275 is adapted to direct the operations of arithmetic processor 280, and stores 210, 240, 245, 250, 255 and 260 so that the digital codes from store 210 are compressed into a compact set of speech event feature signals.
  • the speech event feature signals are then supplied to utilization device 285 via input output interface 265.
  • the utilization device may be a digital communication facility or a storage arrangement for delayed transmission or a store associated with a speech synthesizer.
  • the Motorola MC68000 integrated circuit is described in the publication MC68000 16 Bit Microprocessor User's Manual, second edition, Motorola, Inc., 1980, and arithmetic processor 280 may comprise the TRW type MPY-16HJ integrated circuit.
  • a speech pattern is applied to electroacoustic transducer 201 and the electrical signal therefrom is supplied to low pass filter and sampler circuit 203 which is operative to limit the upper end of the signal bandwidth to 3.5 kHz and to sample the filtered signal at an 8 kHz rate.
  • Analog to digital converter 205 converts the sampled signal from filter and sampler 203 into a sequence of digital codes, each representative of the magnitude of a signal sample. The resulting digital codes are sequentially stored in speech sample store 210.
  • central processor 275 causes the instructions stored in log area parameter program store 215 to be transferred to the random access memory associated with the central processor.
  • the flow chart of FIG. 3 illustrates the sequence of operations performed by the controller responsive to the instructions from store 215.
  • box 305 is initially entered and frame count index n is reset to 1.
  • the speech samples of the current frame are then transferred from store 210 to arithmetic processor 280 via central processor 275 as per box 310.
  • the occurrence of an end of speech sample signal is checked in decision box 315.
  • control is passed to box 325 and an LPC analysis is performed for the frame in processors 275 and 280.
  • the LPC parameter signals of the current frame are then converted to log area parameter signals y i (k) as per box 330 and the log area parameter signals are stored in log area parameter store 240 (box 335).
  • the frame count is incremented by one in box 345 and the speech samples of the next frame are read (box 310).
  • control is passed to box 320 and a signal corresponding to the number of frames in the pattern is stored in processor 275.
  • Central processor 275 is operative after the log area parameter storing operation is completed to transfer the stored instructions of ROM 220 into its random access memory.
  • the instruction codes from store 220 correspond to the operations illustrated in the flow chart of FIGS. 4 and 5. These instruction codes are effective to generate a signal v(L) from which the occurrences of the speech events in the speech pattern may be detected and located.
  • the frame count of the log area parameters is initially reset in processor 275 as per box 403 and the log area parameters y i (n) for an initial time interval n 1 to n 2 of the speech pattern are transferred from log area parameter store 240 to processor 275 (box 410).
  • the speech event signal φ_k that is most compact over the range n_1 to n_2 is then determined.
  • This is accomplished through use of the ψ(L) function of equation 6.
  • a signal v(L) representative of the speech event timing of the speech pattern is then formed in accordance with equation 7 in box 430 and the v(L) signal is stored in timing parameter store 245.
  • Frame counter n is incremented by a constant value, e.g., 5, on the basis of how close adjacent speech event signals ⁇ k (n) are expected to occur (box 435) and box 410 is reentered to generate the ⁇ k (n) and v(L) signals for the next time interval of the speech pattern.
  • FIG. 11 illustrates the speech event timing parameter signal for an exemplary spoken message. Each negative going zero crossing in FIG. 11 corresponds to the centroid of a speech event feature signal φ_k(n).
  • box 501 is entered in which speech event index I is reset to zero and frame index n is again reset to one.
  • the successive frames of speech event timing parameter signal are read from store 245 (box 505) and zero crossings therein are detected in processor 275 as per box 510.
  • the speech event index I is incremented (box 515) and the speech event location frame is stored in speech event location store 250 (box 520).
  • the frame index n is then incremented in box 525 and a check is made for the end of the speech pattern frames in box 530.
  • box 505 is reentered from box 530 after each iteration to detect the subsequent speech event location frames of the pattern.
  • Upon detection of the end of speech pattern signal in box 530, central processor 275 addresses speech event feature signal generation program store 225 and causes its contents to be transferred to the processor. Central processor 275 and arithmetic processor 280 are thereby adapted to form a sequence of speech event feature signals φ_k(n) responsive to the log area parameter signals in store 240 and the speech event location signals in store 250.
  • the speech event feature signal generation instructions are illustrated in the flow chart of FIG. 6.
  • location index I is set to one as per box 601 and the locations of the speech events in store 250 are transferred to central processor 275 (box 605).
  • the limit frames for a prescribed number of speech event locations, e.g., 5, are determined.
  • the log area parameters for the speech pattern interval defined by the limit frames are read from store 240 and are placed in a section of the memory of central processor 275 (box 615). The redundancy in the log area parameters is removed by factoring out the number of principal components therein corresponding to the prescribed number of events (box 620).
  • the speech event feature signal ⁇ L (n) for the current location L is generated.
  • m is the prescribed number of speech events and r can be either 1, 2, ..., or m.
  • the derivative of equation (13) is set equal to zero to determine the minimum and
  • equation (20) can be simplified to
  • Equation (22) can be expressed in matrix notation as
  • Equation 25 has exactly m solutions and the solution which minimizes ψ(L) is the one for which λ is minimum.
  • the speech event feature signal ⁇ L (n) is generated in box 625 and is stored in store 255. Until the end of the speech pattern is detected in decision box 635, the loop including boxes 605, 610, 615, 620, 625 and 630 is iterated so that the complete sequence of speech events for the speech pattern is formed.
  • FIG. 12 shows waveforms illustrating a speech pattern and the speech event feature signals generated therefrom in accordance with the invention.
  • Waveform 1201 corresponds to a portion of a speech pattern and waveforms 1205-1 through 1205-n correspond to the sequence of speech event feature signals ⁇ L (n) obtained from the waveform in the circuit of FIG. 2.
  • Each feature signal is representative of the acoustic characteristics of a speech event of the pattern of waveform 1201.
  • the speech event feature signals may be combined with coefficients a ik of equation 1 to reform log area parameter signals that are representative of the acoustic features of the speech pattern.
  • each speech event feature signal ⁇ I (n) is encoded and transferred to utilization device 285 as illustrated in the flow chart of FIG. 7.
  • Central processor 275 is adapted to receive the speech event signal encoding program instruction set stored in ROM 235. Referring to FIG. 7, the speech event index I is reset to one as per box 701 and the speech event feature signal φ_I(n) is read from store 255.
  • the sampling rate R I for the current speech event feature signal is selected in box 710 by one of the many methods well known in the art.
  • the instruction codes perform a Fourier analysis and generate a signal corresponding to the upper band limit of the feature signal from which a sampling rate signal R I is determined.
  • the sampling rate need only be sufficient to adequately represent the feature signal.
  • a slowly changing feature signal may utilize a lower sampling rate than a rapidly changing feature signal and the sampling rate for each feature signal may be different.
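The per-event rate selection of box 710 can be sketched with a DFT band-limit estimate. The 5%-of-peak threshold and the 400 Hz frame rate below are illustrative assumptions, not values from the patent:

```python
import cmath
import math

def required_rate(samples, frame_rate, threshold=0.05):
    """Estimate a per-event sampling rate: twice the highest DFT
    frequency whose magnitude exceeds a fraction of the peak."""
    N = len(samples)
    mags = [abs(sum(s * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, s in enumerate(samples)))
            for k in range(N // 2 + 1)]
    peak = max(mags)
    top = max(k for k, mag in enumerate(mags) if mag >= threshold * peak)
    f_max = top * frame_rate / N        # estimated band limit in Hz
    return 2.0 * f_max                  # Nyquist: twice the band limit

frame_rate = 400.0  # e.g., one log area frame every 2.5 ms
slow = [math.cos(2 * math.pi * n / 16) for n in range(16)]      # 1 cycle
fast = [math.cos(2 * math.pi * 4 * n / 16) for n in range(16)]  # 4 cycles
print(required_rate(slow, frame_rate), required_rate(fast, frame_rate))
```

As the text states, the slowly changing signal yields a lower rate (50 Hz here) than the rapidly changing one (200 Hz), so each feature signal carries its own rate R_I.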
  • each sample may be converted into a PCM, ADPCM or delta modulated signal and concatenated with a signal indicative of the feature signal location in the speech pattern and a signal representative of the sampling rate R_I.
  • the coded speech event feature signal is then transferred to utilization device 285 via input output interface 265.
  • Speech event index I is then incremented (box 720) and decision box 725 is entered to determine if the last speech event signal has been coded.
  • the loop including boxes 705 through 725 is iterated until the last speech event signal has been encoded (I>I F ) at which time the coding of the speech event feature signals is completed.
  • the speech event feature signals must be combined in accordance with equation 1 to form replicas of the log area feature signals therein. Accordingly, the combining coefficients for the speech pattern are generated and encoded as shown in the flow chart of FIG. 8. After the speech event feature signal encoding, central processor 275 is conditioned to read the contents of ROM 230. The instruction codes permanently stored in the ROM control the formation and encoding of the combining coefficients.
  • the combining coefficients are produced for the entire speech pattern by matrix processing in central processor 275 and arithmetic processor 280. Referring to FIG. 8, the log area parameters of the speech pattern are transferred to processor 275 as per box 801. A speech event feature signal coefficient matrix G is generated (box 805) in accordance with
  • the combining coefficient matrix is then produced as per box 815 according to the relationship
  • the elements of matrix A are the combining coefficients a ik of equation 1. These combining coefficients are encoded, as is well known in the art, in box 820 and the encoded coefficients are transferred to utilization device 285.
  • the linear predictive parameters, sampled at a rate corresponding to the most rapid change therein, are converted into a sequence of speech event feature signals that are encoded at the much lower speech event occurrence rate; the speech pattern is thus further compressed to reduce transmission and storage requirements without adversely affecting intelligibility.
  • Utilization device 285 may be a communication facility connected to one of the many speech synthesizer circuits using an LPC all pole filter known in the art.
  • the circuit of FIG. 2 is adapted to compress a spoken message into a sequence of coded speech event feature signals which are transmitted via utilization device 285 to a synthesizer.
  • the speech event feature signals and the combining coefficients of the message are decoded and recombined to form the message log area parameter signals. These log area parameter signals are then utilized to produce a replica of the original message.
  • FIG. 9 depicts a block diagram of a speech synthesizer circuit illustrative of the invention and FIG. 10 shows a flow chart illustrating its operation.
  • Store 915 of FIG. 9 is adapted to store the successive coded speech event feature signals and combining signals received from utilization device 285 of FIG. 2 via line 901 and interface circuit 904.
  • Store 920 receives the sequence of excitation signals required for synthesis via line 903.
  • the excitation signals may comprise a succession of pitch period and voiced/unvoiced signals generated responsive to the voice message by methods well known in the art.
  • Microprocessor 910 is adapted to control the operation of the synthesizer and may be the aforementioned Motorola-type MC68000 integrated circuit.
  • LPC feature signal store 925 is utilized to store the successive log area parameter signals of the spoken message which are formed from the speech event feature signals and combining signals of store 915. Formation of a replica of the spoken message is accomplished in LPC synthesizer 930 responsive to the LPC feature signals from store 925 and the excitation signals from store 920 under control of microprocessor 910. The synthesizer operation is directed by microprocessor 910 under control of permanently stored instruction codes resident in a read only memory associated therewith. The operation of the synthesizer is described in the flow chart of FIG. 10. Referring to FIG.
  • the coded speech event feature signals, the corresponding combining signals, and the excitation signals of the spoken message are received by interface 904 and are transferred to speech event feature signal and combining coefficient signal store 915 and to excitation signal store 920 as per box 1010.
  • the log area parameter signal index I is then reset to one in processor 910 (box 1020) so that the reconstruction of the first log area feature signal y 1 (n) is initiated.
  • Speech event feature signal location counter L is reset to one by processor 910 as per box 1025 and the current speech event feature signal samples are read from store 915 (box 1030). The signal sample sequence is filtered to smooth the speech event feature signal as per (box 1035) and the current log area parameter signal is partially formed in box 1040. Speech event location counter L is incremented to address the next speech event feature signal in store 915 (box 1045) and the occurrence of the last feature signal is tested in decision box 1050. Until the last speech event feature signal has been processed, the loop including boxes 1030 through 1050 is iterated so that the current log area parameter signal is generated and stored in LPC feature signal store 925 under control of processor 910.
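The reconstruction loop of boxes 1030-1050 can be sketched as follows; the 3-point moving-average smoother is an assumption, as the patent says only that the samples are filtered to smooth the feature signal:

```python
def smooth(x):
    """Simple 3-point moving average with edge replication (box 1035);
    a stand-in for whatever smoothing filter the synthesizer uses."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [(padded[n] + padded[n + 1] + padded[n + 2]) / 3.0
            for n in range(len(x))]

def reconstruct(A, events):
    """Accumulate y_i(n) = sum_k a_ik * smoothed phi_k(n) (box 1040)."""
    N = len(events[0])
    smoothed = [smooth(phi) for phi in events]
    return [[sum(a_ik * smoothed[k][n] for k, a_ik in enumerate(row))
             for n in range(N)] for row in A]

# Two stored speech event feature signals and one parameter's coefficients.
events = [[0.0, 1.0, 0.0, 0.0],
          [0.0, 0.0, 1.0, 0.0]]
A = [[1.0, 0.5]]
y = reconstruct(A, events)
print(y[0])  # reconstructed log area parameter trajectory y_1(n)
```

The outer loop over I in boxes 1020-1060 simply repeats this per-parameter accumulation for each row of A before the LPC synthesizer is driven.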
  • box 1055 is entered from box 1050 and the log area index signal I is incremented (box 1055) to initiate the formation of the next log area parameter signal.
  • the loop from box 1030 through box 1050 is reentered via decision box 1060.
  • processor 910 causes a replica of the spoken message to be formed in LPC synthesizer 930.
  • the synthesizer circuit of FIG. 9 may be readily modified to store the speech event feature signal sequences corresponding to a plurality of spoken messages and to selectively generate replicas of these messages by techniques well known in the art. For such an arrangement, the speech event feature signal generating circuit of FIG. 2 may be used to form the message signals.
  • utilization device 285 may comprise an arrangement to permanently store the speech event feature signals and corresponding combining coefficients for the messages and to generate a read only memory containing said spoken message speech event and combining signals.
  • the read only memory containing the coded speech event and combining signals can be inserted as store 915 in the synthesizer circuit of FIG. 9.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A speech pattern is compressed to a previously unattainable degree by analyzing the pattern to produce (210, 215, 275, 280) a sequence of signals representative of its acoustic features at a first rate. A sequence of signals representative of speech events is produced (225, 275, 280) responsive to the acoustic feature signals. A sequence of coded signals corresponding to the speech pattern is formed (235, 275, 280) at a rate lower than the first rate responsive to the speech event representative signals.
PCT/US1984/000367 1983-04-12 1984-03-12 Speech pattern processing utilizing speech pattern compression WO1984004194A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE8484901491T DE3474873D1 (en) 1983-04-12 1984-03-12 Speech pattern processing utilizing speech pattern compression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US48423183A 1983-04-12 1983-04-12

Publications (1)

Publication Number Publication Date
WO1984004194A1 true WO1984004194A1 (fr) 1984-10-25

Family

ID=23923295

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1984/000367 WO1984004194A1 (fr) 1983-04-12 1984-03-12 Speech pattern processing utilizing speech pattern compression

Country Status (5)

Country Link
EP (1) EP0138954B1 (fr)
JP (1) JP2648138B2 (fr)
CA (1) CA1201533A (fr)
DE (1) DE3474873D1 (fr)
WO (1) WO1984004194A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4813074A (en) * 1985-11-29 1989-03-14 U.S. Philips Corp. Method of and device for segmenting an electric signal derived from an acoustic signal

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3598921A (en) * 1969-04-04 1971-08-10 Nasa Method and apparatus for data compression by a decreasing slope threshold test
US3715512A (en) * 1971-12-20 1973-02-06 Bell Telephone Labor Inc Adaptive predictive speech signal coding system
US4216354A (en) * 1977-12-23 1980-08-05 International Business Machines Corporation Process for compressing data relative to voice signals and device applying said process
US4280192A (en) * 1977-01-07 1981-07-21 Moll Edward W Minimum space digital storage of analog information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS595916B2 (ja) * 1975-02-13 1984-02-07 日本電気株式会社 音声分折合成装置
JPS5326761A (en) * 1976-08-26 1978-03-13 Babcock Hitachi Kk Injecting device for reducing agent for nox


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0138954A4 *


Also Published As

Publication number Publication date
JP2648138B2 (ja) 1997-08-27
DE3474873D1 (en) 1988-12-01
CA1201533A (fr) 1986-03-04
EP0138954A4 (fr) 1985-11-07
EP0138954B1 (fr) 1988-10-26
JPS60501076A (ja) 1985-07-11
EP0138954A1 (fr) 1985-05-02

Similar Documents

Publication Publication Date Title
US4709390A (en) Speech message code modifying arrangement
KR100427753B1 (ko) 음성신호재생방법및장치,음성복호화방법및장치,음성합성방법및장치와휴대용무선단말장치
US4969192A (en) Vector adaptive predictive coder for speech and audio
US4868867A (en) Vector excitation speech or audio coder for transmission or storage
US7191125B2 (en) Method and apparatus for high performance low bit-rate coding of unvoiced speech
US4701954A (en) Multipulse LPC speech processing arrangement
USRE43099E1 (en) Speech coder methods and systems
JPS6046440B2 (ja) 音声処理方法とその装置
EP0470975A4 (en) Methods and apparatus for reconstructing non-quantized adaptively transformed voice signals
JPH096397A (ja) 音声信号の再生方法、再生装置及び伝送方法
EP0232456A1 (fr) Processeur numérique de la parole utilisant un codage d'excitation arbitraire
US4991215A (en) Multi-pulse coding apparatus with a reduced bit rate
US4764963A (en) Speech pattern compression arrangement utilizing speech event identification
EP0138954B1 (fr) Traitement de configurations de la parole utilisant un procede de compression de configurations de la parole
JP2796408B2 (ja) 音声情報圧縮装置
JP3166673B2 (ja) ボコーダ符号化復号装置
JPH0480400B2 (fr)
JP3271966B2 (ja) 符号化装置及び符号化方法
EP0987680A1 (fr) Traitement de signal audio
KR100255297B1 (ko) 음성 데이터 부호화/복호화장치 및 그 방법
Ni et al. Waveform interpolation at bit rates above 2.4 kb/s
WO2001009880A1 (fr) Vocodeur de type vselp
JPH08160993A (ja) 音声分析合成器
Chen et al. Vector adaptive predictive coder for speech and audio

Legal Events

Date Code Title Description
AK Designated states

Designated state(s): JP

AL Designated countries for regional patents

Designated state(s): DE FR GB

WWE Wipo information: entry into national phase

Ref document number: 1984901491

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1984901491

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1984901491

Country of ref document: EP