EP0138954B1 - Processing of speech patterns using a speech pattern compression method - Google Patents


Info

Publication number
EP0138954B1
EP0138954B1
Authority
EP
European Patent Office
Prior art keywords
speech
signals
signal
representative
speech pattern
Prior art date
Legal status
Expired
Application number
EP19840901491
Other languages
German (de)
English (en)
Other versions
EP0138954A4 (fr)
EP0138954A1 (fr)
Inventor
Bishnu Saroop Atal
Current Assignee
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
AT&T Corp
Priority date
Filing date
Publication date
Application filed by American Telephone and Telegraph Co Inc, AT&T Corp filed Critical American Telephone and Telegraph Co Inc
Publication of EP0138954A1
Publication of EP0138954A4
Application granted
Publication of EP0138954B1

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/0018Speech coding using phonetic or linguistical decoding of the source; Reconstruction using text-to-speech synthesis

Definitions

  • This invention relates to speech processing and, particularly, to the compression of speech patterns and to the synthesis of speech patterns from such compressed patterns.
  • a speech signal requires a bandwidth of at least 4 kHz for reasonable intelligibility.
  • digital speech processing systems such as speech synthesizers, recognizers, or coders
  • the channel capacity needed for transmission or memory required for storage of the digital elements of the full 4 kHz bandwidth waveform is very large.
  • Waveform coding techniques such as Pulse Code Modulation (PCM), Differential Pulse Code Modulation (DPCM), Delta Modulation or adaptive predictive coding result in natural sounding, high quality speech at bit rates between 16 and 64 kbps.
  • An alternative speech coding technique disclosed in U.S. Patent 3,624,302 utilizes a small number, e.g., 12-16, of slowly varying parameters which may be processed to produce a low distortion replica of a speech pattern.
  • Such parameters include, e.g., linear prediction coefficients (LPC) or log area parameters.
  • Encoding of the LPC or log area parameters generally requires sampling at a rate of twice the bandwidth and quantizing each resulting frame of log area parameters.
  • Each frame of log area parameters can be quantized using 48 bits. Consequently, 12 log area parameters, each having a 50 Hz bandwidth, result in a total bit rate of 4800 bits/sec.
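As a quick check of the arithmetic above: with each parameter band limited to 50 Hz, frames are produced at twice the bandwidth, and one 48-bit frame covers all 12 parameters. A minimal sketch in plain Python, using only the values quoted in the text:

```python
# Bit-rate check for the figures quoted above:
# 12 log area parameters, each band limited to 50 Hz, one 48-bit frame.
bandwidth_hz = 50
frame_rate = 2 * bandwidth_hz      # sampling at twice the bandwidth: 100 frames/s
bits_per_frame = 48                # quantization budget for one frame of 12 parameters
bit_rate = frame_rate * bits_per_frame
print(bit_rate)                    # 4800 bits/sec
```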
  • U.S. Patent 4,349,700 discloses arrangements that permit recognition of speech patterns having diverse sound patterns utilizing dynamic programming.
  • U.S. Patent 4,038,503 discloses a technique for nonlinear warping of time intervals of speech patterns so that the sound features are represented in a more uniform manner. These arrangements, however, require storing and processing acoustic feature signals that are sampled at a rate corresponding to the most rapidly changing feature in the pattern. It is an object of the invention to provide improved speech representation and/or speech synthesis arrangements having reduced digital storage and processing requirements.
  • log area parameter signals sampled at closely spaced time intervals have been used in speech synthesis to obtain efficient representation of a speech pattern.
  • the log area parameters are transformed into a sequence of individual sound or speech event feature signals φ_k(n) such that each log area parameter is a linear combination of those signals, y_i(n) = Σ_k a_ik φ_k(n) (equation (1)).
  • the speech event feature signals φ_k(n) are sequential and occur at the speech event rate of the pattern, which is substantially lower than the log area parameter frame rate.
  • p is the total number of log area parameters y_i(n) determined by linear prediction analysis.
  • m corresponds to the number of speech events in the pattern
  • n is the index of samples in the speech pattern at the sampling rate of the log area parameters
  • φ_k(n) is the kth speech event signal at sampling instant n.
  • a_ik is a combining coefficient corresponding to the contribution of the kth speech event function to the ith log area parameter.
  • Equation (1) may be expressed in matrix form as Y = AΦ (equation (2)), where Y is a p×N matrix whose (i,n) element is y_i(n), A is a p×m matrix whose (i,k) element is a_ik, and Φ is an m×N matrix whose (k,n) element is φ_k(n). Since each speech event k occupies only a small segment of the speech pattern, the signal φ_k(n) representative thereof should be non-zero over only a small range of the sampling intervals of the total pattern.
  • Each log area parameter y_i(n) in equation (1) is a linear combination of the speech event functions φ_k(n), and the bandwidth of each y_i(n) parameter is the maximum bandwidth of any one of the speech event functions φ_k(n). It is therefore readily seen that direct coding of the y_i(n) signals takes more bits than coding of the φ_k(n) speech event signals and the combining coefficient signals a_ik of equation (1).
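The decomposition Y = AΦ can be illustrated with toy dimensions. Everything numeric below (sizes, event positions, the Hanning shape) is invented for illustration; only the structure, localized event functions combined by a p×m coefficient matrix, comes from the text:

```python
import numpy as np

p, m, N = 12, 3, 200                 # parameters, speech events, frames (toy sizes)
rng = np.random.default_rng(0)

# Each speech event function phi_k(n) is non-zero over only a short span.
phi = np.zeros((m, N))
for k, start in enumerate((20, 90, 150)):
    phi[k, start:start + 30] = np.hanning(30)

A = rng.standard_normal((p, m))      # combining coefficients a_ik
Y = A @ phi                          # equation (2): Y = A Phi, a p x N matrix

# Outside every event span the log area parameters are exactly zero here,
# reflecting the localized support of the phi_k(n).
assert Y.shape == (p, N) and np.allclose(Y[:, :20], 0.0)
```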
  • Fig. 1 shows a flow chart illustrative of the general method of the invention.
  • a speech pattern is analyzed to form a sequence of signals representative of log area parameter acoustic feature signals.
  • Other speech features, e.g., PARCOR (partial autocorrelation) coefficients, may also be used (see, e.g., U.S. Patent 3,624,302).
  • the feature signals are then converted into a set of speech event representative signals that are encoded at a lower bit rate for transmission or storage.
  • box 101 is entered in which an electrical signal corresponding to a speech pattern is low pass filtered to remove unwanted higher frequency noise and speech components and the filtered signal is sampled at twice the low pass filtering cutoff frequency.
  • the speech pattern samples are then converted into a sequence of digitally coded signals corresponding to the pattern as per box 110. Since the storage required for the sample signals is too large for most practical applications, they are utilized to generate log area parameter signals as per box 120 by linear prediction techniques well known in the art.
  • the log area parameter signals y_i(n) are produced at a constant sampling rate high enough to accurately represent the fastest expected event in the speech pattern. Typically, a sampling interval between two and five milliseconds is selected.
  • the times of occurrence of the successive speech events in the pattern are detected, and signals representative of the event timing are generated and stored as per box 130. This is done by partitioning the pattern into prescribed smaller segments, e.g., 0.25 second intervals. For each successive interval having a beginning frame n_b and an ending frame n_e, a matrix of log area parameter signals is formed corresponding to the log area parameters y_i(n) of the segment. The redundancy in the matrix is reduced by factoring out its first four principal components, so that the segment is approximated as a linear combination of four basis functions u_m(n).
  • the first four principal components may be obtained by methods well known in the art such as described in the article "An Efficient Linear Prediction Vocoder" by M. R. Sambur appearing in the Bell System Technical Journal Vol. 54, No. 10, pp. 1693-1723, December 1975.
  • the resulting u_m(n) functions may be linearly combined to define the desired speech event signals, φ_k(n) = Σ_m b_km u_m(n), by selecting coefficients b_km such that each φ_k(n) is most compact in time.
  • the speech pattern is represented by a sequence of successive compact (minimum spreading) speech event feature signals φ_k(n), each of which can be efficiently coded.
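The principal-component step can be sketched with an SVD. The segment below is synthetic (a random rank-4 matrix, not real log area data); keeping exactly four components follows the "first four principal components" stated above:

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 12, 125                       # log area parameters x frames (~0.25 s at 2 ms)
Y = rng.standard_normal((p, 4)) @ rng.standard_normal((4, N))  # rank-4 toy segment

# Factor out the first four principal components: Y ~= C @ U, where the rows
# of U play the role of the basis functions u_m(n) mentioned in the text.
U_left, s, Vt = np.linalg.svd(Y, full_matrices=False)
U = Vt[:4]                           # u_m(n), m = 1..4
C = U_left[:, :4] * s[:4]            # projection coefficients
assert np.allclose(Y, C @ U)         # a rank-4 segment is captured exactly
```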
  • a distance measure is minimized to choose the optimum φ(n), and its location is obtained from a speech event timing signal.
  • the combining coefficients a_ik in equations (1) and (2) may be generated by minimizing the mean-squared error E = Σ_n [y_i(n) − Σ_{k=1}^{M} a_ik φ_k(n)]², where M is the total number of speech events within the range of index n over which the sum is performed.
  • the partial derivatives of E with respect to the coefficients a_ik are set equal to zero, and the coefficients a_ik are obtained from the resulting set of simultaneous linear equations.
  • Fig. 2 shows a speech coding arrangement that includes electroacoustic transducer 201, filter and sampler circuit 203, analog to digital converter 205, and speech sample store 210 which cooperate to convert a speech pattern into a stored sequence of digital codes representative of the pattern.
  • Central processor 275 may comprise a microprocessor such as the Motorola type MC68000 controlled by permanently stored instructions in read only memories (ROM) 215, 220, 225, 230 and 235.
  • Processor 275 is adapted to direct the operations of arithmetic processor 280, and stores 210, 240, 245, 250, 255 and 260 so that the digital codes from store 210 are compressed into a compact set of speech event feature signals.
  • the speech event feature signals are then supplied to utilization device 285 via input output interface 265.
  • the utilization device may be a digital communication facility or a storage arrangement for delayed transmission or a store associated with a speech synthesizer.
  • the Motorola MC68000 integrated circuit is described in the publication MC68000 16 Bit Microprocessor User's Manual, second edition, Motorola, Inc., 1980 and arithmetic processor 280 may comprise the TRW type MPY-16HJ integrated circuit.
  • a speech pattern is applied to electroacoustic transducer 201, and the electrical signal therefrom is supplied to low pass filter and sampler circuit 203, which limits the upper end of the signal bandwidth to 3.5 kHz and samples the filtered signal at an 8 kHz rate.
  • Analog to digital converter 205 converts the sampled signal from filter and sampler 203 into a sequence of digital codes, each representative of the magnitude of a signal sample. The resulting digital codes are sequentially stored in speech sample store 210.
  • central processor 275 causes the instructions stored in log area parameter program store 215 to be transferred to the random access memory associated with the central processor.
  • the flow chart of Fig. 3 illustrates the sequence of operations performed by the controller responsive to the instructions from store 215.
  • box 305 is initially entered and frame count index n is reset to 1.
  • the speech samples of the current frame are then transferred from store 210 to arithmetic processor 280 via central processor 275 as per box 310.
  • the occurrence of an end of speech sample signal is checked in decision box 315.
  • control is passed to box 325 and an LPC analysis is performed for the frame in processors 275 and 280.
  • the LPC parameter signals of the current frame are then converted to log area parameter signals y_i(n) as per box 330, and the log area parameter signals are stored in log area parameter store 240 (box 335).
  • the frame count is incremented by one in box 345 and the speech samples of the next frame are read (box 310).
  • control is passed to box 320 and a signal corresponding to the number of frames in the pattern is stored in processor 275.
  • Central processor 275 is operative after the log area parameter storing operation is completed to transfer the stored instructions of ROM 220 into its random access memory.
  • the instruction codes from store 220 correspond to the operations illustrated in the flow chart of Figs. 4 and 5. These instruction codes are effective to generate a signal v(L) from which the occurrences of the speech events in the speech pattern may be detected and located.
  • the frame count of the log area parameters is initially reset in processor 275 as per box 403, and the log area parameters y_i(n) for an initial time interval n_1 to n_2 of the speech pattern are transferred from log area parameter store 240 to processor 275 (box 410).
  • the log area parameters of the current time interval are then represented in matrix form, from which a set of speech event signals is to be obtained.
  • This is accomplished through use of the spread function σ(L) of equation (6).
  • a signal v(L) representative of the speech event timing of the speech pattern is then formed in accordance with equation 7 in box 430 and the v(L) signal is stored in timing parameter store 245.
  • Frame counter n is incremented by a constant value, e.g., 5, chosen on the basis of how close adjacent speech event signals φ_k(n) are expected to occur (box 435), and box 410 is reentered to generate the φ_k(n) and v(L) signals for the next time interval of the speech pattern.
  • Fig. 11 illustrates the speech event timing parameter signal for an exemplary utterance. Each negative-going zero crossing in Fig. 11 corresponds to the centroid of a speech event feature signal φ_k(n).
  • box 501 is entered in which speech event index I is reset to zero and frame index n is again reset to one.
  • the successive frames of speech event timing parameter signal are read from store 245 (box 505) and zero crossings therein are detected in processor 275 as per box 510.
  • the speech event index I is incremented (box 515) and the speech event location frame is stored in speech event location store 250 (box 520).
  • the frame index n is then incremented in box 525 and a check is made for the end of the speech pattern frames in box 530.
  • box 505 is reentered from box 530 after each iteration to detect the subsequent speech event location frames of the pattern.
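Boxes 505-530 scan the timing signal v(L) for negative-going zero crossings. A small sketch of that detection follows; the stand-in sinusoid is only a placeholder for a real v(L):

```python
import numpy as np

def negative_going_zero_crossings(v):
    """Return the frame indices where v(L) crosses zero going negative,
    i.e., the speech event centroid locations described in the text."""
    v = np.asarray(v, dtype=float)
    return np.nonzero((v[:-1] > 0) & (v[1:] <= 0))[0] + 1

# Stand-in timing signal: two full cycles give two negative-going crossings.
v = np.sin(np.linspace(0, 4 * np.pi, 81))
locations = negative_going_zero_crossings(v)
assert len(locations) == 2
```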
  • Upon detection of the end of speech pattern signal in box 530, central processor 275 addresses speech event feature signal generation program store 225 and causes its contents to be transferred to the processor. Central processor 275 and arithmetic processor 280 are thereby adapted to form a sequence of speech event feature signals φ_k(n) responsive to the log area parameter signals in store 240 and the speech event location signals in store 250.
  • the speech event feature signal generation instructions are illustrated in the flow chart of Fig. 6.
  • location index I is set to one as per box 601 and the locations of the speech events in store 250 are transferred to central processor 275 (box 605).
  • the limit frames for a prescribed number of speech event locations, e.g., 5, are determined.
  • the log area parameters for the speech pattern interval defined by the limit frames are read from store 240 and placed in a section of the memory of central processor 275 (box 615). The redundancy in the log area parameters is removed by factoring out a number of principal components corresponding to the prescribed number of events (box 620).
  • the speech event feature signal φ_L(n) for the current location L is generated.
  • The minimization of equation (6) to determine φ_L(n) is accomplished by forming the derivative of equation (13), where m is the prescribed number of speech events and r can be 1, 2, …, or m.
  • The derivative of equation (13) is set equal to zero to determine the minimum, and equation (14) is obtained. From equation (14), equation (15) can be changed to equation (17), in which φ(n) is replaced by the right side of equation (14).
  • Equation (22) can be expressed in matrix notation as equation (25).
  • Equation (25) has exactly m solutions, and the solution which minimizes σ(L) is the one for which the corresponding eigenvalue λ is minimum.
  • the speech event feature signal φ_L(n) is generated in box 625 and stored in store 255. Until the end of the speech pattern is detected in decision box 635, the loop including boxes 605, 610, 615, 620, 625 and 630 is iterated so that the complete sequence of speech events for the speech pattern is formed.
  • Fig. 12 shows waveforms illustrating a speech pattern and the speech event feature signals generated therefrom in accordance with the invention.
  • Waveform 1201 corresponds to a portion of a speech pattern, and waveforms 1205-1 through 1205-n correspond to the sequence of speech event feature signals φ_L(n) obtained from the waveform in the circuit of Fig. 2.
  • Each feature signal is representative of the acoustic characteristics of a speech event of the pattern of waveform 1201.
  • the speech event feature signals may be combined with the coefficients a_ik of equation (1) to reform log area parameter signals that are representative of the acoustic features of the speech pattern.
  • each speech event feature signal φ_I(n) is encoded and transferred to utilization device 285 as illustrated in the flow chart of Fig. 7.
  • Central processor 275 is adapted to receive the speech event signal encoding program instruction set stored in ROM 235.
  • the speech event index I is reset to one as per box 701, and the speech event feature signal φ_I(n) is read from store 255.
  • the sampling rate R_I for the current speech event feature signal is selected in box 710 by one of the many methods well known in the art.
  • the instruction codes perform a Fourier analysis and generate a signal corresponding to the upper band limit of the feature signal, from which a sampling rate signal R_I is determined.
  • the sampling rate need only be sufficient to adequately represent the feature signal.
  • a slowly changing feature signal may utilize a lower sampling rate than a rapidly changing feature signal and the sampling rate for each feature signal may be different.
  • once a sampling rate signal has been determined for speech event feature signal φ_I(n), the signal is encoded at rate R_I as per box 715.
  • Any of the well-known encoding schemes can be used. For example, each sample may be converted into a PCM, ADPCM or delta modulated signal and concatenated with a signal indicative of the feature signal location in the speech pattern and a signal representative of the sampling rate R_I.
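One plausible reading of box 710, estimating each event's upper band limit by Fourier analysis and sampling at twice that limit, can be sketched as below. The 99% energy threshold and the frame rate are assumptions for illustration, not values from the patent:

```python
import numpy as np

def pick_sampling_rate(phi, frame_rate_hz):
    """Estimate the upper band limit of a speech event feature signal from
    its spectrum and return twice that limit (an assumed 99%-energy rule)."""
    spectrum = np.abs(np.fft.rfft(phi))
    freqs = np.fft.rfftfreq(len(phi), d=1.0 / frame_rate_hz)
    cumulative = np.cumsum(spectrum ** 2)
    band_limit = freqs[np.searchsorted(cumulative, 0.99 * cumulative[-1])]
    return 2.0 * band_limit

frame_rate = 500.0                               # 2 ms frames, as in the text
slow = np.hanning(128)                           # slowly varying event
fast = slow * np.cos(2 * np.pi * 100 * np.arange(128) / frame_rate)
# A slowly changing feature signal earns a lower sampling rate R_I.
assert pick_sampling_rate(slow, frame_rate) < pick_sampling_rate(fast, frame_rate)
```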
  • the coded speech event feature signal is then transferred to utilization device 285 via input output interface 265.
  • Speech event index I is then incremented (box 720) and decision box 725 is entered to determine if the last speech event signal has been coded.
  • the loop including boxes 705 through 725 is iterated until the last speech event signal has been encoded (I > I_F), at which time the coding of the speech event feature signals is completed.
  • the speech event feature signals must be combined in accordance with equation (1) to form replicas of the log area feature signals. Accordingly, the combining coefficients for the speech pattern are generated and encoded as shown in the flow chart of Fig. 8. After the speech event feature signal encoding, central processor 275 is conditioned to read the contents of ROM 230. The instruction codes permanently stored in the ROM control the formation and encoding of the combining coefficients.
  • the combining coefficients are produced for the entire speech pattern by matrix processing in central processor 275 and arithmetic processor 280.
  • the log area parameters of the speech pattern are transferred to processor 275 as per box 801.
  • a speech event feature signal coefficient matrix G is generated (box 805) in accordance with G = ΦΦᵀ, and a Y-Φ correlation matrix C is formed (box 810) in accordance with C = YΦᵀ.
  • the combining coefficient matrix is then produced as per box 815 according to the relationship A = CG⁻¹.
  • the elements of matrix A are the combining coefficients a_ik of equation (1). These combining coefficients are encoded, as is well known in the art, in box 820, and the encoded coefficients are transferred to utilization device 285.
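The boxes 805-815 computation reads as an ordinary least-squares solve for A given Y and Φ. A minimal sketch on synthetic data, assuming the standard normal-equation forms G = ΦΦᵀ and C = YΦᵀ consistent with equation (1):

```python
import numpy as np

rng = np.random.default_rng(2)
p, m, N = 12, 5, 300
Phi = rng.standard_normal((m, N))            # speech event signals (synthetic)
A_true = rng.standard_normal((p, m))         # combining coefficients a_ik
Y = A_true @ Phi                             # log area parameters per equation (1)

G = Phi @ Phi.T                              # speech event coefficient matrix
C = Y @ Phi.T                                # Y-Phi correlation matrix
A = C @ np.linalg.inv(G)                     # least-squares solution A = C G^-1

# With noiseless Y the true coefficients are recovered exactly.
assert np.allclose(A, A_true)
```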
  • the linear predictive parameters, sampled at a rate corresponding to the most rapid change therein, are converted into a sequence of speech event feature signals that are encoded at the much lower speech event occurrence rate; the speech pattern is thereby further compressed to reduce transmission and storage requirements without adversely affecting intelligibility.
  • Utilization device 285 may be a communication facility connected to one of the many speech synthesizer circuits using an LPC all pole filter known in the art.
  • the circuit of Fig. 2 is adapted to compress a spoken message into a sequence of coded speech event feature signals which are transmitted via utilization device 285 to a synthesizer.
  • the speech event feature signals and the combining coefficients of the message are decoded and recombined to form the message log area parameter signals. These log area parameter signals are then utilized to produce a replica of the original message.
  • Fig. 9 depicts a block diagram of a speech synthesizer circuit illustrative of the invention and Fig. 10 shows a flow chart illustrating its operation.
  • Store 915 of Fig. 9 is adapted to store the successive coded speech event feature signals and combining signals received from utilization device 285 of Fig. 2 via line 901 and interface circuit 904.
  • Store 920 receives the sequence of excitation signals required for synthesis via line 903.
  • the excitation signals may comprise a succession of pitch period and voiced/unvoiced signals generated responsive to the voice message by methods well known in the art.
  • Microprocessor 910 is adapted to control the operation of the synthesizer and may be the aforementioned Motorola-type MC68000 integrated circuit.
  • LPC feature signal store 925 is utilized to store the successive log area parameter signals of the spoken message which are formed from the speech event feature signals and combining signals of store 915. Formation of a replica of the spoken message is accomplished in LPC synthesizer 930 responsive to the LPC feature signals from store 925 and the excitation signals from store 920 under control of microprocessor 910.
  • the synthesizer operation is directed by microprocessor 910 under control of permanently stored instruction codes resident in a read only memory associated therewith.
  • the operation of the synthesizer is described in the flow chart of Fig. 10. Referring to Fig. 10, the coded speech event feature signals, the corresponding combining signals, and the excitation signals of the spoken message are received by interface 904 and are transferred to speech event feature signal and combining coefficient signal store 915 and to excitation signal store 920 as per box 1010.
  • the log area parameter signal index I is then reset to one in processor 910 (box 1020) so that the reconstruction of the first log area feature signal y_I(n) is initiated.
  • Speech event feature signal location counter L is reset to one by processor 910 as per box 1025 and the current speech event feature signal samples are read from store 915 (box 1030). The signal sample sequence is filtered to smooth the speech event feature signal as per (box 1035) and the current log area parameter signal is partially formed in box 1040. Speech event location counter L is incremented to address the next speech event feature signal in store 915 (box 1045) and the occurrence of the last feature signal is tested in decision box 1050. Until the last speech event feature signal has been processed, the loop including boxes 1030 through 1050 is iterated so that the current log area parameter signal is generated and stored in LPC feature signal store 925 under control of processor 910.
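The reconstruction loop of boxes 1030-1050 amounts to smoothing each decoded event signal and summing its weighted contributions per equation (1). The moving-average smoother, its length, and all sizes below are assumptions for illustration:

```python
import numpy as np

def reconstruct_log_area(A, phi_signals, smooth_len=5):
    """Smooth each decoded speech event feature signal, then accumulate its
    weighted contribution to every log area parameter:
    y_i(n) = sum_k a_ik phi_k(n).  smooth_len is a hypothetical filter
    length, not a value from the patent."""
    kernel = np.ones(smooth_len) / smooth_len          # moving-average smoother
    smoothed = np.array([np.convolve(phi, kernel, mode='same')
                         for phi in phi_signals])
    return A @ smoothed

A = np.array([[1.0, 0.5],
              [0.2, 2.0]])                             # 2 parameters, 2 events
phi = np.vstack([np.hanning(40), np.roll(np.hanning(40), 10)])
Y = reconstruct_log_area(A, phi)
assert Y.shape == (2, 40)
```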
  • box 1055 is entered from box 1050 and the log area index signal I is incremented (box 1055) to initiate the formation of the next log area parameter signal.
  • the loop from box 1030 through box 1050 is reentered via decision box 1060.
  • processor 910 causes a replica of the spoken message to be formed in LPC synthesizer 930.
  • the synthesizer circuit of Fig. 9 may be readily modified to store the speech event feature signal sequences corresponding to a plurality of spoken messages and to selectively generate replicas of these messages by techniques well known in the art.
  • the speech event feature signal generating circuit of Fig. 2 may receive a sequence of predetermined spoken messages and utilization device 285 may comprise an arrangement to permanently store the speech event feature signals and corresponding combining coefficients for the messages and to generate a read only memory containing said spoken message speech event and combining signals.
  • the read only memory containing the coded speech event and combining signals can be inserted as store 915 in the synthesizer circuit of Fig. 9.


Claims (11)

1. Un procédé pour comprimer des configurations de parole, comprenant les opérations suivantes: on analyse (101,110,120) une configuration de parole pour élaborer, à une première cadence, un ensemble de signaux (y,(n)) représentatifs de caractéristiques acoustiques de la configuration de parole, on génère (130, 140,150) une séquence de signaux codés représentatifs de la configuration de parole, sous la dépendance de l'ensemble précité de signaux de caractéristiques acoustiques, à une seconde cadence inférieure à la première cadence, caractérisé en ce que l'opération de génération comprend: la génération (420, 425) d'une séquence de signaux (4)k(n», chacun d'eux étant représentatif d'un son individuel de la configuration de parole, et chacun d'eux étant une combinaison linéaire des signaux de caractéristiques acoustiques; on détermine (510) les trames temporelles de la configuration de parole dans lesquelles apparaissent les centroïdes de sons individuels, sous la dépendance de l'ensemble de signaux de caractéristiques acoustiques; on génère (625) une séquence de signaux de caractéristiques de sons individuels (φL(I)(n)), sous la dépendance conjointe des signaux de caractéristiques acoustiques et de la détermination des trames temporelles de centroïdes; la génération (805-815) d'un ensemble de coefficients de combinaison de signaux représentatifs de sons individuels (alk), sous la dépendance conjointe des signaux représentatifs de sons individuels et des signaux de caractéristiques acoustiques; et la formation de signal codé sous la dépendance de la séquence de signaux de caractéristiques de sons individuels (715) et des coefficients de combinaison (820).
2. Un procédé pour comprimer des configurations de parole selon la revendication 1, dans lequel l'opération de détermination des trames temporelles de la configuration de parole dans lesquelles apparaissent les centroïdes de sons individuels, comprend la génération (430) d'un signal (v(L)) représentatif des instants d'apparition des sons individuels dans la configuration de parole, sous la dépendance des signaux de caractéristiques acoustiques de la configuration de parole, et la détection de chaque passage par zéro en sens négatif dans le signal d'instants d'apparition de sons individuels.
3. Un procédé pour comprimer des configurations de parole selon la revendication 1 ou la revendication 2, dans lequel l'opération de formation d'un signal codé comprend la génération (710) d'un signal représentatif de la largeur de bande de chaque signal représentatif de la parole; l'échantillonnage du signal de caractéristiques d'événement de parole à une cadence qui correspond à son signal représentatif de la largeur de bande; le codage (715) de chaque signal de caractéristiques d'événement échantillonné; et la génération d'une séquence de signaux codés d'événement de parole, à une cadence qui correspond à la cadence d'apparition des événements de parole dans la configuration de parole.
4. Un procédé pour comprimer des configurations de parole selon l'une quelconque des revendications précédentes, dans lequel les signaux de caractéristiques acoustiques sont des signaux de paramètres de prédiction linéaire, représentatifs de la configuration de parole.
5. Un procédé pour comprimer des configurations de parole selon la revendication 4, dans lequel les signaux de paramètres de prédiction linéaire sont des signaux de paramètre d'aire logarithmique, représentatifs de la configuration de parole.
6. Un procédé pour comprimer des configurations de parole selon la revendication 4, dans lequel les signaux de paramètres de prédiction linéaire sont des signaux d'autocorrélation partielle, représentatifs de la configuration de parole.
7. Appareil pour comprimer des configurations de parole, comprenant des moyens (210, 215, 225, 280) pour analyser une configuration de parole de façon à élaborer, à une première cadence, un ensemble de signaux représentatifs de caractéristiques acoustiques de la configuration de parole, et des moyens (220-260) pour générer une séquence de signaux codés représentatifs de la configuration de parole, sous la dépendance de l'ensemble de signaux de caractéristiques acoustiques, à une seconde cadence qui est inférieure à la première cadence, caractérisé en ce que les moyens de génération comprennent: des moyens (220) destinés à générer une séquence de signaux (1),(n», chacun d'eux étant représentatif d'un son individuel dans la configuration de parole, et chacun d'eux étant une combinaison linéaire de signaux de caractéristiques acoustiques, et pour déterminer les trames temporelles de la configuration de parole dans lesquelles apparaissent les centroïdes de son individuels, sous la dépendance de l'ensemble de signaux de caractéristiques acoustiques, des moyens (230) destinés à générer un ensemble de coefficients de combinaison de signaux représentatifs de sons individuels (a,k), sous la dépendance conjointe des signaux représentatifs de sons individuels et des signaux de caractéristiques acoustiques, des moyens (225) destinés à générer une séquence de signaux de caractéristiques de sons individuels (φL(I)(n)), sous la dépendance conjointe des signaux de caractéristiques acoustiques et de la détermination de trames temporelles de centroïdes, et des moyens (235) destinés à former le signal codé sous la dépendance de la séquence de signaux de caractéristiques de sons individuels et des coefficients de combinaison.
8. Apparatus for compressing speech patterns according to claim 7, wherein the means for determining the time frames of the speech pattern in which the individual sound centroids occur comprise means (220) for producing, responsive to the acoustic feature signals of the speech pattern, a signal representative of the times of occurrence of the individual sounds in the speech pattern, and for detecting each negative-going zero crossing in the individual-sound occurrence signal.
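In signal-processing terms, the centroid-frame detection of claim 8 amounts to locating negative-going zero crossings of the occurrence signal. A minimal sketch with illustrative names (not taken from the patent):

```python
def negative_going_zero_crossings(x):
    """Indices n where x crosses zero going negative: x[n-1] >= 0 and
    x[n] < 0. In claim 8 such crossings of the individual-sound
    occurrence signal mark the frames containing sound centroids."""
    return [n for n in range(1, len(x)) if x[n - 1] >= 0 and x[n] < 0]

print(negative_going_zero_crossings([0.2, 0.5, -0.1, -0.4, 0.3, -0.2]))
# -> [2, 5]
```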
9. Apparatus for compressing speech patterns according to claim 7 or claim 8, wherein the means for forming a signal comprise means (part of 235) for generating a signal representative of the bandwidth of each speech-representative signal; means (part of 235) for sampling each signal representative of an articulation pattern of an individual sound in the speech pattern at a rate corresponding to its bandwidth signal; means (235) for coding each individual-sound articulation pattern signal; and means (part of 235) for producing a sequence of sample signals representative of an individual-sound articulation pattern at a rate corresponding to the bandwidths of the individual-sound articulation pattern signals.
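Claim 9's bandwidth-matched sampling can be pictured as decimating each articulation-pattern signal toward the Nyquist rate implied by its bandwidth, so that narrowband patterns consume fewer samples. A hypothetical sketch, assuming frame rate and bandwidth are given in hertz (both parameter names are illustrative, not from the patent):

```python
def sample_at_bandwidth(signal, frame_rate_hz, bandwidth_hz):
    """Keep every k-th frame of an articulation-pattern signal, with k
    chosen so the retained rate is roughly the Nyquist rate 2*B for the
    signal's bandwidth B. Parameter names are illustrative only."""
    step = max(1, int(frame_rate_hz // (2.0 * bandwidth_hz)))
    return signal[::step]

# A 100 frame/s signal limited to 5 Hz needs only ~10 samples/s.
print(len(sample_at_bandwidth(list(range(100)), 100.0, 5.0)))  # -> 10
```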
10. Apparatus according to any one of claims 7 to 9, wherein the means for analyzing a speech pattern comprise means (210, 215, 275, 280) for generating a set of linear prediction parameter signals representative of the acoustic features of the speech pattern.
11. Apparatus according to any one of claims 7 to 10, comprising means (285 or 910-930) for generating a speech pattern from the coded signal.
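The linear prediction parameter signals of claims 6 and 10 are conventionally derived from frame autocorrelation values via the Levinson-Durbin recursion, which yields the predictor and partial autocorrelation coefficients together. The following is a textbook sketch of that recursion, not the patent's specific apparatus:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: derive LPC predictor coefficients a
    (with a[0] = 1) and partial autocorrelation (reflection) coefficients
    from autocorrelation values r[0..order]."""
    a = [1.0] + [0.0] * order
    parcor = []
    err = r[0]  # prediction error energy
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        parcor.append(k)
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k
    return a, parcor, err

# An AR(1) autocorrelation sequence (rho = 0.5) yields k1 = -0.5, k2 = 0.
coeffs, ks, residual = levinson_durbin([1.0, 0.5, 0.25], 2)
print(coeffs)
```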
EP19840901491 1983-04-12 1984-03-12 Speech pattern processing utilizing a speech pattern compression method Expired EP0138954B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US48423183A 1983-04-12 1983-04-12
US484231 1983-04-12

Publications (3)

Publication Number Publication Date
EP0138954A1 EP0138954A1 (fr) 1985-05-02
EP0138954A4 EP0138954A4 (fr) 1985-11-07
EP0138954B1 true EP0138954B1 (fr) 1988-10-26

Family

ID=23923295

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19840901491 Expired EP0138954B1 (fr) Speech pattern processing utilizing a speech pattern compression method

Country Status (5)

Country Link
EP (1) EP0138954B1 (fr)
JP (1) JP2648138B2 (fr)
CA (1) CA1201533A (fr)
DE (1) DE3474873D1 (fr)
WO (1) WO1984004194A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL8503304A (nl) * 1985-11-29 1987-06-16 Philips Nv Method and device for segmenting an electrical signal derived from an acoustic signal, for example a speech signal.

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3598921A (en) * 1969-04-04 1971-08-10 Nasa Method and apparatus for data compression by a decreasing slope threshold test
US3715512A (en) * 1971-12-20 1973-02-06 Bell Telephone Labor Inc Adaptive predictive speech signal coding system
JPS595916B2 (ja) * 1975-02-13 1984-02-07 NEC Corporation Speech analysis and synthesis apparatus
JPS5326761A (en) * 1976-08-26 1978-03-13 Babcock Hitachi Kk Injecting device for reducing agent for nox
US4280192A (en) * 1977-01-07 1981-07-21 Moll Edward W Minimum space digital storage of analog information
FR2412987A1 (fr) * 1977-12-23 1979-07-20 Ibm France Method for compressing speech-signal data and device implementing said method

Also Published As

Publication number Publication date
JPS60501076A (ja) 1985-07-11
EP0138954A4 (fr) 1985-11-07
WO1984004194A1 (fr) 1984-10-25
CA1201533A (fr) 1986-03-04
EP0138954A1 (fr) 1985-05-02
JP2648138B2 (ja) 1997-08-27
DE3474873D1 (en) 1988-12-01

Similar Documents

Publication Publication Date Title
US4472832A (en) Digital speech coder
US4701954A (en) Multipulse LPC speech processing arrangement
US5495556A (en) Speech synthesizing method and apparatus therefor
KR100427753B1 (ko) Speech signal reproduction method and apparatus, speech decoding method and apparatus, speech synthesis method and apparatus, and portable radio terminal apparatus
US5018200A (en) Communication system capable of improving a speech quality by classifying speech signals
US7191125B2 (en) Method and apparatus for high performance low bit-rate coding of unvoiced speech
USRE43099E1 (en) Speech coder methods and systems
EP0342687B1 (fr) Coded speech transmission system comprising codebooks for the synthesis of low-amplitude components
USRE32580E (en) Digital speech coder
EP0232456A1 (fr) Processeur numérique de la parole utilisant un codage d'excitation arbitraire
US4991215A (en) Multi-pulse coding apparatus with a reduced bit rate
US6141637A (en) Speech signal encoding and decoding system, speech encoding apparatus, speech decoding apparatus, speech encoding and decoding method, and storage medium storing a program for carrying out the method
US4764963A (en) Speech pattern compression arrangement utilizing speech event identification
US5621853A (en) Burst excited linear prediction
EP0138954B1 (fr) Speech pattern processing utilizing a speech pattern compression method
Dankberg et al. Development of a 4.8-9.6 kbps RELP Vocoder
Rebolledo et al. A multirate voice digitizer based upon vector quantization
JPH0480400B2 (fr)
JPH1185198A (ja) Vocoder encoding and decoding apparatus
JP3271966B2 (ja) Encoding apparatus and encoding method
EP0987680A1 (fr) Audio signal processing
WO2001009880A1 (fr) VSELP-type vocoder
GB2266213A (en) Digital signal coding
JPH11500837A (ja) Signal prediction method and device for a speech coder
KR19980035867A (ko) Speech data encoding/decoding apparatus and method therefor

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Designated state(s): DE FR GB

17P Request for examination filed

Effective date: 19850326

17Q First examination report despatched

Effective date: 19870317

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

REF Corresponds to:

Ref document number: 3474873

Country of ref document: DE

Date of ref document: 19881201

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20030224

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20030225

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20030310

Year of fee payment: 20

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20040311

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20