EP0640237B1 - Procede de conversion de signaux vocaux - Google Patents
Procede de conversion de signaux vocaux Download PDFInfo
- Publication number
- EP0640237B1 EP0640237B1 EP94905743A EP94905743A EP0640237B1 EP 0640237 B1 EP0640237 B1 EP 0640237B1 EP 94905743 A EP94905743 A EP 94905743A EP 94905743 A EP94905743 A EP 94905743A EP 0640237 B1 EP0640237 B1 EP 0640237B1
- Authority
- EP
- European Patent Office
- Prior art keywords
- speaker
- sound
- speech
- vocal tract
- cross
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/003—Changing voice quality, e.g. pitch or formants
- G10L21/007—Changing voice quality, e.g. pitch or formants characterised by the process used
- G10L21/013—Adapting to target pitch
- G10L2021/0135—Voice conversion or morphing
Definitions
- the invention relates to a method of converting speech, in which method samples are taken of a speech signal produced by a first speaker for the calculation of reflection coefficients.
- the speech of speech-handicapped persons is often unclear and sounds included therein are difficult to identify.
- the speech quality of speech-handicapped persons causes problems especially when a communications device or network is used for transmitting and transferring a speech signal produced by a speech-handicapped person to a receiver.
- the speech produced by the speech-handicapped person is then still more difficult to identify and understand for a listener.
- regardless of whether a communications device or network transferring speech signals is used it is always difficult for a listener to identify and understand the speech of a speech-handicapped person.
- the object of this invention is to provide a method by which the speech of a speaker can be changed or corrected in such a way that the speech heard by a listener, or the corrected or changed speech signal obtained by a receiver, corresponds either to speech produced by another speaker or to the speech of the same speaker corrected in some desired manner.
- the invention is based on the idea that a speech signal is analyzed by means of the LPC (Linear Prediction Coding) method, and a set of parameters modelling a speaker's vocal tract is created, which parameters typically are characteristics of reflection coefficients.
- WO 92/20064, by the same inventor, discloses a speaker recognition method using such parameters.
- US-A-5 121 434 discloses a speech synthesis method by vocal tract simulation, which also uses such parameters.
- sounds are then identified from the speech to be converted by comparing the cross-sectional areas of the cylinders of the lossless tube, calculated from the reflection coefficients of the sound to be converted, with the respective cross-sectional areas previously obtained for the same sound from several speakers.
- some characteristic, typically an average, is calculated for the cross-sectional areas of each sound for each speaker.
- the stored sound parameters corresponding to each sound, i.e. the cross-sectional areas of the cylinders of the speaker's lossless vocal tract, are subtracted from the instantaneous parameters, providing a difference to be transferred to the next conversion step together with the identifier of the sound.
- the characteristics of the sound parameters corresponding to each sound identifier of the speaker to be imitated, i.e. the target person, have been agreed upon in advance; therefore, by summing said difference and the characteristic of the sound parameters for the same sound of the target person, searched for in the memory, the original sound may be reproduced, but as if the target person had uttered it.
- the remaining sounds of the speech are carried along unchanged, i.e. the sounds for which no characteristics, typically the averages of the cross-sectional areas of the cylinders of the lossless tube of the speaker's vocal tract, have been searched for in the memory on the basis of identifiers.
- An advantage of such a method of converting speech is that the method makes it possible to correct errors and inaccuracies, occurring in speech sounds and caused by the speaker's physical properties, in such a way that the speech can be more easily understood by the listener.
- the method according to the invention makes it possible to convert a speaker's speech into a speech sounding like the speech of another speaker.
- cross-sectional areas of the cylinder portions of the lossless tube model used in the invention can be calculated easily from so-called reflection coefficients produced in conventional speech coding algorithms.
- some other cross-sectional dimension of the area, such as the radius or diameter, may also be used as a reference parameter.
- the cross-section of the tube may also have some other shape.
- Figure 1 shows a perspective view of a lossless tube model comprising successive cylinder portions C1 to C8 and constituting a rough model of the human vocal tract.
- the lossless tube model of Figure 1 can be seen in side view in Figure 2.
- the human vocal tract generally refers to the vocal passage defined by the human vocal cords, the larynx, the pharynx, the mouth and the lips, by means of which tract a human produces speech sounds.
- the cylinder portion C1 illustrates the shape of the vocal tract portion immediately after the glottis between the vocal cords.
- the cylinder portion C8 illustrates the shape of the vocal tract at the lips.
- the cylinder portions C2 to C7 in between illustrate the shape of the discrete vocal tract portions between the glottis and the lips.
- the shape of the vocal tract typically varies continuously during speaking, when sounds of different kinds are produced.
- the diameters and areas of the discrete cylinders C1 to C8 representing the various parts of the vocal tract also vary during speaking.
- WO 92/20064 discloses that the average shape of the vocal tract calculated from a relatively high number of instantaneous vocal tract shapes is a constant characteristic of each speaker, which constant may be used for a more compact transmission of sounds in a telecommunication system, for recognizing the speaker or even for converting the speaker's speech.
- the averages of the cross-sectional areas of the cylinder portions C1 to C8 calculated in the long term from the instantaneous values of the cross-sectional areas of the cylinders C1 to C8 of the lossless tube model of the vocal tract are also relatively exact constants.
- the values of the cross-sectional dimensions of the cylinders are also determined by the values of the actual vocal tract and are thus relatively exact constants characteristic of the speaker.
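The long-term averaging described above can be sketched as a per-cylinder mean over many instantaneous area vectors. A minimal illustration follows; the function name and the list-of-frames layout are assumptions for this sketch, not taken from the patent:

```python
def average_areas(frames):
    """Long-term characteristic of a speaker's vocal tract: the
    per-cylinder average of instantaneous cross-sectional areas
    collected over many frames of the same sound.

    `frames` is a list of area vectors, one vector per analysis frame,
    each with one entry per cylinder portion (e.g. C1..C8).
    """
    n = len(frames)
    # zip(*frames) groups the k-th cylinder's areas across all frames
    return [sum(col) / n for col in zip(*frames)]
```

Fed with, say, 50 frames per second over several seconds of a sound, this yields the relatively exact per-speaker constant the text refers to.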
- the method according to the invention utilizes so-called reflection coefficients, produced as an intermediate result of Linear Predictive Coding (LPC) well known in the art, i.e. so-called PARCOR coefficients rK, which have a certain connection with the shape and structure of the vocal tract.
- the LPC analysis producing the reflection coefficients used in the invention is utilized in many known speech coding methods.
- an input signal IN is sampled in block 10 at a sampling frequency of 8 kHz, and an 8-bit sample sequence S0 is formed.
- the DC component is extracted from the samples so as to eliminate an interfering side tone possibly occurring in coding.
- the sample signal is pre-emphasized in block 12 by weighting high signal frequencies by a first-order FIR (Finite Impulse Response) filter.
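The pre-emphasis step above can be sketched as a one-line difference equation, y[n] = x[n] − α·x[n−1]. The coefficient value below is an assumption (typical values lie between 0.9 and 0.98); the patent does not state the exact value used:

```python
def preemphasize(samples, alpha=0.9375):
    """First-order FIR pre-emphasis: y[n] = x[n] - alpha * x[n-1].

    Weights the high frequencies of the sampled signal before LPC
    analysis.  `alpha` is an assumed, typical coefficient value.
    """
    out = []
    prev = 0.0
    for x in samples:
        out.append(x - alpha * prev)
        prev = x  # remember x[n-1] for the next step
    return out
```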
- the values of eight so-called reflection coefficients r K of a short-term analysis filter used in a speech coder are calculated from the obtained values of the auto-correlation function by Schur's recursion or some other suitable recursion method.
- Schur's recursion produces new reflection coefficients every 20 ms.
- each coefficient comprises 16 bits, and there are eight coefficients.
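The reflection coefficients can be computed from the autocorrelation values by several equivalent recursions. The sketch below uses the Levinson-Durbin recursion, which yields the same PARCOR coefficients as the Schur recursion named in the text; the function name and test data are illustrative:

```python
def reflection_coefficients(r, order=8):
    """PARCOR (reflection) coefficients from autocorrelation values
    r[0..order], via the Levinson-Durbin recursion.

    Returns a list of `order` reflection coefficients, as produced by
    the short-term analysis filter of an LPC speech coder.
    """
    a = [0.0] * (order + 1)  # prediction coefficients a[1..i]
    k = []                   # reflection coefficients k[1..order]
    err = r[0]               # prediction error energy
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc -= a[j] * r[i - j]
        ki = acc / err
        k.append(ki)
        # update the prediction coefficients for the next order
        a_new = a[:]
        a_new[i] = ki
        for j in range(1, i):
            a_new[j] = a[j] - ki * a[i - j]
        a = a_new
        err *= (1.0 - ki * ki)
    return k
```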
- in step 16, the cross-sectional area A K of each cylinder portion C K of the lossless tube modelling the speaker's vocal tract by means of the cylindrical portions is calculated from the reflection coefficients r K calculated for each frame. As Schur's recursion produces new reflection coefficients every 20 ms, 50 cross-sectional areas per second are obtained for each cylinder portion C K .
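Step 16 rests on the standard acoustic-tube relation between a reflection coefficient and the areas of two adjacent cylinders. A minimal sketch follows; the sign convention and the normalised first area are assumptions, since the patent does not fix them explicitly:

```python
def cylinder_areas(refl, a0=1.0):
    """Cross-sectional areas A_k of the lossless-tube cylinders from
    reflection coefficients r_k, using the acoustic-tube relation
        r_k = (A_{k+1} - A_k) / (A_{k+1} + A_k)
    i.e.  A_{k+1} = A_k * (1 + r_k) / (1 - r_k).

    `a0` is an assumed normalisation for the area at the glottis end.
    With 8 reflection coefficients this yields 8 areas (C1..C8).
    """
    areas = [a0]
    for r in refl:
        areas.append(areas[-1] * (1.0 + r) / (1.0 - r))
    return areas[1:]
```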
- the sound of the speech signal is identified in step 17 by comparing these calculated cross-sectional areas of the cylinders with the values of the cross-sectional areas of the cylinders stored in a parameter memory.
- in step 18, averages of the first speaker's previous parameters for the same sound are searched for in the memory, and the instantaneous parameters of a sample just arrived from the same speaker are subtracted from these averages, producing a difference, which is stored in the memory.
- in step 19, the prestored averages of the cross-sectional areas of the cylinders of several samples of the sound concerned of the target person are searched for in the memory, the target person being the person whose speech the converted speech shall resemble.
- the target person may also be e.g. the first speaker himself, but in such a way that the articulation errors made by the speaker are corrected by using new, more exact parameters in this conversion step, by means of which the speaker's speech can be converted into clearer or more distinct speech, for example.
- in step 20, the difference calculated above in step 18 is added to the average of the cross-sectional areas of the cylinders of the same sound of the target person. From this sum, reflection coefficients are calculated in step 21 and LPC-decoded in step 22, which decoding produces electric speech signals to be applied to a loudspeaker or a data communications system, for instance.
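The core of steps 18 to 20 is a subtract-then-add operation on the area vectors of one identified sound. A minimal sketch, with illustrative names; the per-sound, per-speaker average memories are assumed to have been trained beforehand:

```python
def convert_sound(instant_areas, source_avg, target_avg):
    """One sound-level conversion step (cf. steps 18-20):
    subtract the source speaker's stored average areas for the
    identified sound from the instantaneous areas, then add the
    resulting difference to the target speaker's stored average
    areas for the same sound.  The result is the converted area
    vector from which new reflection coefficients are calculated.
    """
    diff = [a - s for a, s in zip(instant_areas, source_avg)]
    return [t + d for t, d in zip(target_avg, diff)]
```

The difference thus carries the speaker's instantaneous deviation from his own average, which is what makes the reproduced sound "the same sound" even though it is rebuilt on the target person's average vocal tract shape.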
- speech conversion on the sound level will now be described with reference to the block diagram of Figure 5a.
- although speech can be coded and converted by means of a single sound, it is reasonable to use in the conversion all those sounds whose conversion is desired, in such a way that the listener hears them as new sounds.
- speech can be converted so as to sound as if another speaker spoke instead of the actual speaker, or so as to improve the speech quality, for example in such a way that the listener distinguishes the sounds of the converted speech more clearly than the sounds of the original, unconverted speech.
- in speech conversion, for instance all vowels and consonants can be used.
- the instantaneous lossless tube model 59 ( Figure 5a) created from a speech signal can be identified in block 52 to correspond to a certain sound, if the cross-sectional dimension of each cylinder portion of the instantaneous lossless tube model 59 is within the predetermined stored limit values of a known speaker's respective sound.
- These sound-specific and cylinder-specific limit values are stored in a so-called quantization table 54 creating a so-called sound mask.
- the reference numerals 60 and 61 illustrate how said sound- and cylinder-specific limit values create a mask or model for each sound, into whose allowed areas 60A and 61A (unshaded areas) the instantaneous vocal tract model 59 to be identified has to fit.
- Block 52 thus acts as a kind of sound filter, which classifies the vocal tract models into correct sound groups a, e, i, etc.
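The sound filtering of block 52 can be sketched as a per-cylinder range check against each sound's stored mask. The dictionary layout and names below are illustrative assumptions:

```python
def identify_sound(model, masks):
    """Classify an instantaneous lossless-tube model (cf. block 52).

    `model` is the vector of cross-sectional areas of the cylinder
    portions; `masks` maps each sound identifier to per-cylinder
    (low, high) limit values, i.e. the 'sound mask' of the
    quantization table 54.  Returns the identifier of the first sound
    whose mask the model fits entirely within, or None.
    """
    for sound_id, limits in masks.items():
        if all(lo <= a <= hi for a, (lo, hi) in zip(model, limits)):
            return sound_id
    return None
```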
- parameters corresponding to each sound such as a, e, i, k, are searched for in a parameter memory 55 on the basis of identifiers 53 of the sounds identified in block 52 of Figure 5a, the parameters being sound-specific characteristics, e.g. averages, of the cross-sectional areas of the cylinders of the lossless tube.
- it has also been possible to provide each sound to be identified with an identifier 53, by means of which the parameters corresponding to each instantaneous sound can be searched for in the parameter memory 55.
- these parameters can be applied to a subtraction means 56, which calculates, according to Figure 5a, the difference between the parameters of a sound searched for in the parameter memory by means of the sound identifier, i.e. the characteristic of the cross-sectional areas of the cylinders of the lossless tube, typically the average, and the instantaneous values of said sound.
- This difference is sent further to be summed and decoded in the manner shown in Figure 5b, which will be described in more detail in connection with the explanation of said figure.
- Figure 5b is a transaction diagram illustrating a reproduction of a speech signal on a sound level in the speech conversion method according to the invention.
- An identifier 500 of an identified sound is received, and the parameters corresponding to the sound are searched for in a parameter memory 501 on the basis of the sound identifier 500 and supplied 502 to a summer 503, which creates new reflection coefficients by summing the difference and the parameters.
- a new speech signal is calculated by decoding the new reflection coefficients.
- FIG. 6 is a functional and simplified block diagram of a speech converter 600 implementing one embodiment of the method according to the invention.
- the speech of the first speaker, i.e. the speaker whose speech is to be converted, comes to the speech converter 600 through a microphone 601.
- the converter may also be connected to some data communication system, whereby the speech signal to be converted enters the converter as an electric signal.
- the speech signal converted by the microphone 601 is LPC-coded 602 (encoded), and reflection coefficients are calculated from it for each sound.
- the other parts of the signal are sent 603 forward to be decoded 615 later.
- the calculated reflection coefficients are transmitted to a unit 604 for the calculation of characteristics, which unit calculates from the reflection coefficients the characteristics of the cross-sectional areas of the cylinders of the lossless tube modelling the speaker's vocal tract for each sound, which characteristics are transmitted further to a sound identification unit 605.
- the sound identification unit 605 identifies the sound by comparing the cross-sectional areas of the cylinder portions of a lossless tube model of the speaker's vocal tract, calculated from the reflection coefficients of the sound produced by the first speaker, with at least one previous speaker's respective previously identified sound-specific values stored in a memory. As a result of this comparison, the identifier of the identified sound is obtained.
- parameters are searched for 607, 609 in a parameter table 608 of the speaker, in which table some characteristics, e.g. averages, of this first speaker's respective parameters for the same sound have been stored earlier, and the subtraction means 606 subtracts from them the instantaneous parameters of a sample just arrived from the same speaker.
- the characteristic or characteristics corresponding to that identified sound, e.g. the sound-specific average of the cross-sectional areas of the lossless tube modelling the speaker's vocal tract calculated from the reflection coefficients, is searched for 610, 612 in a parameter table 611 of the target person, i.e. a second speaker, being the speaker into whose speech the speech of the first speaker shall be converted, and is supplied to a summer 613.
- the summer 613 adds the difference 617 calculated by the subtraction means to the characteristic or characteristics searched for in the parameter table 611 of the target person, for instance to the sound-specific average of the cross-sectional areas of the cylinders of the lossless tube modelling the speaker's vocal tract, calculated from the reflection coefficients.
- a sum is thus produced, from which reflection coefficients are calculated in a reproduction block 614 of reflection coefficients.
- the result is a signal in which the first speaker's speech has been converted in such a way that the listener believes he hears the second speaker's speech, although the actual speaker is the first speaker, whose speech has been converted so as to sound like the second speaker's speech.
- this speech signal is applied further to an LPC decoder 615, in which it is LPC-decoded, and the parts 603 of the speech signal that were not LPC-coded are added thereto.
- this produces the final speech signal, which is converted into acoustic form in a loudspeaker 616.
- alternatively, this speech signal can just as well be left in electric form and transferred to some data or telecommunication system to be transmitted or transferred further.
Claims (2)
- A method of converting speech signals, comprising the steps of: taking (601) samples of a speech signal (IN) produced by a first speaker and calculating (602) reflection coefficients (rk); calculating (604), from the reflection coefficients (rk), characteristics of cross-sectional areas (Ak) of cylinder portions of a lossless tube modelling the first speaker's vocal tract; identifying sounds by comparing (605) said characteristics of the cross-sectional areas (Ak) of the cylinder portions of the first speaker's lossless tube with respective stored sound-specific characteristics of at least one previous speaker, relating to cross-sectional areas (Ak) of cylinder portions of a lossless tube modelling said previous speaker's vocal tract, and providing the identified sounds with respective identifiers; calculating (606) differences between the stored characteristics of the cross-sectional areas (Ak) of the cylinder portions of the lossless tube modelling the first speaker's vocal tract for said sounds and the instantaneous parameters of a sample just arrived from the same speaker; searching (610) in a memory, on the basis of the identifiers of the identified sounds, for speaker-specific characteristics of a second speaker relating to cross-sectional areas (Ak) of cylinder portions of a lossless tube modelling this second speaker's vocal tract for the same sounds; forming (613) a sum by adding said differences (617) and the speaker-specific characteristics (612) of the second speaker relating to the cross-sectional areas of the cylinder portions of the lossless tube modelling this second speaker's vocal tract for the same sounds; calculating (614) new reflection coefficients from this sum; and producing (615) a new speech signal (616) from said new reflection coefficients.
- A method according to claim 1, characterized in that a characteristic representing the same sounds of the first speaker is calculated (604) for the physical dimensions of the lossless tube and stored in a memory (608).
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FI930629 | 1993-02-12 | ||
FI930629A FI96247C (fi) | 1993-02-12 | 1993-02-12 | Menetelmä puheen muuntamiseksi |
PCT/FI1994/000054 WO1994018669A1 (fr) | 1993-02-12 | 1994-02-10 | Procede de conversion de signaux vocaux |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0640237A1 EP0640237A1 (fr) | 1995-03-01 |
EP0640237B1 true EP0640237B1 (fr) | 1998-10-14 |
Family
ID=8537362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP94905743A Expired - Lifetime EP0640237B1 (fr) | 1993-02-12 | 1994-02-10 | Procede de conversion de signaux vocaux |
Country Status (9)
Country | Link |
---|---|
US (1) | US5659658A (fr) |
EP (1) | EP0640237B1 (fr) |
JP (1) | JPH07509077A (fr) |
CN (1) | CN1049062C (fr) |
AT (1) | ATE172317T1 (fr) |
AU (1) | AU668022B2 (fr) |
DE (1) | DE69413912T2 (fr) |
FI (1) | FI96247C (fr) |
WO (1) | WO1994018669A1 (fr) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9419388D0 (en) | 1994-09-26 | 1994-11-09 | Canon Kk | Speech analysis |
JP3522012B2 (ja) * | 1995-08-23 | 2004-04-26 | 沖電気工業株式会社 | コード励振線形予測符号化装置 |
US6240384B1 (en) | 1995-12-04 | 2001-05-29 | Kabushiki Kaisha Toshiba | Speech synthesis method |
JP3481027B2 (ja) * | 1995-12-18 | 2003-12-22 | 沖電気工業株式会社 | 音声符号化装置 |
US6542857B1 (en) * | 1996-02-06 | 2003-04-01 | The Regents Of The University Of California | System and method for characterizing synthesizing and/or canceling out acoustic signals from inanimate sound sources |
US6377919B1 (en) | 1996-02-06 | 2002-04-23 | The Regents Of The University Of California | System and method for characterizing voiced excitations of speech and acoustic signals, removing acoustic noise from speech, and synthesizing speech |
DE10034236C1 (de) * | 2000-07-14 | 2001-12-20 | Siemens Ag | Sprachkorrekturverfahren |
US7016833B2 (en) * | 2000-11-21 | 2006-03-21 | The Regents Of The University Of California | Speaker verification system using acoustic data and non-acoustic data |
US6876968B2 (en) * | 2001-03-08 | 2005-04-05 | Matsushita Electric Industrial Co., Ltd. | Run time synthesizer adaptation to improve intelligibility of synthesized speech |
CN1303582C (zh) * | 2003-09-09 | 2007-03-07 | 摩托罗拉公司 | 自动语音归类方法 |
EP2017832A4 (fr) * | 2005-12-02 | 2009-10-21 | Asahi Chemical Ind | Systeme de conversion de la qualite vocale |
US8251924B2 (en) * | 2006-07-07 | 2012-08-28 | Ambient Corporation | Neural translator |
GB2466668A (en) * | 2009-01-06 | 2010-07-07 | Skype Ltd | Speech filtering |
CN105654941A (zh) * | 2016-01-20 | 2016-06-08 | 华南理工大学 | 一种基于指向目标人变声比例参数的语音变声方法及装置 |
CN110335630B (zh) * | 2019-07-08 | 2020-08-28 | 北京达佳互联信息技术有限公司 | 虚拟道具显示方法、装置、电子设备及存储介质 |
US11514924B2 (en) * | 2020-02-21 | 2022-11-29 | International Business Machines Corporation | Dynamic creation and insertion of content |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CH581878A5 (fr) * | 1974-07-22 | 1976-11-15 | Gretag Ag | |
US4624012A (en) * | 1982-05-06 | 1986-11-18 | Texas Instruments Incorporated | Method and apparatus for converting voice characteristics of synthesized speech |
CA1334868C (fr) * | 1987-04-14 | 1995-03-21 | Norio Suda | Methode et appareil de synthese de sons |
FR2632725B1 (fr) * | 1988-06-14 | 1990-09-28 | Centre Nat Rech Scient | Procede et dispositif d'analyse, synthese, codage de la parole |
US5054083A (en) * | 1989-05-09 | 1991-10-01 | Texas Instruments Incorporated | Voice verification circuit for validating the identity of an unknown person |
US5522013A (en) * | 1991-04-30 | 1996-05-28 | Nokia Telecommunications Oy | Method for speaker recognition using a lossless tube model of the speaker's |
FI91925C (fi) * | 1991-04-30 | 1994-08-25 | Nokia Telecommunications Oy | Menetelmä puhujan tunnistamiseksi |
US5165008A (en) * | 1991-09-18 | 1992-11-17 | U S West Advanced Technologies, Inc. | Speech synthesis using perceptual linear prediction parameters |
US5528726A (en) * | 1992-01-27 | 1996-06-18 | The Board Of Trustees Of The Leland Stanford Junior University | Digital waveguide speech synthesis system and method |
-
1993
- 1993-02-12 FI FI930629A patent/FI96247C/fi active
-
1994
- 1994-02-10 JP JP6517698A patent/JPH07509077A/ja active Pending
- 1994-02-10 AU AU59730/94A patent/AU668022B2/en not_active Ceased
- 1994-02-10 DE DE69413912T patent/DE69413912T2/de not_active Expired - Fee Related
- 1994-02-10 CN CN94190055A patent/CN1049062C/zh not_active Expired - Fee Related
- 1994-02-10 EP EP94905743A patent/EP0640237B1/fr not_active Expired - Lifetime
- 1994-02-10 WO PCT/FI1994/000054 patent/WO1994018669A1/fr active IP Right Grant
- 1994-02-10 AT AT94905743T patent/ATE172317T1/de not_active IP Right Cessation
- 1994-02-10 US US08/313,195 patent/US5659658A/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
DE69413912T2 (de) | 1999-04-01 |
US5659658A (en) | 1997-08-19 |
EP0640237A1 (fr) | 1995-03-01 |
AU5973094A (en) | 1994-08-29 |
FI96247B (fi) | 1996-02-15 |
CN1049062C (zh) | 2000-02-02 |
CN1102291A (zh) | 1995-05-03 |
AU668022B2 (en) | 1996-04-18 |
FI930629A (fi) | 1994-08-13 |
WO1994018669A1 (fr) | 1994-08-18 |
DE69413912D1 (de) | 1998-11-19 |
ATE172317T1 (de) | 1998-10-15 |
FI96247C (fi) | 1996-05-27 |
JPH07509077A (ja) | 1995-10-05 |
FI930629A0 (fi) | 1993-02-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LI MC NL PT SE |
|
17P | Request for examination filed |
Effective date: 19940929 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: NOKIA TELECOMMUNICATIONS OY |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
17Q | First examination report despatched |
Effective date: 19971222 |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAH | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOS IGRA |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LI MC NL PT SE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 19981014 Ref country code: ES Free format text: THE PATENT HAS BEEN ANNULLED BY A DECISION OF A NATIONAL AUTHORITY Effective date: 19981014 |
|
REF | Corresponds to: |
Ref document number: 172317 Country of ref document: AT Date of ref document: 19981015 Kind code of ref document: T |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: NV Representative=s name: ICB INGENIEURS CONSEILS EN BREVETS SA Ref country code: CH Ref legal event code: EP |
|
REF | Corresponds to: |
Ref document number: 69413912 Country of ref document: DE Date of ref document: 19981119 |
|
ET | Fr: translation filed |
REG | Reference to a national code |
Ref country code: IE |
Ref legal event code: FG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT |
Effective date: 19990114 |
Ref country code: DK |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT |
Effective date: 19990114 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 19990210 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 19990831 |
|
26N | No opposition filed |
REG | Reference to a national code |
Ref country code: IE |
Ref legal event code: MM4A |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE |
Payment date: 20010205 |
Year of fee payment: 8 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: SE |
Payment date: 20010206 |
Year of fee payment: 8 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB |
Payment date: 20010207 |
Year of fee payment: 8 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR |
Payment date: 20010213 |
Year of fee payment: 8 |
Ref country code: CH |
Payment date: 20010213 |
Year of fee payment: 8 |
Ref country code: AT |
Payment date: 20010213 |
Year of fee payment: 8 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: NL |
Payment date: 20010228 |
Year of fee payment: 8 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: BE |
Payment date: 20010427 |
Year of fee payment: 8 |
|
REG | Reference to a national code |
Ref country code: GB |
Ref legal event code: IF02 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20020210 |
Ref country code: AT |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20020210 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20020211 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20020228 |
Ref country code: CH |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20020228 |
Ref country code: BE |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20020228 |
|
REG | Reference to a national code |
Ref country code: GB |
Ref legal event code: 732E |
|
BERE | Be: lapsed |
Owner name: NOKIA TELECOMMUNICATIONS OY |
Effective date: 20020228 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20020901 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20020903 |
|
EUG | Se: european patent has lapsed |
Ref document number: 94905743.4 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20020210 |
|
REG | Reference to a national code |
Ref country code: CH |
Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20021031 |
|
NLV4 | Nl: lapsed or annulled due to non-payment of the annual fee |
Effective date: 20020901 |
|
REG | Reference to a national code |
Ref country code: FR |
Ref legal event code: ST |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT |
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES |
Effective date: 20050210 |