EP1344211B1 - Device and method for differentiated speech output - Google Patents

Device and method for differentiated speech output

Info

Publication number
EP1344211B1
EP1344211B1 (application EP01991746A)
Authority
EP
European Patent Office
Prior art keywords
speech
output
parameters
speech output
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP01991746A
Other languages
German (de)
French (fr)
Other versions
EP1344211A1 (en)
Inventor
Georg Obert
Klaus Bengler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG filed Critical Bayerische Motoren Werke AG
Publication of EP1344211A1
Application granted
Publication of EP1344211B1
Anticipated expiration
Expired - Lifetime (current status)

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00: Speech synthesis; Text to speech systems
    • G10L 13/02: Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/033: Voice editing, e.g. manipulating the voice of the synthesiser

Definitions

  • The present invention relates to a device and method for differentiated speech output, to systems for use with the speech output device, and to combinations of a speech output device with at least two systems, in particular for use in a vehicle.
  • EP-A-0 901 000 describes a device for processing messages, with receiving means for receiving transmitted messages, a memory for storing a plurality of different articulations (tones of voice) and assignment means for assigning one articulation from the plurality of articulations to at least one received message. Another articulation is assigned to another received message, and output means output the first message with a first articulation and the second message with a second articulation.
  • A particular object of the invention is to provide a central speech output device for a plurality of systems, in which a single speech generator with a small parameter memory is driven by the systems.
  • The invention has the advantage that speech outputs for different systems are possible with a single speech output device or speech synthesis device, each system being identifiable by differences in voice characteristics.
  • A parameter set is assigned to each system and is used by the speech synthesis device for speech output from that system.
  • For example, a first parameter set is provided for an on-board computer, a second parameter set for a navigation system, a third parameter set for traffic information, a fourth parameter set for a TTS (text-to-speech) system such as e-mail, and one or more further parameter sets for additional systems.
  • Depending on the assigned parameter set, the speech synthesizer generates the speech output, for example, with a soft female voice, e.g. for the speech output of a navigation system, or with a hard male bass voice, e.g. for the speech output of traffic reports.
  • A method and a device for full speech synthesis are used, preferably a formant synthesizer.
  • The control parameters for the synthesizer are divided into classes.
  • One class of dynamic parameters controls the articulation, i.e. the movement of the vocal tract during speaking.
  • A second class of static parameters controls speaker-characteristic features, such as the generator fundamental frequency and the fixed formants, which for a child, a woman or a male speaker are determined by the different geometric dimensions of the vocal tract.
  • The device according to the invention and the method according to the invention can be used in particular in the systems of a vehicle.
  • Each system has two ways of controlling a speech output.
  • The first is to send control commands for the speech articulation, the sequences of control parameters for words, sentences and sentence sequences being stored in the system.
  • The second is via a second output, which switches over the parameter set that determines the speaker characteristic.
  • To distinguish the information sources, the generator and formant parameters can additionally be changed dynamically.
  • Audible differences in prosody can thereby be achieved, such as in the duration and/or stress of syllable segments and/or in the sentence melody.
  • In particular, prosodic modulation depending on, for example, the traffic situation can be used for the speech output of announcements.
  • The urgency of a piece of information can be expressed by modulating the voice.
  • The invention has the advantage that, for example in a vehicle, a single speech generator with a small parameter memory can be driven by several information sources.
  • The information sources can each be given different voice characteristics.
  • Emotional expression can also be conveyed in the voice according to the invention.
  • Predefined parameter templates make it easy to change the voice characteristics.
  • The method is also suitable for converting free text into speech (text-to-speech), e.g. for reading e-mail aloud.
  • Fig. 1 shows a schematic diagram of a preferred embodiment of the invention for differentiated speech output with multiple systems according to the invention.
  • The preferred embodiment of the invention illustrated in Fig. 1 comprises a speech output unit 1 with a speech synthesis device 10, which in the example is a vocal tract synthesis module based on a full synthesis of speech.
  • As speech synthesis device 10, a formant synthesizer such as KLATTALK can be used, for example.
  • the speech synthesizer 10 is connected to an amplifier 12 whose output 14 provides an audio signal that outputs speech through a speaker (not shown).
  • N parameter sets 21, 22 to 2N are assigned to the speech synthesis device 10 and, in the example shown, are stored in a memory 20 of the speech output unit 1.
  • N systems 31, 32 to 3N are shown, which are each connected to the voice output unit 1 via a data connection, such as individual lines, a bus system or data channels.
  • Each system can perform a speech output via the speech output unit.
  • In detail, there are an on-board computer 31 with an associated parameter set 21 for the on-board computer, a navigation system 32 with an associated parameter set 22 for navigation, a traffic information system 33 with an associated parameter set 23 for traffic information, and an e-mail system as TTS system 34 with an associated parameter set 24 for e-mail.
  • Additional systems 3N may be provided with a respective assigned parameter set 2N.
  • For traffic reports, for example, a parameter set 23 may be provided with which a hard male bass voice is used for the speech output.
  • The speech outputs may be performed sequentially in time, in the order in which the speech output requests are received from the systems.
  • Higher-priority information, e.g. traffic information in dangerous situations such as a wrong-way driver, is preferably output first.
  • Information of the highest priority, e.g. information from the on-board computer about vehicle malfunctions or the onset of slippery road conditions, is output immediately, a running speech output being interruptible for this purpose. The interrupted speech output can then be completed or repeated.
  • The invention has the advantage that systems with an acoustic display can provide the driver with information from various systems without distracting him from his task, as visual displays do.
  • Using a speech synthesis device that can be shared by various on-board computers saves costs. Compared with the speech-production methods previously used in, for example, navigation systems, the memory requirement can be reduced.
  • The invention can be used particularly advantageously in motor vehicles.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a device and to a method for differentiated speech output. The systems available in a motor vehicle, such as an on-board computer, a navigation system and others, can be linked to a speech output device. The speech outputs of the different systems can be differentiated by voice characteristics.

Description

The present invention relates to a device for differentiated speech output or speech generation and an associated method, to systems for use with the speech output device, and to combinations of a speech output device with at least two systems, in particular for use in a vehicle.

In vehicles, individual systems are used that have an acoustic human-machine interface for speech output. A speech output module is directly assigned to each of these systems. The speech-production methods used are usually based on pulse-code modulation (PCM), to which a subsequent compression stage (e.g. MPEG) may be connected. Other systems use speech synthesis methods that form words and sentences mainly by assembling syllable segments (phonemes), i.e. by signal manipulation.

The speech output methods mentioned are also speaker-dependent: whenever the word or text corpus is extended, the same human speaker must be engaged again for recordings. Furthermore, PCM methods, like high-quality phoneme synthesis by signal manipulation, require considerable memory to store the texts or syllable segments. With both methods the memory requirement grows considerably when several national languages are to be output.

Furthermore, methods based on a full synthesis of speech are known. Known in particular are methods that reproduce the human vocal tract as an electrical equivalent and work with a tone generator and several downstream filters (source-filter model). A device operating according to this method is a so-called formant synthesizer (e.g. KLATTALK). Such a formant synthesizer has the advantage that the voice-characteristic properties can be influenced.
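
As a rough numerical illustration of the source-filter model (a sketch, not the patent's implementation): a periodic impulse train standing in for the glottal source is passed through a cascade of second-order resonators whose centre frequencies and bandwidths act as formants. The sample rate, pitch and formant values below are assumptions chosen for the example.

```python
import math

def resonator(signal, f_hz, bw_hz, fs=16000):
    """Second-order digital resonator (Klatt-style) tuned to one formant."""
    c = -math.exp(-2 * math.pi * bw_hz / fs)
    b = 2 * math.exp(-math.pi * bw_hz / fs) * math.cos(2 * math.pi * f_hz / fs)
    a = 1.0 - b - c
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = a * x + b * y1 + c * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def impulse_train(f0_hz, duration_s, fs=16000):
    """Very crude glottal source: one impulse per pitch period."""
    n = int(duration_s * fs)
    period = int(fs / f0_hz)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

# Speaker-characteristic settings (illustrative values only): a lower fundamental
# and lower formants give a male-sounding vowel, higher ones a female-sounding vowel.
source = impulse_train(f0_hz=120, duration_s=0.3)
vowel = source
for formant_hz, bandwidth_hz in [(730, 60), (1090, 90), (2440, 120)]:  # roughly an /a/ vowel
    vowel = resonator(vowel, formant_hz, bandwidth_hz)
```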

EP-A-0 901 000 describes a device for processing messages, with receiving means for receiving transmitted messages, a memory for storing a plurality of different articulations (tones of voice) and assignment means for assigning one articulation from the plurality of articulations to at least one received message. Another articulation is assigned to another received message, and output means output the first message with a first articulation and the second message with a second articulation.

RUTLEDGE J C ET AL: "SYNTHESIZING STYLED SPEECH USING THE KLATT SYNTHESIZER", Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Detroit, May 9-12, 1995, IEEE, New York, US, Vol. 1, 9 May 1995, pages 648-651, XP000658077, ISBN: 0-7803-2432-3, deals generally with the operation of, and experimental experience in building, vocal-tract-based voice synthesizers (Klatt synthesizers) for synthesizing different speech styles with different voice characteristics.

A particular object of the invention is to provide a central speech output device for a plurality of systems, in which a single speech generator with a small parameter memory is driven by the systems.

In terms of the device, this object is achieved by the features of claim 1. Advantageous embodiments are the subject of the dependent claims.

The invention has the advantage that speech outputs for different systems are possible with a single speech output device or speech synthesis device, each system being identifiable by differences in voice characteristics.

According to a preferred embodiment of the invention, a parameter set is assigned to each system and is used by the speech synthesis device for speech output from that system. For example, a first parameter set is provided for an on-board computer, a second parameter set for a navigation system, a third parameter set for traffic information, a fourth parameter set for a TTS (text-to-speech) system such as e-mail, and one or more further parameter sets for additional systems.
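
A minimal sketch of this per-system assignment, assuming an invented data structure and illustrative values (the patent does not prescribe any particular representation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VoiceParameterSet:
    """Static, speaker-characteristic parameters kept in the small parameter memory."""
    fundamental_hz: float   # generator fundamental frequency
    formant_shift: float    # scales the fixed formants (vocal-tract size)
    timbre: str             # descriptive label only

# One parameter set per connected system (illustrative values).
PARAMETER_SETS = {
    "onboard_computer": VoiceParameterSet(110.0, 1.00, "neutral male"),
    "navigation":       VoiceParameterSet(210.0, 1.18, "soft female"),
    "traffic_info":     VoiceParameterSet(90.0,  0.95, "hard male bass"),
    "email_tts":        VoiceParameterSet(170.0, 1.10, "neutral female"),
}

def select_parameter_set(system_id: str) -> VoiceParameterSet:
    """Called by the speech output unit when a system requests a speech output."""
    return PARAMETER_SETS[system_id]
```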

Depending on the assigned parameter set, the speech synthesis device generates the speech output, for example, with a soft female voice, e.g. for the speech output of a navigation system, or with a hard male bass voice, e.g. for the speech output of traffic reports.

According to a preferred embodiment of the invention, a method and a device for full speech synthesis are used, preferably a formant synthesizer. The control parameters for the synthesizer are divided into classes. One class of dynamic parameters controls the articulation, i.e. the movement of the vocal tract during speaking. A second class of static parameters controls speaker-characteristic features, such as the generator fundamental frequency and the fixed formants, which for a child, a woman or a male speaker are determined by the different geometric dimensions of the vocal tract.
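
The two parameter classes could be modelled roughly as follows; the field names are hypothetical and only illustrate the static/dynamic split described above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StaticParameters:
    """Speaker-characteristic parameters: fixed for a given voice, switched per system."""
    fundamental_hz: float            # generator fundamental frequency
    formants_hz: List[float]         # fixed formant positions set by vocal-tract geometry

@dataclass
class DynamicFrame:
    """Articulation parameters for one short time frame of speech (vocal-tract movement)."""
    duration_ms: float
    formant_targets_hz: List[float]  # time-varying resonances during articulation
    voiced: bool                     # voiced vs. unvoiced excitation

# A word or sentence is a stored sequence of dynamic frames; the same sequence
# rendered with different StaticParameters yields audibly different voices.
ADULT_MALE = StaticParameters(fundamental_hz=110.0, formants_hz=[500.0, 1500.0, 2500.0])
CHILD      = StaticParameters(fundamental_hz=260.0, formants_hz=[730.0, 2100.0, 3200.0])
```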

In an extended model of the formant synthesizer, voiced and unvoiced sounds can be generated separately. Further parameters can then switch in additional resonators or attenuators, or influence the dynamic parameters for the articulation.

The device according to the invention and the method according to the invention can be used in particular in the systems of a vehicle. Each system has two ways of controlling a speech output. The first is to send control commands for the speech articulation, the sequences of control parameters for words, sentences and sentence sequences being stored in the system. The second is via a second output, which switches over the parameter set that determines the speaker characteristic.
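
In sketch form, the two control paths of a connected system might look like this; the method names and transport are assumptions for illustration (the patent only specifies a selection signal and articulation control commands sent over a data connection):

```python
class SpeechOutputUnit:
    """Central unit: one synthesizer, one small memory of per-system parameter sets."""

    def __init__(self, parameter_sets):
        self.parameter_sets = parameter_sets   # e.g. {"navigation": {...}, ...}
        self.active_set = None

    def select_parameter_set(self, system_id):
        # Second control path: a selection signal switches the speaker characteristic.
        self.active_set = self.parameter_sets[system_id]

    def speak(self, articulation_commands):
        # First control path: the system sends its stored sequence of articulation
        # control parameters for words, sentences and sentence sequences.
        for command in articulation_commands:
            self._synthesize_frame(command, self.active_set)

    def _synthesize_frame(self, command, static_parameters):
        ...  # drive the formant synthesizer (omitted in this sketch)

# A navigation system would first switch the voice, then stream its stored commands:
# unit.select_parameter_set("navigation")
# unit.speak(navigation_system.stored_commands_for("turn_left_in_200_m"))
```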

Alternatively or additionally, it is also possible to store this parameter data set directly in the system and to load it into the speech synthesis device whenever a speech output is required.

According to a further preferred embodiment, which can be used as an alternative or in addition to the above embodiments, the generator and formant parameters can additionally be changed dynamically in order to distinguish the information sources, i.e. the systems performing a speech output. Audible differences in prosody can thereby be achieved, such as in the duration and/or stress of syllable segments and/or in the sentence melody. In particular, prosodic modulation depending on, for example, the traffic situation can be used for the speech output of announcement texts. Finally, the urgency of a piece of information can be expressed by modulating the voice.
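
As a rough illustration of urgency-dependent prosodic modulation, the stored articulation frames could be scaled before synthesis; the frame format and scaling factors below are invented for the example and are not specified in the patent:

```python
def apply_prosody(frames, urgency):
    """Modulate duration and pitch of articulation frames according to urgency in [0, 1].

    frames: list of dicts with 'duration_ms' and 'f0_scale' entries (assumed format).
    Higher urgency -> faster, higher-pitched, more strongly stressed speech.
    """
    tempo = 1.0 - 0.3 * urgency   # up to 30 % shorter syllable segments
    pitch = 1.0 + 0.2 * urgency   # up to 20 % higher sentence melody
    modulated = []
    for frame in frames:
        modulated.append({
            **frame,
            "duration_ms": frame["duration_ms"] * tempo,
            "f0_scale": frame["f0_scale"] * pitch,
        })
    return modulated

# e.g. a routine announcement vs. a wrong-way-driver warning:
# calm  = apply_prosody(announcement_frames, urgency=0.1)
# alert = apply_prosody(announcement_frames, urgency=0.9)
```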

The invention has the advantage that, for example in a vehicle, a single speech generator with a small parameter memory can be driven by several information sources. The information sources can each be given different voice characteristics.

When a full-synthesis device, e.g. a vocal tract synthesis device, is used, the method is speaker-independent and no high-quality studio recordings are needed.

With an extended formant synthesizer, emotional expression can also be conveyed in the voice according to the invention.

Predefined parameter templates make it very easy to change the voice characteristics. The method is also suitable for converting free text into speech (text-to-speech), e.g. for reading e-mail aloud.

The invention is explained in more detail below with reference to an exemplary embodiment and the drawing.

Fig. 1 shows a schematic diagram of a preferred embodiment of the invention for differentiated speech output with several systems according to the invention.

The preferred embodiment of the invention shown in Fig. 1 comprises a speech output unit 1 with a speech synthesis device 10, which in the example is a vocal tract synthesis module based on a full synthesis of speech. For example, a formant synthesizer such as KLATTALK can be used. The speech synthesis device 10 is connected to an amplifier 12, whose output 14 provides an audio signal that outputs speech via a loudspeaker (not shown). N parameter sets 21, 22 to 2N are assigned to the speech synthesis device 10 and, in the example shown, are stored in a memory 20 of the speech output unit 1. Furthermore, N systems 31, 32 to 3N are shown, each connected to the speech output unit 1 via a data connection such as individual lines, a bus system or data channels. Each system can perform a speech output via the speech output unit. In detail, there are an on-board computer 31 with an associated parameter set 21 for the on-board computer, a navigation system 32 with an associated parameter set 22 for navigation, a traffic information system 33 with an associated parameter set 23 for traffic information, and an e-mail system as TTS system 34 with an associated parameter set 24 for e-mail. Further systems 3N with a respective assigned parameter set 2N can be provided. In the example shown it is possible, using a single speech output unit 1, to have the navigation system 32 speak, for example, with a soft female voice determined by the parameter set 22 for the navigation system. Further, a parameter set 23 may be provided, for example for traffic reports, with which a hard male bass voice is used for the speech output.

The speech outputs may be performed sequentially in time, in the order in which the speech output requests are received from the systems. Preferably, higher-priority information, e.g. traffic information in dangerous situations such as a wrong-way driver, is output first. Particularly preferably, information of the highest priority, e.g. information from the on-board computer about vehicle malfunctions or the onset of slippery road conditions, is output immediately, a running speech output being interruptible for this purpose. The interrupted speech output can then be completed or repeated.
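
A hedged sketch of such a priority scheme: requests are served in arrival order within a priority level, and a highest-priority request interrupts the running output, which is then repeated. The numeric priority levels and the repeat-after-interruption policy shown here are illustrative choices, not prescribed by the patent:

```python
import heapq
import itertools

class SpeechScheduler:
    """Orders speech requests by priority; equal priorities are served in arrival order."""

    INTERRUPT_PRIORITY = 0   # e.g. vehicle malfunction, onset of slippery road

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()
        self.current = None              # request currently being spoken

    def request(self, priority, system_id, text):
        entry = (priority, next(self._counter), system_id, text)
        if priority == self.INTERRUPT_PRIORITY and self.current is not None:
            # Interrupt the running output and re-queue it so it is repeated later.
            heapq.heappush(self._queue, self.current)
            self.current = None
        heapq.heappush(self._queue, entry)

    def next_output(self):
        if self.current is None and self._queue:
            self.current = heapq.heappop(self._queue)
        return self.current

    def finished(self):
        self.current = None

# scheduler.request(priority=2, system_id="navigation", text="In 200 m, turn left")
# scheduler.request(priority=0, system_id="onboard_computer", text="Warning: icy road")  # interrupts
```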

The invention has the advantage that systems with an acoustic display can provide the driver with information from various systems without distracting him from his task, as visual displays do. Using a speech synthesis device that can be shared by various on-board computers saves costs. Compared with the speech-production methods previously used in, for example, navigation systems, the memory requirement can be reduced.

The invention can be used particularly advantageously in motor vehicles.

Claims (10)

  1. A device for differentiated speech output (1), which can be connected to a first system (31) and at least one further system (32, 33 to 3N), a first voice characteristic being assigned to the speech output of the first system (31) and a further voice characteristic, which audibly differs from the first voice characteristic, being assigned to the further speech output of the further system (32, 33 to 3N), characterised by a speech synthesis mechanism (10), which receives control parameters, which have a first class of dynamic parameters and a second class of static parameters, the dynamic parameters controlling the articulation in accordance with the movement of a speech tract, and the static parameters controlling the voice-characteristic features, the static parameters for the systems being stored as assigned parameter sets in a memory (20) of the speech output device and an assigned parameter set being used by the speech synthesis mechanism (10) for the speech output depending on a selection signal of a system, and the dynamic parameters being stored in accordance with the sequence of words, sentences and sentence sequences in each system.
  2. A device according to claim 1, wherein the static parameters have a fundamental generator frequency and/or fixed formants, which preferably correspond to the different geometric dimension of the speech tract in a child, a woman or a male speaker.
  3. A device according to claim 2, wherein the generator and/or formant parameters can be changed for the speech output of various systems and audible differences are preferably brought about in the prosody and the duration and/or emphasis of syllable segments and/or the sentence melody.
  4. A device according to any one of claims 1 to 3, wherein the speech synthesis mechanism (10) is a formant synthesiser, with which the voice-characteristic properties can be influenced.
  5. A device according to claim 4, wherein the formant synthesiser is suitable for separately generating voiced and unvoiced sounds, and wherein additional resonators or attenuators can be switched on, especially using further parameters, and/or the dynamic parameters for the articulation can be influenced.
  6. A device according to any one of claims 1 to 5, wherein the speech synthesis mechanism (10) is connected to an amplifier (12) and a speech output takes place by means of an audio output (14) of the amplifier (12).
  7. A system for use with a device according to any one of claims 1 to 6, with a first output for outputting dynamic parameters and a second output for outputting a selection signal to switch over a parameter set in the speech output device (10).
  8. A system for use with a device according to any one of claims 1 to 6, with an output for outputting dynamic parameters and static parameters, preferably as a parameter set, to the speech output device (10).
  9. A combination of a device according to any one of claims 1 to 6, with at least a first and a further system, such as an on-board computer (31), a navigation system (32), a traffic information system (33), an e-mail system (34), or an information system (3N), preferably for use in a vehicle.
  10. A method for differentiated speech output using a device according to any one of claims 1 to 6.
EP01991746A 2000-12-20 2001-11-21 Device and method for differentiated speech output Expired - Lifetime EP1344211B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE10063503 2000-12-20
DE10063503A DE10063503A1 (en) 2000-12-20 2000-12-20 Device and method for differentiated speech output
PCT/EP2001/013488 WO2002050815A1 (en) 2000-12-20 2001-11-21 Device and method for differentiated speech output

Publications (2)

Publication Number Publication Date
EP1344211A1 (en) 2003-09-17
EP1344211B1 (en) 2011-02-16

Family

ID=7667936

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01991746A Expired - Lifetime EP1344211B1 (en) 2000-12-20 2001-11-21 Device and method for differentiated speech output

Country Status (6)

Country Link
US (1) US7698139B2 (en)
EP (1) EP1344211B1 (en)
JP (1) JP2004516515A (en)
DE (2) DE10063503A1 (en)
ES (1) ES2357700T3 (en)
WO (1) WO2002050815A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2412046A (en) * 2004-03-11 2005-09-14 Seiko Epson Corp Semiconductor device having a TTS system to which is applied a voice parameter set
DE102005063077B4 (en) * 2005-12-29 2011-05-05 Airbus Operations Gmbh Record digital cockpit ground communication on an accident-protected voice recorder
EP2030195B1 (en) * 2006-06-02 2010-01-27 Koninklijke Philips Electronics N.V. Speech differentiation
DE102008019071A1 (en) * 2008-04-15 2009-10-29 Continental Automotive Gmbh Method for displaying information, particularly in motor vehicle, involves occurring display of acoustic paraverbal information for display of information, particularly base information
JP7133149B2 (en) * 2018-11-27 2022-09-08 トヨタ自動車株式会社 Automatic driving device, car navigation device and driving support system
JP7336862B2 (en) * 2019-03-28 2023-09-01 株式会社ホンダアクセス Vehicle navigation system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5667470A (en) * 1979-11-07 1981-06-06 Canon Inc Voice desk-top calculator
US5559927A (en) * 1992-08-19 1996-09-24 Clynes; Manfred Computer system producing emotionally-expressive speech messages
US5561736A (en) * 1993-06-04 1996-10-01 International Business Machines Corporation Three dimensional speech synthesis
JPH08328573A (en) * 1995-05-29 1996-12-13 Sanyo Electric Co Ltd Karaoke (sing-along machine) device, audio reproducing device and recording medium used by the above
US5924068A (en) * 1997-02-04 1999-07-13 Matsushita Electric Industrial Co. Ltd. Electronic news reception apparatus that selectively retains sections and searches by keyword or index for text to speech conversion
JP3287281B2 (en) * 1997-07-31 2002-06-04 トヨタ自動車株式会社 Message processing device
JP3502247B2 (en) * 1997-10-28 2004-03-02 ヤマハ株式会社 Voice converter
DE19908137A1 (en) * 1998-10-16 2000-06-15 Volkswagen Ag Method and device for automatic control of at least one device by voice dialog
US20020087655A1 (en) * 1999-01-27 2002-07-04 Thomas E. Bridgman Information system for mobile users
GB9925297D0 (en) * 1999-10-27 1999-12-29 Ibm Voice processing system
US6181996B1 (en) * 1999-11-18 2001-01-30 International Business Machines Corporation System for controlling vehicle information user interfaces
US6539354B1 (en) * 2000-03-24 2003-03-25 Fluent Speech Technologies, Inc. Methods and devices for producing and using synthetic visual speech based on natural coarticulation

Also Published As

Publication number Publication date
US7698139B2 (en) 2010-04-13
EP1344211A1 (en) 2003-09-17
WO2002050815A1 (en) 2002-06-27
JP2004516515A (en) 2004-06-03
ES2357700T3 (en) 2011-04-28
DE50115798D1 (en) 2011-03-31
DE10063503A1 (en) 2002-07-04
US20030225575A1 (en) 2003-12-04

Similar Documents

Publication Publication Date Title
DE60020773T2 (en) Graphical user interface and method for changing pronunciations in speech synthesis and recognition systems
DE602005002706T2 (en) Method and system for the implementation of text-to-speech
DE60035001T2 (en) Speech synthesis with prosody patterns
DE69821673T2 (en) Method and apparatus for editing synthetic voice messages, and storage means with the method
DE60112512T2 (en) Coding of expression in speech synthesis
DE19610019C2 (en) Digital speech synthesis process
DE69925932T2 (en) LANGUAGE SYNTHESIS BY CHAINING LANGUAGE SHAPES
DE69909716T2 (en) Formant speech synthesizer using concatenation of half-syllables with independent cross-fading in the filter coefficient and source range
US6405169B1 (en) Speech synthesis apparatus
DE60004420T2 (en) Recognition of areas of overlapping elements for a concatenative speech synthesis system
DE2115258A1 (en) Speech synthesis by concatenating words encoded in formant form
DE10042944A1 (en) Grapheme-phoneme conversion
DE69627865T2 (en) VOICE SYNTHESIZER WITH A DATABASE FOR ACOUSTIC ELEMENTS
EP1105867B1 (en) Method and device for the concatenation of audiosegments, taking into account coarticulation
EP1282897B1 (en) Method for creating a speech database for a target vocabulary in order to train a speech recognition system
EP1344211B1 (en) Device and method for differentiated speech output
EP0058130B1 (en) Method for speech synthesizing with unlimited vocabulary, and arrangement for realizing the same
EP1110203B1 (en) Device and method for digital voice processing
EP0725382A2 (en) Method and device providing digitally coded traffic information by synthetically generated speech
EP1554715B1 (en) Method for computer-aided speech synthesis of a stored electronic text into an analog speech signal, speech synthesis device and telecommunication apparatus
JPH09179576A (en) Voice synthesizing method
EP2592623B1 (en) Technique for outputting an acoustic signal by means of a navigation system
DE19837661C2 (en) Method and device for co-articulating concatenation of audio segments
EP1170723A2 (en) Method for the computation of phoneme duration statistics and method for the determination of the duration of isolated phonemes for speech synthesis
JP2577372B2 (en) Speech synthesis apparatus and method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030425

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

RBV Designated contracting states (corrected)

Designated state(s): DE ES FR GB IT SE

17Q First examination report despatched

Effective date: 20070808

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE ES FR GB IT SE

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REF Corresponds to:

Ref document number: 50115798

Country of ref document: DE

Date of ref document: 20110331

Kind code of ref document: P

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 50115798

Country of ref document: DE

Effective date: 20110331

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2357700

Country of ref document: ES

Kind code of ref document: T3

Effective date: 20110428

REG Reference to a national code

Ref country code: SE

Ref legal event code: TRGR

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20111117

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 50115798

Country of ref document: DE

Effective date: 20111117

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20201126

Year of fee payment: 20

Ref country code: GB

Payment date: 20201123

Year of fee payment: 20

Ref country code: ES

Payment date: 20201214

Year of fee payment: 20

Ref country code: FR

Payment date: 20201119

Year of fee payment: 20

Ref country code: IT

Payment date: 20201130

Year of fee payment: 20

Ref country code: SE

Payment date: 20201123

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 50115798

Country of ref document: DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20211120

REG Reference to a national code

Ref country code: SE

Ref legal event code: EUG

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20211120

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20220228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20211122

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230502