EP1554715B1 - Method for computer-assisted speech synthesis of a stored electronic text into an analog speech signal, speech synthesis device and telecommunication device - Google Patents
Method for computer-assisted speech synthesis of a stored electronic text into an analog speech signal, speech synthesis device and telecommunication device
- Publication number
- EP1554715B1 (application EP03757683A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- electronic
- text
- lexicon
- sequence
- phonetic units
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
Definitions
- the invention relates to a method for the computer-assisted speech synthesis of a stored electronic text into an analog speech signal, to a speech synthesis device and to a telecommunication device.
- speech synthesis serves as a means of communicating information to humans in systems where other output media, e.g. graphics, are not possible, for example because no monitor is available for displaying information or cannot be used for reasons of space.
- a speech synthesis device and a method for speech synthesis are needed which manage with very small demands on the available resources in terms of computing power and required storage space, and which nevertheless provide a full synthesis, for example for "reading aloud" a text, preferably an electronic message.
- the invention is based on the problem of providing a speech synthesis which requires less storage space than known speech synthesis methods and devices.
- the problem is solved by the method for computer-assisted speech synthesis of a stored electronic text into an analog speech signal, by a speech synthesis device and by a telecommunication device having the features according to the independent claims.
- the stored electronic text is usually stored in a predetermined electronic word processing format, such as ASCII. Additionally, control characters of a word processing system, such as page break control characters or formatting control characters, may be included in the electronic text.
- This text is converted by means of the method into an analog voice signal which is output by means of a loudspeaker to a user.
- text analysis rules are to be understood as a set of rules which are processed one after the other and which, as explained in more detail below, usually represent language-specific rules describing a customary mapping of certain parts of the electronic text to one or more phonetic units.
- the abbreviation lexicon contains a mapping table of predefined abbreviations, coded in the format in which the electronic text exists, and the associated phonetic transcription of each abbreviation, for example encoded in SAMPA, as the corresponding representation of the respective abbreviation.
- using the abbreviation lexicon, a second sequence of phonetic units is formed, namely the sequence associated in the abbreviation lexicon with the respective electronic abbreviation in the electronic text.
- the electronic function word dictionary is in this context a mapping table of predetermined function words, in turn encoded in the electronic text format used, and the sequence of phonetic units associated with each function word, encoded in the respective phonetic transcription, preferably SAMPA, as the corresponding representation of the respective predetermined function word.
- a function word is to be understood as a word which connects nouns or verbs, for example the words: "for", "under", "on", "with", etc.
- using the electronic function word dictionary, a third sequence of phonetic units is formed according to the respective entry in the dictionary.
- using an exception lexicon, a fourth sequence of phonetic units is formed.
- in the exception lexicon, exception character strings and the associated sequences of phonetic units are stored, a data tuple again containing two elements per data entry: the first element of the data tuple is the respective term, encoded in the format of the electronic text, and the second element is the representation of the first element encoded in the respective phonetic transcription.
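The lexicon lookups described above can be sketched as follows; a minimal Python illustration in which all entries, transcriptions and names are invented examples, not contents of the patent:

```python
# Hedged sketch of the lexicon-based text analysis described above: each
# lexicon maps strings in the text encoding (e.g. ASCII) to sequences of
# phonetic units in a phonetic transcription such as SAMPA.
# All entries are illustrative assumptions, not taken from the patent.
ABBREVIATION_LEXICON = {"Dr.": "d O k t o:6", "etc.": "E t t s e: t @ r a"}
FUNCTION_WORD_DICTIONARY = {"for": "f y:6", "with": "m I t"}
EXCEPTION_LEXICON = {"Soiree": "s o a r e:"}

def lookup(token):
    """Return the sequence of phonetic units for a token, trying the
    abbreviation lexicon, the function word dictionary and the exception
    lexicon in turn; None means the token falls through to the
    rule-based grapheme-phoneme conversion."""
    for lexicon in (ABBREVIATION_LEXICON,
                    FUNCTION_WORD_DICTIONARY,
                    EXCEPTION_LEXICON):
        if token in lexicon:
            return lexicon[token].split()
    return None
```

Each data entry is exactly the two-element tuple described above: the key is the term in the text encoding, the value its phonetic representation.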
- a prosody is generated for the respectively formed sequence of phonetic units using predetermined prosody rules, and the speech signal, preferably the analog speech signal to be output, is then generated from the respective sequence of phonetic units and the prosody formed for it.
- a speech synthesizer for synthesizing a stored electronic text into an analog speech signal has a text memory for storing the electronic text, a rule memory for storing text analysis rules and prosody rules, and a lexicon memory for storing an electronic abbreviation lexicon, an electronic function word dictionary and an electronic exception lexicon.
- the speech synthesizer further includes a processor arranged to perform the method steps described above using the stored text analysis rules and prosody rules as well as the stored electronic lexicons.
- a telecommunication device with a speech synthesis device is provided.
- Another advantage of the invention is to be seen in the very easy scalability to increase the achievable quality of speech synthesis, since the respective electronic dictionaries and the rules are very easily expandable.
- the phonetic units are stored in compressed form, and at least a portion of the stored compressed phonetic units, in particular those required to form the sequence of phonetic units, are decompressed before the respective sequence of phonetic units is formed, in particular before the formation of the first sequence. Compressing the phonetic units achieves a further significant reduction in storage space requirements.
- both lossless and lossy compression algorithms can be used.
- the method is used in an embedded system, which is why the speech synthesis device according to one embodiment of the invention is set up as an embedded system.
- Fig. 1 shows a telecommunication terminal 100 having a data display unit 101 for displaying information, an antenna 102 for receiving radio signals, a speaker 103 for outputting an analog speech signal, a keyboard 104 having input keys 105 for controlling the mobile telephone 100, and a microphone 106 for recording a speech signal.
- the mobile telephone 100 is adapted for communication according to the GSM standard, alternatively according to the UMTS standard, the GPRS standard or any other suitable mobile radio standard.
- the mobile telephone 100 is set up to send and receive textual information, for example SMS messages (Short Message Service messages) or MMS messages (Multimedia Service messages).
- Fig. 2 shows a block diagram of the individual components integrated into the mobile telephone 100, in particular a speech synthesis unit, explained in detail below, which is integrated into the mobile telephone 100 as an embedded system.
- the microphone 106 is coupled to an input interface 201.
- a central processing unit 202, a memory 203, an ADPCM coding/decoding unit 204 and an output interface 205 are furthermore provided.
- the individual components are coupled to one another via a computer bus 206. The speaker 103 is coupled to the output interface 205.
- the central processor unit 202 is set up such that the method steps for speech synthesis described below are performed, as well as the process steps required for operating the mobile telephone, in particular for the encoding and decoding of mobile radio signals.
- the mobile telephone 100 is additionally configured for voice recognition.
- the computer programs 207 required for operating the mobile telephone 100 are stored in the memory 203, as are the corresponding text analysis rules 208 and prosody rules 209 explained in more detail below. Furthermore, a multiplicity of different electronic lexicons are stored in the memory 203, according to this embodiment an abbreviation lexicon 210, a function word dictionary 211 and an exception lexicon 212.
- for specific specifiable textual units, a respective mapping to a sequence of phonetic units is defined and stored.
- as phonetic units, diphones are used.
- the diphones used in the speech synthesis are stored in a diphone lexicon 213, also stored in the memory 203.
- the diphone lexicon 213, also referred to below as the diphone inventory or simply the inventory, contains, as set forth above, the diphones used for speech synthesis, according to this embodiment however sampled at a frequency of 8 kHz. A further reduction of the required memory space is thereby achieved, since usually a sampling frequency of 16 kHz or even higher is used for the diphones, which is of course also possible in an alternative embodiment of the invention.
- the diphones are encoded according to ADPCM (Adaptive Differential Pulse Code Modulation) and thus stored in the memory 203 in compressed form.
- as an alternative for compressing the diphones, an LPC method, a CELP method or even the GSM codec may be employed; in general, any compression method may be used that achieves sufficiently high compression even for small signal portions while ensuring a sufficiently low information loss. In other words, a compression method is selected which has a short encoder transient and causes little quantization noise.
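The principle behind ADPCM-style compression of the diphone inventory can be illustrated with a toy adaptive delta quantizer. This is a deliberately simplified sketch, not the actual ADPCM variant of the patent; the real algorithm uses standardized step-size and index tables, and the step sizes and code width here are arbitrary choices:

```python
# Toy adaptive delta-PCM codec illustrating the principle behind ADPCM
# compression of diphone samples: 4-bit codes for 16-bit samples give a
# roughly 4:1 reduction in storage. Simplified illustration only.

def _adapt(step, code):
    """Grow the step size after large codes, shrink it after small ones."""
    if abs(code) >= 6:
        step *= 2
    elif abs(code) <= 1:
        step //= 2
    return max(1, min(2048, step))

def adpcm_like_encode(samples):
    step, pred, codes = 16, 0, []
    for s in samples:
        code = max(-8, min(7, round((s - pred) / step)))  # 4-bit code
        codes.append(code)
        pred += code * step            # track the decoder's prediction
        step = _adapt(step, code)
    return codes

def adpcm_like_decode(codes):
    step, pred, out = 16, 0, []
    for code in codes:
        pred += code * step
        out.append(pred)
        step = _adapt(step, code)
    return out
```

Because encoder and decoder apply the same prediction and step adaptation, the decoder reconstructs an approximation of the original samples from the 4-bit codes alone, with a short transient and bounded quantization error, which is exactly the trade-off named above.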
- the stored electronic text is held in an electronic file 301 and, in addition to preferably ASCII-coded words, includes special characters or control characters such as a "New Line" control character, a "New Paragraph" control character, or a control character for formatting part or all of the electronic text stored in the electronic file 301.
- the electronic text is subjected to different preprocessing rules as part of a word processor (block 302).
- the preprocessed electronic text 303 is then supplied to a module, i.e. a computer program component, for prosody control 304, in which, as explained in more detail below, the prosody for the electronic text is generated.
- using the inventory, i.e. using the diphone lexicon 213, whose compressed diphones 306 are ADPCM-decoded by the ADPCM coding/decoding unit 204 prior to the processing described below, a building block selection is performed, i.e. a selection of phonetic units, according to this embodiment a selection of the required diphones 307 (block 308).
- the selected diphones 307, i.e. generally the selected phonetic units, are supplied to a computer program component for acoustic synthesis (block 309) and merged there into a speech signal to be output. This output speech signal is initially digital and is digital-to-analog converted into an analog speech signal 310, which is supplied via the output interface 205 to the speaker 103 and output to the user of the mobile phone 100.
- Fig. 4 shows in a block diagram 400 the blocks of the word processing 302 and the prosody control 304 in more detail.
- the electronic file 301 stores a sufficiently long electronic text which is transferred to the processor unit 202 in a complete contiguous memory area.
- according to this embodiment, the electronic text has at least one partial sentence, so that an appropriate prosody generation is possible.
- in the case where the electronic text transferred from the electronic file 301 is shorter than a partial sentence, i.e. in the event that no punctuation marks are found within the given electronic text, the text is interpreted as a partial sentence and a period is artificially added as a punctuation mark.
- the text preprocessing (block 401) has the function of adapting the input electronic text to the character set used internally by the speech synthesis system.
- for this purpose, a character table is used which encodes format information for each character. Access to the table (not shown), which is also stored in memory 203, is via the numeric value of the character.
- Control characters or characters that are not contained in the table are deleted from the entered electronic text.
- the table is used by the two program components text preprocessing (block 401) and the "spell" program component described below (block 408).
- the respective character class is coded in one byte, and the pronunciation form of the character is added as a string, i.e. as a sequence of phonetic units, according to the embodiment as diphones. Overall, this results in a memory requirement of about one kilobyte.
- the input text 402 filtered by the text preprocessing 401 is subsequently evaluated, in the context of a grapheme-phoneme conversion (block 403), by means of a special text analysis rule set stored in the memory 203, by which the different notations of numbers in the filtered input text 402 are recognized and converted (block 404). Since numbers can contain not only digit sequences but also measurement or currency information, this evaluation takes place before the further decomposition of the filtered electronic text 402.
- the filtered and number-processed electronic text 405 is then divided into substrings (i.e. words and phrases) using the tokenizer program component (block 406).
- the substrings are hereinafter referred to as tokens.
- the tokens then undergo the lexical conversion and the phonemic text analysis rules 407, respectively.
- if a token can not be converted into a phonemic representation, e.g. into a sequence of phonetic units, the token is converted by being output by means of spelling, i.e. the token is treated in the speech output as a sequence of individual letters, correspondingly mapped to a sequence of diphones for the individual letters, and this sequence is output as a spelled character string to the user by means of the "spell" computer program component (block 408).
- numbers and number formats are recognized as part of the number conversion and converted into a sequence of phonetic units. First, it is checked in accordance with the number conversion text analysis rules whether the string is a known combination of digits and additional information.
- the number rules of the number conversion text analysis rules are implemented such that there is a strict separation of the rule interpreter, which is language independent, and the rules themselves, which are language dependent.
- reading in and converting the text analysis rules from the textual form into a memory-efficient binary format is separate from the actual program, thus enabling efficient handling of the text analysis rules at runtime.
- the definition of the conversion rules is restricted to the most important number formats, again to save storage space. Cardinal numbers and ordinal numbers, date and time are implemented (including the adjoining token "clock"). An extension to other formats is, however, readily possible at any time by simply supplementing the number conversion text analysis rules.
- the determined character string is converted, according to the text analysis rules 208, into the sequence of diphones assigned by the respective rule; in other words, the found string is replaced by the rule target.
- the rule target contains placeholders for the numbers that are determined by the second level of the rule set.
- the second level contains rules, for example for ordinals, years or cardinal numbers, which are specifically invoked by the first-level rules described above.
- the number to be converted must first satisfy one condition, otherwise the next text analysis rule will be checked.
- a second condition can be tested, for which the number can be changed beforehand.
- two numbers are generated by arithmetic operations, which are used in the rule target for the final conversion. For example, a translation of the first rule above into colloquial language would be:
- Pattern rules, i.e. the first-level rules described above, and numerical rules, i.e. the second-level rules, include an additional translation into a normal-language form to facilitate troubleshooting. Arbitrary messages can be generated there in order to be able to follow the exact sequence of rule replacement from the outside.
- Any number format which does not satisfy any of the existing number conversion text analysis rules is passed on unhandled and is eventually converted in spelling mode 408 into a sequence of diphones, one letter at a time, and output to the user as part of the analog speech signal.
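The two-level organization described above, a language-independent rule interpreter over language-dependent pattern and number rules, might be sketched as follows. The rule patterns, the placeholder syntax and the word lists are illustrative assumptions, not the patent's actual rule format:

```python
# Hedged sketch of a two-level number-conversion rule set:
# first level = pattern rules mapping a number format onto a rule target
# with placeholders; second level = ordinal/cardinal rules invoked by
# those placeholders. All rules shown are invented examples.
import re

PATTERN_RULES = [
    (re.compile(r"^(\d{1,2})\.(\d{1,2})\.(\d{4})$"),   # date, e.g. 3.10.2003
     "{ord:1} {ord:2} {card:3}"),
    (re.compile(r"^(\d+)\.$"), "{ord:1}"),              # ordinal, e.g. "7."
    (re.compile(r"^(\d+)$"), "{card:1}"),               # cardinal
]

def cardinal(n):                       # second-level rule (toy: digit by digit)
    words = "zero one two three four five six seven eight nine".split()
    return " ".join(words[int(d)] for d in str(n))

def ordinal(n):                        # toy placeholder for ordinal endings
    return cardinal(n) + " -th"

def convert_number_token(token):
    for pattern, target in PATTERN_RULES:
        m = pattern.match(token)
        if not m:
            continue                   # condition not met: try the next rule
        out = target
        for kind, fn in (("ord", ordinal), ("card", cardinal)):
            for i in range(1, len(m.groups()) + 1):
                out = out.replace("{%s:%d}" % (kind, i), fn(m.group(i)))
        return out                     # found string replaced by rule target
    return None                        # unhandled: falls through to spell mode
```

The interpreter (`convert_number_token`) knows nothing about any particular language; all language dependence sits in the rule tables, mirroring the strict separation described above.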
- the "Tokenizer" program component detects word boundaries, i.e. individual words are detected based on the intervening whitespace characters. According to the character types, the token is classified either as a word (upper- and lower-case letters) or as a special format (special characters).
- sentence boundaries are marked at all those locations where punctuation marks followed by spaces are detected immediately after a word. If a token which is not a number contains more than one special character, it is mapped into the analog speech signal by the spelling mode and output.
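A minimal sketch of this tokenization and sentence-boundary marking; the function name and the classification scheme are simplifying assumptions (the patent's tokenizer additionally routes number formats to the number conversion):

```python
# Word boundaries are found at whitespace; a token is classified as a
# word (letters only) or a special format, and punctuation immediately
# following a word marks a sentence boundary.
def tokenize(text):
    tokens = []
    for raw in text.split():                  # word boundaries = whitespace
        word = raw.rstrip(".,!?;:")
        punct = raw[len(word):]
        if word:
            kind = "word" if word.isalpha() else "special"
            tokens.append((word, kind))
        if punct:                             # punctuation after the word
            tokens.append((punct[0], "sentence-boundary"))
    return tokens
```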
- the class of function words contains words that occur very frequently, therefore have a low information content, and are rarely accentuated; this property is exploited in the context of the acoustic synthesis 309, as explained in more detail below.
- the word classes are coded for later accentuation in one byte and assigned to the respective word.
- X and Z may contain the characters "@" and "#", where "@" is a wildcard for any character and "#" represents the word boundary.
- the rules are arranged according to the first letter of the rule body, so that only a part of all rules needs to be searched. Within each section, the rules are ordered from the most specific to the most general, ensuring that at least the last rule is always applicable. If a rule is applicable, rule processing exits, the rule result W is appended to the sequence of phonemes already existing for the current word, and the pointer into the string to be converted is advanced by the number of characters in the rule body.
- Considerations for efficiently representing the rule set as stored in memory 203 are based on a total of 1254 rules. If all four parts of a rule are stored in a table with a fixed number of rows and columns, one rule per row, the table width must equal the length of the longest overall rule, in this case 19 bytes. Access to the rules is very simple due to the field structure, but this results in a memory requirement of 23 kilobytes.
- if instead the rule components are packed tightly into an array, an additional field of pointers with a length of 2500 bytes is needed for access, but the total memory requirement is only 15 kilobytes.
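The packed storage scheme can be sketched like this; the rule contents and the length-prefixed record layout are invented for illustration (the patent only specifies tightly packed components plus a separate pointer field):

```python
# Variable-length rule records concatenated into one byte array, with a
# separate offset (pointer) table for access -- instead of a fixed-width
# table padded to the longest rule. Records here: body '\0' phonemes.
rules = [b"sch\x00S", b"ch\x00x", b"s\x00z", b"e\x00@"]  # invented examples

packed = bytearray()
offsets = []                              # the extra pointer field
for r in rules:
    offsets.append(len(packed))
    packed += bytes([len(r)]) + r         # length-prefixed record

def get_rule(i):
    """Return (rule_body, phoneme_string) of rule i via the pointer field."""
    off = offsets[i]
    length = packed[off]
    body, phonemes = bytes(packed[off + 1:off + 1 + length]).split(b"\x00")
    return body.decode(), phonemes.decode()
```

The trade-off matches the figures above: the pointer field costs extra bytes, but no record is padded to the longest rule, so the total footprint shrinks.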
- the token is spelled out by replacing each character with its corresponding phonetic representation and outputting it correspondingly. Due to the resulting disproportionate extension of the text (substitution of each character by n new characters), the number of characters spellable per token is, according to this embodiment, limited to a maximum of 10.
- the prosody control 304 contains the prosodic processing modules, namely accentuation and syllabification (block 409), volume control (block 410) and intonation control (block 411).
- some of the relevant information is already contained in the phoneme sequence of the token if it was generated using one of the lexicons 210, 211, 212, using the rules for the conversion of numbers and number intervals, or in spelling mode. In this step, the information mentioned is collected from the phoneme sequence.
- if the syllable boundary information or accentuation information is not yet available, it is generated via further heuristic rules, which are explained in more detail below.
- the phoneme table contains 49 phonemes and special characters (main and secondary accent, hyphenator, pauses) as well as classification characteristics (long vowel, short vowel, diphthong, consonant class, etc.).
- for the syllabification, syllable nuclei and syllable nucleus types are determined, and the syllable boundary within the intervocalic consonant sequence is determined according to heuristic rules.
- the accentuation rules assign an accent to the first syllable in the word with a long vowel or diphthong. If neither of these two syllable nucleus types is present, the accent is assigned to the first syllable with a short vowel.
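This accentuation heuristic can be sketched directly over the syllable nuclei of a word; the SAMPA-like phoneme classes are illustrative assumptions:

```python
# Accent the first syllable whose nucleus is a long vowel or diphthong;
# otherwise fall back to the first syllable with a short vowel.
LONG_VOWELS = {"a:", "e:", "i:", "o:", "u:"}
DIPHTHONGS = {"aI", "aU", "OY"}

def accent_index(nuclei):
    """Return the index of the syllable to accent, given the list of
    syllable nuclei of a word (illustrative SAMPA-like symbols)."""
    for i, nucleus in enumerate(nuclei):
        if nucleus in LONG_VOWELS or nucleus in DIPHTHONGS:
            return i
    return 0          # no long vowel/diphthong: first (short-vowel) syllable
```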
- the initial sound duration can be stretched or shortened by factors associated with the respective influences, with a shortening allowed only down to a minimum duration.
- the model provides a specific sound duration for each sound and the duration of pauses at syntactic boundaries. Phrase boundaries, partial-sentence boundaries and sentence boundaries produce pauses of increasing length.
- the phrase-based component is formed using the knowledge that over each phrase, the fundamental frequency continuously decreases from the beginning to the end of the phrase (declination).
- the interval width of the fundamental frequency movement is freely selectable as the control variable of the model.
- Fig. 5a shows in a timing diagram 500 a minimum fundamental frequency 501, a relative average fundamental frequency 502, and the course 503 of the fundamental frequency over time.
- the knowledge is used that, depending on the type of sentence to be realized (declarative sentence, continuation, exclamation, question), the declination line at the end of each phrase is overlaid with a phrase-typical final movement.
- This movement extends from the position of the last sentence accent in the phrase to the end of the phrase, but at most over the last five syllables of the phrase.
- a first fundamental frequency curve 511 represents a terminal final movement, a second fundamental frequency curve 512 that of a continuation sentence, and a third fundamental frequency curve 513 that of a question.
- as a further component of the overall prosody, an accent-based component is taken into account, using the finding that, in the case where a syllable carries a sentence accent, the fundamental frequency is raised over the entire syllable and lowered back to the declination line over the duration of the subsequent syllable.
- the accent stroke (Akzenthub) can again be freely selected as a control variable of the model, adapted to the application.
- a first accent component 521 consists of three regions: in a first, ascending region (a first time range 522), the fundamental frequency is raised from the declination line to the accent stroke 523; it is held there during a second time range 524 and is only returned to the declination line in a third time range 525.
- a second accent structure 526 is formed of only two time ranges: the ascending branch 527, in which the fundamental frequency is raised from the declination line to the accent stroke 523, and the descending branch 528, in which, immediately after reaching the accent stroke 523, the fundamental frequency is continuously lowered back to the declination line.
- Fig. 5d shows in a fourth time chart 530 an overall prosody 531, which represents the additive superposition of the individual components shown in Figs. 5a to 5c.
- the total contour 531 is imposed by assigning to each participating phoneme, i.e. each phoneme in the word string for which the overall melody was determined, a value corresponding to the determined overall prosody.
- the intonation contour is then reproduced by linear interpolation between the phoneme-based support points.
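The additive intonation model above, a declining phrase component plus accent components, sampled at phoneme support points, can be sketched as follows; all constants (frequencies in Hz, slopes, stroke) are illustrative assumptions:

```python
# Additive superposition of a declination component and accent "hat"
# components; f0 is evaluated at phoneme support points, and the full
# contour would then be linearly interpolated between these points.

def declination(t, f_start=220.0, slope=-20.0):
    """Phrase component: f0 falls linearly from the start of the phrase."""
    return f_start + slope * t

def accent(t, t_on, t_off, stroke=30.0):
    """Accent component: f0 raised by the accent stroke over the accented
    syllable [t_on, t_off), then lowered back over an equal interval."""
    if t_on <= t < t_off:
        return stroke
    if t_off <= t < 2 * t_off - t_on:             # descending branch
        return stroke * (1 - (t - t_off) / (t_off - t_on))
    return 0.0

def contour(phoneme_times, accents):
    """One f0 support point per phoneme: additive superposition."""
    return [declination(t) + sum(accent(t, on, off) for on, off in accents)
            for t in phoneme_times]
```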
- in the solution described above, the accentuation falls on the first long vowel or, if none can be found, on the first short vowel of the word.
- in an alternative, the syllables are considered from right to left, in contrast to the solution described above, i.e. starting at the final syllable of the word.
- if the final syllable is a "heavy" syllable, it receives the accent; otherwise the procedure moves on to the penultimate syllable. If the penultimate syllable is salient, i.e. not a "schwa syllable", it is emphasized; otherwise the procedure moves one syllable further toward the beginning of the word in each step, until a salient syllable has been found or the beginning of the word is reached.
- Syllables that do not have a coda are basically light syllables. If the coda consists of two or more consonants, it is a heavy syllable.
- if the coda consists of exactly one consonant, the decision is made on the basis of the syllable nucleus: the syllable is light in the case of a short vowel as the syllable nucleus, and heavy in the case of a long vowel or diphthong in the syllable nucleus.
- the syllable onset plays no role in the determination of the syllable weight.
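The syllable-weight rules and the right-to-left accent placement can be sketched as follows; the (onset, nucleus, coda) representation, with the coda as a list of consonant phonemes, and the phoneme classes are illustrative assumptions:

```python
# Syllable weight per the rules above; note that the onset plays no role.
LONG_NUCLEI = {"a:", "e:", "i:", "o:", "u:", "aI", "aU", "OY"}

def is_heavy(onset, nucleus, coda):
    if len(coda) == 0:
        return False                  # no coda: always a light syllable
    if len(coda) >= 2:
        return True                   # two or more coda consonants: heavy
    return nucleus in LONG_NUCLEI     # exactly one: decided by the nucleus

def accent_right_to_left(syllables):
    """Accent the final syllable if it is heavy, otherwise the first
    salient (non-schwa) syllable scanning leftwards."""
    last = len(syllables) - 1
    if is_heavy(*syllables[last]):
        return last
    for i in range(last - 1, -1, -1):
        if syllables[i][1] != "@":    # salient = not a schwa syllable
            return i
    return 0
```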
- the intensity parameter is generated by preprocessing and serves to influence the dynamic range (and thus the naturalness) of the speech-synthesized signal.
- the operation of the intensity control is thus comparable to the operation of the basic frequency control, as described above.
- the respective support points of the intensity control and of the fundamental frequency control can be selected independently of each other.
- the target intensities are given in units of [dB]. A target intensity of 0 dB does not change the sample values of the signal modules.
- the target intensities to be set indicate the relative change in intensity applied to the inventory building blocks. This means that it is advantageous to use an inventory with balanced intensity curves.
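The dB-based intensity control above amounts to a simple amplitude scaling of the building-block samples; a minimal sketch (the function name is an assumption):

```python
# A target intensity in dB yields an amplitude factor applied to the
# sample values of a signal building block; 0 dB leaves them unchanged.
def apply_intensity(samples, target_db):
    factor = 10 ** (target_db / 20)   # amplitude ratio for a dB value
    return [s * factor for s in samples]
```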
- the task of the block selection (block 308) is to determine, depending on the symbol sequence supplied by the preprocessing (phoneme sequence or syllable sequence), the suitable building blocks from the inventory or the inventory description, according to the embodiment the suitable diphones for the acoustic synthesis, and to select them.
- the block sequence generated in this way is provided with additional prosodic information, as explained above (sound duration, fundamental frequency profile), which was generated by the preprocessing.
- Each element of the array contains the information for a symbol (phoneme, syllable, ).
- An array structure of the data structure SM is generated by the block selection and passed to the acoustic synthesis.
- the component unit contains the name of the block, anzLaute the number of symbols (phonemes, syllables, ...) contained in the block. All other components are taken from the preprocessing data structure SMPROS.
- the array of the data structure INV contains the description data for an inventory.
- the array is read from the appropriate binary file of the inventory to be used before startup.
- Each element of the array INV contains the data of a sound module.
- the elements are sorted by the start symbol of the element canon of the structure, by the number of symbols contained in the building block (phonemes, syllables, ...) and by the length of the element sequence kanon of the structure (in that order). This allows an effective search for the required device in the array.
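A possible rendering of an INV element and its sort order is sketched below. The field names follow the German identifiers mentioned in the text (kanon, anzLaute); the class layout and the `data` field are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class InvElement:
    kanon: str            # symbol sequence of the block, e.g. "da" for a diphone
    anz_laute: int        # number of symbols (phonemes, syllables, ...) in the block
    data: bytes = b""     # signal data of the sound building block (placeholder)

def sort_inventory(inv):
    """Sort as described above: by the start symbol of kanon, by the number
    of symbols in the block, then by the length of kanon (in that order)."""
    return sorted(inv, key=lambda e: (e.kanon[0], e.anz_laute, len(e.kanon)))
```

Sorting by this composite key groups all blocks with the same start symbol together, which is what makes the search for a matching block efficient.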
- Figure 6 shows in a structogram 600 the procedure of the block selection according to the embodiment of the invention.
- A pause of length 0 is inserted before the first element, which is identified by the pointer *SMPROS. This serves to find the start building block in the inventory.
- The variable i is initialized to the value 0 (step 602), and the following steps are performed in a first iteration loop 603 for all elements of the respective SMPROS structure (all sounds).
- The longest sound sequence that matches the symbol sequence at the current position i of the structure is determined (step 604).
- If such a building block is found (step 605, step 606), the block is added to the data structure SM, and the variable i is increased by the value num, the maximum number of symbols whose symbol sequence equals the symbol sequence in *(SMPROS + i + j).
- It is then checked whether substitute sounds exist for the sounds contained in the building block (test step 607); if such a substitute sound exists, the sound is replaced (step 608). Otherwise, the value of the variable i is incremented by 1 (step 609), and the iteration loop of steps 604-609 is run through again for the new value of the variable i until all elements of the SMPROS structure have been tested.
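The loop of steps 602-609 amounts to a greedy longest-match selection, which can be sketched as follows. This is a simplified illustration (substitute-sound handling and the SMPROS pointer arithmetic are omitted); the inventory is modelled as a set of symbol tuples for which a building block exists.

```python
def select_blocks(symbols, inventory):
    """Greedy longest-match block selection as in the structogram of
    Figure 6 (simplified sketch)."""
    sm = []                       # selected block sequence (data structure SM)
    i = 0
    while i < len(symbols):       # iteration loop 603 over all sounds
        # step 604: find the longest sequence at position i present in the inventory
        best = 0
        for j in range(len(symbols), i, -1):
            if tuple(symbols[i:j]) in inventory:
                best = j - i
                break
        if best:                  # steps 605/606: block found -> append, advance by num
            sm.append(tuple(symbols[i:i + best]))
            i += best
        else:                     # step 609: no block found -> advance by one symbol
            i += 1
    return sm
```

Because the inner search starts from the full remaining sequence and shrinks, the first hit is always the longest matching block at position i, mirroring step 604.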
- The function of the acoustic synthesis 309 is to concatenate the signal sections as dictated by the block selection.
- The fundamental frequency and the sound duration are manipulated by means of the PSOLA algorithm.
- The input of the acoustic synthesis 309 is the SM structure, which is generated by the program component "block selection" 308.
- The SM structure contains the building blocks to be concatenated and the information about the fundamental frequency and the sound duration generated by the preprocessing.
- For each sound j it is checked whether the sound represents a pause (step 702).
- If so, the pause is synthesized as a speech signal (step 703).
- The variable k is assigned the value of the start period of the sound j (step 706).
- The desired period duration is calculated according to the interpolated fundamental frequency contour (step 709).
- It is then checked whether the sound duration synthesized so far is less than or equal to the proportionate desired sound duration (step 710); if this condition is satisfied, a period of the desired period duration is synthesized according to the PSOLA algorithm (step 711).
- It is checked again whether the sound duration synthesized so far is less than or equal to the proportionate desired sound duration (step 712).
- The value of the variable k is incremented by 1 (step 713).
- As a result of this procedure, depending on the insertion or omission of periods, different periods are superimposed by means of the PSOLA algorithm; otherwise each period is superimposed on itself.
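The period-by-period loop above can be sketched as a very simplified PSOLA-style overlap-add. This is an assumption for illustration, not the patent's implementation: windowed single periods are placed at intervals of the desired period duration until the desired sound duration is reached, so re-spacing periods changes the fundamental frequency while inserting or omitting them changes the duration.

```python
import math

def psola_resynthesize(periods, target_period, target_len):
    """Simplified PSOLA-style sketch: overlap-add Hann-windowed source
    periods at synthesis marks spaced by the desired period duration."""
    out = [0.0] * target_len
    pos = 0                # position of the next synthesis mark
    k = 0                  # index of the source period (variable k above)
    while pos < target_len:                     # loop corresponding to step 710
        period = periods[min(k, len(periods) - 1)]
        for n, s in enumerate(period):          # windowed overlap-add (step 711)
            if pos + n < target_len:
                w = 0.5 - 0.5 * math.cos(2 * math.pi * n / max(len(period) - 1, 1))
                out[pos + n] += s * w
        pos += target_period                    # advance by desired period duration
        k += 1                                  # step 713: next source period
    return out
```

When `target_period` is shorter than the source period, neighbouring periods overlap (raising the pitch); when it is longer, gaps between marks stretch the sound.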
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Machine Translation (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10244166 | 2002-09-23 | ||
PCT/DE2003/003158 WO2004029929A1 (de) | 2002-09-23 | 2003-09-23 | Verfahren zur rechnergestützten sprachsynthese eines gespeicherten elektronischen textes zu einem analogen sprachsignal, sprachsyntheseeinrichtung und telekommunikationsgerät |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1554715A1 EP1554715A1 (de) | 2005-07-20 |
EP1554715B1 true EP1554715B1 (de) | 2010-04-14 |
Family
ID=32038177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03757683A Expired - Fee Related EP1554715B1 (de) | 2002-09-23 | 2003-09-23 | Verfahren zur rechnergestützten sprachsynthese eines gespeicherten elektronischen textes zu einem analogen sprachsignal, sprachsyntheseeinrichtung und telekommunikationsgerät |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1554715B1 (zh) |
CN (1) | CN100354928C (zh) |
DE (1) | DE50312627D1 (zh) |
WO (1) | WO2004029929A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102013219828A1 (de) * | 2013-09-30 | 2015-04-02 | Continental Automotive Gmbh | Verfahren zum Phonetisieren von textenthaltenden Datensätzen mit mehreren Datensatzteilen und sprachgesteuerte Benutzerschnittstelle |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105895075B (zh) * | 2015-01-26 | 2019-11-15 | 科大讯飞股份有限公司 | 提高合成语音韵律自然度的方法及系统 |
CN105895076B (zh) * | 2015-01-26 | 2019-11-15 | 科大讯飞股份有限公司 | 一种语音合成方法及系统 |
CN108231058A (zh) * | 2016-12-17 | 2018-06-29 | 鸿富锦精密电子(天津)有限公司 | 语音辅助测试系统及语音辅助测试方法 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1217610A1 (de) * | 2000-11-28 | 2002-06-26 | Siemens Aktiengesellschaft | Verfahren und System zur multilingualen Spracherkennung |
JP2002169581A (ja) * | 2000-11-29 | 2002-06-14 | Matsushita Electric Ind Co Ltd | 音声合成方法およびその装置 |
- 2003
- 2003-09-23 EP EP03757683A patent/EP1554715B1/de not_active Expired - Fee Related
- 2003-09-23 CN CNB038226553A patent/CN100354928C/zh not_active Expired - Fee Related
- 2003-09-23 WO PCT/DE2003/003158 patent/WO2004029929A1/de active Application Filing
- 2003-09-23 DE DE50312627T patent/DE50312627D1/de not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
WO2004029929A1 (de) | 2004-04-08 |
DE50312627D1 (de) | 2010-05-27 |
CN1685396A (zh) | 2005-10-19 |
CN100354928C (zh) | 2007-12-12 |
EP1554715A1 (de) | 2005-07-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE60035001T2 (de) | Sprachsynthese mit Prosodie-Mustern | |
US7558732B2 (en) | Method and system for computer-aided speech synthesis | |
KR900009170B1 (ko) | 규칙합성형 음성합성시스템 | |
EP0886853B1 (de) | Auf mikrosegmenten basierendes sprachsyntheseverfahren | |
DE69925932T2 (de) | Sprachsynthese durch verkettung von sprachwellenformen | |
DE60126564T2 (de) | Verfahren und Anordnung zur Sprachsysnthese | |
DE69028072T2 (de) | Verfahren und Einrichtung zur Sprachsynthese | |
DE69909716T2 (de) | Formant Sprachsynthetisierer unter Verwendung von Verkettung von Halbsilben mit unabhängiger Überblendung im Filterkoeffizienten- und Quellenbereich | |
DE69031165T2 (de) | System und methode zur text-sprache-umsetzung mit hilfe von kontextabhängigen vokalallophonen | |
DE69821673T2 (de) | Verfahren und Vorrichtung zum Editieren synthetischer Sprachnachrichten, sowie Speichermittel mit dem Verfahren | |
EP0504927B1 (en) | Speech recognition system and method | |
EP1159734B1 (de) | Verfahren und anordnung zur ermittlung einer merkmalsbeschreibung eines sprachsignals | |
DE60020434T2 (de) | Erzeugung und Synthese von Prosodie-Mustern | |
DE69937176T2 (de) | Segmentierungsverfahren zur Erweiterung des aktiven Vokabulars von Spracherkennern | |
DE2212472A1 (de) | Verfahren und Anordnung zur Sprachsynthese gedruckter Nachrichtentexte | |
EP0925578B1 (de) | Sprachverarbeitungssystem und verfahren zur sprachverarbeitung | |
DE19825205C2 (de) | Verfahren, Vorrichtung und Erzeugnis zum Generieren von postlexikalischen Aussprachen aus lexikalischen Aussprachen mit einem neuronalen Netz | |
DE69917960T2 (de) | Phonembasierte Sprachsynthese | |
EP3010014B1 (de) | Verfahren zur interpretation von automatischer spracherkennung | |
DE69727046T2 (de) | Verfahren, vorrichtung und system zur erzeugung von segmentzeitspannen in einem text-zu-sprache system | |
EP1554715B1 (de) | Verfahren zur rechnergestützten sprachsynthese eines gespeicherten elektronischen textes zu einem analogen sprachsignal, sprachsyntheseeinrichtung und telekommunikationsgerät | |
EP0058130B1 (de) | Verfahren zur Synthese von Sprache mit unbegrenztem Wortschatz und Schaltungsanordnung zur Durchführung des Verfahrens | |
EP1344211B1 (de) | Vorrichtung und verfahren zur differenzierten sprachausgabe | |
Kumar et al. | Significance of durational knowledge for speech synthesis system in an Indian language | |
Furtado et al. | Synthesis of unlimited speech in Indian languages using formant-based rules |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20050112 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR |
|
RBV | Designated contracting states (corrected) |
Designated state(s): DE FR GB IT |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: INFINEON TECHNOLOGIES AG |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): DE FR GB IT |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REF | Corresponds to: |
Ref document number: 50312627 Country of ref document: DE Date of ref document: 20100527 Kind code of ref document: P |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20110117 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20120614 AND 20120620 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20121213 AND 20121219 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL MOBILE COMMUNICATIONS GMBH, DE Free format text: FORMER OWNER: INFINEON TECHNOLOGIES AG, 85579 NEUBIBERG, DE Effective date: 20130315 Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL MOBILE COMMUNICATIONS GMBH, DE Free format text: FORMER OWNER: INFINEON TECHNOLOGIES AG, 85579 NEUBIBERG, DE Effective date: 20130314 Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL MOBILE COMMUNICATIONS GMBH, DE Free format text: FORMER OWNER: INTEL MOBILE COMMUNICATIONS TECHNOLOGY GMBH, 85579 NEUBIBERG, DE Effective date: 20130326 Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL MOBILE COMMUNICATIONS GMBH, DE Free format text: FORMER OWNER: INTEL MOBILE COMMUNICATIONS GMBH, 85579 NEUBIBERG, DE Effective date: 20130315 Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL DEUTSCHLAND GMBH, DE Free format text: FORMER OWNER: INTEL MOBILE COMMUNICATIONS GMBH, 85579 NEUBIBERG, DE Effective date: 20130315 Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL DEUTSCHLAND GMBH, DE Free format text: FORMER OWNER: INTEL MOBILE COMMUNICATIONS TECHNOLOGY GMBH, 85579 NEUBIBERG, DE Effective date: 20130326 Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL DEUTSCHLAND GMBH, DE Free format text: FORMER OWNER: INFINEON TECHNOLOGIES AG, 85579 NEUBIBERG, DE Effective date: 20130314 Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL DEUTSCHLAND GMBH, DE Free format text: FORMER OWNER: INFINEON TECHNOLOGIES AG, 85579 NEUBIBERG, DE Effective date: 20130315 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20140917 Year of fee payment: 12 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20140916 Year of fee payment: 12 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20140906 Year of fee payment: 12 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20150916 Year of fee payment: 13 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 50312627 Country of ref document: DE Owner name: INTEL DEUTSCHLAND GMBH, DE Free format text: FORMER OWNER: INTEL MOBILE COMMUNICATIONS GMBH, 85579 NEUBIBERG, DE |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: CD Owner name: INTEL DEUTSCHLAND GMBH, DE Effective date: 20160126 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150923 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20150923 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20160531 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150923 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20150930 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 50312627 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170401 |