EP1168298B1 - Method for assembling messages for speech synthesis - Google Patents
Method for assembling messages for speech synthesis
- Publication number
- EP1168298B1 (application EP01114995A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- segments
- sentence
- reproduced
- segment
- database
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/06—Elementary speech units used in speech synthesisers; Concatenation rules
- G10L13/07—Concatenation rules
Definitions
- the invention relates to a method for composing announcements for speech output, in particular to improving the reproduction quality of such speech output.
- the invention has for its object to provide a method for forming announcements from segments which takes into account the natural speech flow and thus leads to harmonious reproduction results.
- in Fig. 1, a list of four original sentences is shown, which can be played back as required by means of a voice output device, wherein each of these original sentences is divided by vertical bars into two or more segments 10.
- each of these four original sentences has the same meaning and, if one disregards the order, no differences in the letters and numbers used; nevertheless, significant differences between the individual original sentences appear when they are reproduced acoustically. This is due to the fact that, depending on the position of individual words or phrases in the sentence structure, different emphases may arise. If, for example, the sentence "turn left in 100 meters" is to be reproduced as an announcement and the segments 10.1 and 10.2 are used instead of the segments 10.4 and 10.3, the result is not a harmonious reproduction corresponding to the normal speech flow.
- the different sentences for an announcement are spoken and recorded by a speaker as so-called original sentences.
- the original sentences thus recorded are divided into segments 10, wherein each of these segments 10 is stored in an audio file.
- a set of search criteria is assigned to each original sentence.
- This group of search criteria is divided according to the segmentation of the original sentences, with each segment 10 being assigned a search criterion.
- the mutual assignment of audio files and search criteria takes place in a database 11, which is shown in more detail in FIG.
- alphanumeric character strings are used here as search criteria, where the character strings used as search criteria correspond to the textual representation of the associated segments 10 stored as audio files.
- the characters or character strings used as search criteria also identify those segments 10 whose textual content is the same. For example, it is conceivable to associate a segment identification number with each segment.
- the database 11 has further entries 12. According to the column headings, these entries 12 are the length (L) of the respective segment, its position (P) within the sentence, and two transition sounds or transition values (Ü_front, Ü_rear).
- to obtain the length value, the number of words within the assigned search criterion can be used in the present exemplary embodiment. This yields the length value 1 for the audio file or segment 10 associated with the search criterion "turn off", while the search criterion "in 100 meters" is assigned the length value 3, since the number "100" is regarded as a word.
- the words contained in the search criterion do not necessarily have to be used to obtain the length information.
- the entry 12 representing the position (P) is obtained, for example, by first determining the number of segments 10 or search criteria in each original sentence. If, for example, an original sentence is divided into three segments 10 during its segmentation, the first segment 10 is assigned the position value 0, the second segment 10 the position value 0.5 and the last of the three segments 10 the position value 1. However, if the original sentence is only divided into two segments 10 (as in the first two original sentences in Fig. 1), the first segment 10 receives the position value 0, while the second and last segment 10 receives the position value 1. If the original sentence consists of four segments 10, the first segment 10 has the position value 0, the second segment 10 the position value 0.33 and the third segment 10 the position value 0.66, while the last segment again receives the position value 1.
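The position convention just described (first segment 0, last segment 1, intermediate segments evenly spaced, truncated to two decimals) can be sketched as follows; the function name is illustrative and not taken from the patent.

```python
import math

def position_values(num_segments):
    """Position value P for each segment of an original sentence:
    0 for the first segment, 1 for the last, evenly spaced in between,
    truncated to two decimals as in the examples (0.33, 0.66)."""
    if num_segments == 1:
        return [0.0]
    return [math.floor(100 * k / (num_segments - 1)) / 100
            for k in range(num_segments)]
```

For two, three and four segments this yields [0, 1], [0, 0.5, 1] and [0, 0.33, 0.66, 1], matching the values given in the text.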
- transition values (Ü) are understood to mean the relationships of a segment 10 or search criterion to the segment 10 or search criterion preceding and following it. This relationship is established here via the last letter of the preceding segment 10 and the first letter of the following segment 10. A more detailed explanation is now given on the basis of the first original sentence (turn left in 100 meters) shown in Fig. 1. Since the first segment 10 or search criterion of this original sentence (in 100 meters) has no preceding segment 10 or search criterion, the front transition value of the entry relating to this segment 10, which has the index number 3 (Fig. 2), is noted as "empty".
- the use of individual letters as transition values (Ü) for the respective segment 10 is not mandatory. Rather, groups of letters or phonemes of the segments 10 preceding and following the respective segment 10 can also be used as transition values (Ü). In particular, the use of phonemes leads to a high-quality reproduction of audio files based on the data records according to Fig. 2.
- the entries 12 shown in Fig. 2 need not be limited to the length, the position and the two transition values. Rather, further entries 12 (not shown) may be provided to improve the quality of the announcements even more. Since there is a difference in accentuation between interrogative and exclamatory sentences, although the textual representation of the corresponding sentence is completely identical if punctuation marks are disregarded, a further column can be provided as an additional entry 12 in the database 11 according to Fig. 2, in which it is noted whether the respective segment 10 or search criterion originates from an interrogative or exclamatory sentence.
- the latter may, for example, be organized such that a "0" is assigned if the respective segment 10 originates from an original sentence which poses a question, and a "1" is written if the segment 10 has been taken from an original sentence which constitutes an exclamation.
- other punctuation marks which are suitable for bringing about differences in emphasis can also be included as entries 12 in the database 11 according to Fig. 2.
- the entire sentence intended for reproduction, "turn left in 100 meters", is placed in a format in which the search criteria of the corresponding segments 10 are present. Since, in the illustrated embodiment, the search criteria correspond to the textual representation of the audio files, the sentence to be reproduced is also brought into this format, if it is not already present in it. It is then checked whether there are one or more search criteria in the database 11 that are in complete agreement with the appropriately formatted sentence "turn left in 100 meters". According to the database shown in Fig. 2, this is not the case.
- the search string of the sentence to be reproduced (turn left in 100 meters) is therefore shortened by the last word "turn off", and it is examined whether the subset "left in 100 meters" occurs in this form in the database 11 as a search criterion. Since this comparison must also be negative due to the contents of the database 11, the sentence intended for reproduction is reduced by a further word. It is then again checked whether the thus reduced part of the sentence, "in 100 meters", occurs in the data records of the database 11 as a search criterion. According to the contents of the database 11, this can be affirmed for the records with the indices 3 to 6. This then leads to a buffering of the found indices 3 to 6.
- the parts of the sentence which were separated in the previous steps are reassembled in their original order ("turn left"), and it is examined whether there is at least one match in the search criteria of the database 11 for this sentence component.
- the records with the indices 9 and 10 are recognized as records in which the search criteria fully match the subset "turn left". These indices 9 and 10 are also cached.
- the search work is completed because the search string can be completely mapped by search criteria in the database 11.
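The suffix-shortening search described in the preceding steps can be sketched roughly as follows; the dictionary-based lookup and the function name are illustrative assumptions, not the patent's implementation.

```python
def find_segment_combination(sentence, criteria):
    """Sketch of the search: repeatedly shorten the remaining sentence
    from the end, word by word, until the remaining prefix matches a
    search criterion, then continue with the words that were cut off.
    `criteria` maps each search-criterion string to the indices of the
    database records carrying it.  Returns the cached index groups
    covering the sentence, or None if no full coverage exists."""
    words = sentence.split()
    cached = []
    while words:
        for cut in range(len(words), 0, -1):        # longest prefix first
            prefix = " ".join(words[:cut])
            if prefix in criteria:
                cached.append(criteria[prefix])     # buffer the found indices
                words = words[cut:]                 # search the separated rest
                break
        else:
            return None                             # sentence not coverable
    return cached
```

With a criteria table corresponding to Fig. 2, e.g. {"in 100 meters": [3, 4, 5, 6], "turn left": [9, 10]}, the sentence is covered by the index groups [3, 4, 5, 6] and [9, 10].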
- for the sentence to be reproduced, length and position information and transition values are then determined according to the same conventions that applied when the corresponding entries 12 in the database 11 were created; for each relevant combination, this length and position information and the respective transition values are cached.
- such caching is shown in Fig. 4 for the sentence "turn left in 100 meters", where the designation W indicates that these are the position and transition values of the segments in the sentence to be reproduced and not the values stored in the database 11.
- an evaluation of the combinations is carried out by determining a score B for each of these combinations with the aid of the entries 12 for the segments 10 or search criteria in the database 11 which are involved in the respective combination.
- a functional relationship f_n,i(n) is created for each entry n included in the formula.
- some or all of the functional relationships may be provided with a weighting factor W_n.
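The score formula itself appears in the original document only as an image. Based on the surrounding description (one weighted functional relationship per entry n, summed over the data records i participating in a combination), the score plausibly has the form sketched below; this reconstruction is an assumption, not the original drawing.

```latex
B = \sum_{i} \sum_{n} W_n \, f_{n,i}(n)
```

For the quantities named later (length, position, front and rear transition value) this would expand, per segment i, to

```latex
B = \sum_{i} \Bigl( W_L f_{L,i}(L) + W_P f_{P,i}(P)
      + W_{\ddot{U}} f_{\ddot{U},i}(\ddot{U}_{\mathrm{front}})
      + W_{\ddot{U}} f_{\ddot{U},i}(\ddot{U}_{\mathrm{rear}}) \Bigr)
```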
- the functional relationship f_L,i(L) is formed such that the value one is divided by the length value L of the entry (length) in the respective data record i. In this way, each record whose index participates in a combination receives a value that is at most one, as long as the weighting factor W_L for the length is equal to one, as assumed here.
- longer segments 10 thus yield smaller values f_L,i(L). These smaller values are to be preferred, because longer segments can better exploit an already existing sentence melody.
- an analogous functional relationship f_P,i(P) can be formed for the position information P.
- the functional relationships for the transition values, f_Ü,i(Ü_front) and f_Ü,i(Ü_rear), can be formed analogously to the preceding paragraph, by relating the cached transition values Ü_front,W and Ü_rear,W of Fig. 4 to the transition values Ü_front,D and Ü_rear,D of the corresponding data records from the database in such a way that a match yields zero and a mismatch yields a value greater than zero. A corresponding weighting factor W_Ü can again be used.
- the functional relationships for the front and rear transition values should suitably each be provided with a weighting factor W_Ü of 0.5.
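The evaluation just described can be sketched as follows. The length term uses f_L = 1/L as stated; the position term as an absolute difference and a mismatch penalty of 1 for the transition values are assumptions consistent with "zero on match, greater than zero otherwise"; the field and function names are illustrative.

```python
def score_combination(records, wanted):
    """Score B for one combination (smaller is better).
    `records`: database entries 12 of the participating segments, dicts
    with keys 'L', 'P', 'U_front', 'U_rear'.
    `wanted`: the cached values determined for the sentence to be
    reproduced (the W values of Fig. 4)."""
    W_L, W_P, W_U = 1.0, 1.0, 0.5        # weighting factors as assumed above
    b = 0.0
    for rec, want in zip(records, wanted):
        b += W_L * (1.0 / rec["L"])                         # f_L = 1 / length
        b += W_P * abs(rec["P"] - want["P"])                # assumed f_P
        b += W_U * (0.0 if rec["U_front"] == want["U_front"] else 1.0)
        b += W_U * (0.0 if rec["U_rear"] == want["U_rear"] else 1.0)
    return b
```

The combination with the smallest B value is then selected for playback.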
- the data records with the indices 3 and 5 are marked as duplicates and, according to a further convention, only the data record with the smallest index number is kept in the database.
- the deletion of the record with the index 5 means that in Fig. 4 no more combinations occur that have the serial numbers 5 and 6. Consequently, the consecutive numbers 5 and 6 are omitted in the table of Fig. 5, so that no B values are calculated for these combinations, and the combination 3/9 (serial number 1) is determined as the combination with the smallest B value.
- the audio files do not necessarily have to be stored in the database 11 according to Fig. 2. Rather, it is sufficient if the database 11 contains corresponding references to audio files stored in another location.
- the playback sentence "turn left in 100 meters" is assumed. If this sentence is received as a text string, it is first checked whether at least the beginning of this sentence matches a search criterion in the table according to Fig. 2. In this test, the table of Fig. 2 is searched from the rear, i.e. starting with the last entry. In the present case, this would be the data record with the index 10. During this test, the entry "in 100 meters", which has the index 6, is then found. Since the found entry "in 100 meters" cannot completely cover the playback sentence, the part which is not covered by the search criterion of the record just found is separated. The record with index 6 is also cached.
- the search for the part "turn left" of the playback sentence is continued, starting not at the end of the table according to Fig. 2 but after the position at which the last match (here the data record with the index 10) was found. This leads to the entry with the index 9 being found.
- the index 6 is also copied here and cached together with the found record with index 9 as a possible interim solution.
- the found part "turn left" is separated from the search string and the search for the remainder is started. Since, by separating the part "turn left", the search string no longer has any content, the combination of the indices 6, 9 is noted as a combination that completely covers the sentence to be played back.
- in a further pass, the found part "left" is separated instead, and the search is continued for the remaining part "turn off" in the search string.
- This search then causes the entry with index 2 to be found.
- the found part is then separated from the search string. Since the search string is now empty again, the combination of the data records with the indices 6, 8, 2 is stored as a combination which completely covers the playback sentence.
- the method then returns to the previous step, and the search for a match for the search string "turn off" is continued, again starting after the entry where the last match (here the record with the index 2) was found.
- the data record with the index 1 is found, which leads to the combination of the data records with the indices 6, 8, 1 being stored as a combination which completely covers the playback sentence.
- the search for combinations is then terminated when a certain predeterminable number of combinations, for example 10 combinations, has been found.
- this measure reduces the memory requirement and the required computing power.
- this combination limit is particularly advantageous if the search is carried out according to the last-named method. This is due to the fact that this search method tends to find longer segments first. Finding the longer segments first usually ensures that the best combination is already recognized among the first combinations, so that no loss of quality occurs.
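The second search method with its combination limit can be sketched as follows; the resumption position after a match is simplified here (each part is matched against the whole table from the rear), and the example table is an illustrative stand-in for Fig. 2.

```python
def enumerate_combinations(sentence, table, limit=10):
    """Collect up to `limit` index combinations that completely cover the
    playback sentence.  `table` is a list of (index, criterion) pairs;
    it is scanned from the rear, and each match is followed by a
    recursive search on the uncovered remainder of the sentence."""
    results = []

    def search(rest, chosen):
        if len(results) >= limit:
            return                               # combination limit reached
        if not rest:
            results.append(chosen)               # sentence fully covered
            return
        for idx, criterion in reversed(table):   # from the last table entry
            if rest == criterion or rest.startswith(criterion + " "):
                search(rest[len(criterion):].lstrip(), chosen + [idx])

    search(sentence, [])
    return results
```

With a table whose entries 6, 8 and 9 carry "in 100 meters", "left" and "left turn off" and whose entries 1 and 2 carry "turn off" (stand-in values), the longer segment is found first and the combinations appear in the order [6, 9], [6, 8, 2], [6, 8, 1].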
Claims (12)
- Method for assembling messages for speech synthesis from segments (10) originating from at least one original sentence stored as an audio file, in which a message intended for synthetic output is composed, from the stored audio files, of the segments (10) stored as audio files and selected by means of search criteria,
where at least one quantity characterizing its phonetic properties in the original sentence is associated with each segment (10), and, with the aid of the quantities (12) characterizing the phonetic properties of the individual segments (10) in the original sentence, it is checked whether the segments (10) forming the reproduced sentence to be output as a message are composed in a manner corresponding to its natural speech flow,
characterized in that- to select the segments (10) stored as audio files for a message, it is checked whether the reproduced sentence desired as a message coincides in its entirety with a search criterion stored in a database (11) together with an associated audio file, and, if this is not the case, the end of the respective reproduced sentence is shortened and matches with search criteria stored in the database (11) are then checked, until one or more matches are found for the remaining part of the reproduced sentence,- for those parts of the reproduced sentence which were separated in the preceding step, the check indicated in the last paragraph is continued,- for each combination of segments (10) whose search criteria completely match the reproduced sentence, it is checked whether the segments (10) forming the reproduced sentence to be output as a message are composed in a manner corresponding to its natural speech flow, and- to reproduce a desired message, the audio files of those segments (10) whose combination comes closest to the natural speech flow are used. - Method according to claim 1, characterized in that several quantities (12) characterizing its phonetic properties in the original sentence are associated with each segment (10).
- Method according to claim 1 or 2, characterized in that one of the following quantities is used as the quantity (12) characterizing the phonetic properties of the segments (10) in the respective original sentence:- length of the respective segment (10),- position of the respective segment (10) in the original sentence,- front and rear transition value of the respective segment (10) with respect to the preceding or following segment (10) in the original sentence.
- Method according to claim 3, characterized in that the length of the respectively associated search criterion is used as the length of the respective segment (10).
- Method according to claim 3 or 4, characterized in that the last or first letters, syllables or phonemes of the preceding or following segment (10) in the original sentence are used as transition values.
- Method according to one of the preceding claims, characterized in that, for a found combination of segments (10) which forms the reproduced sentence to be output as a message, a quantitative evaluation value is calculated from the quantities (12), characterizing the phonetic properties in the original sentence, of the individual segments (10), according to the following formula:
where f_n,i(n) is a functional relationship of the n-th quantity, i is an index characterizing the segment (10), and W_n is a weighting factor for the functional relationship of the n-th quantity. - Method according to claim 6, characterized in that a quantitative evaluation value B is calculated for each found combination of segments (10) which forms the reproduced sentence to be output as a message, and in that, from the found combinations of segments (10), that combination whose quantitative evaluation value B indicates that its segments (10) are composed in a manner corresponding to a natural speech flow is selected as the message to be synthetically reproduced.
- Method according to claim 6 or 7, characterized in that the quantitative evaluation value B is calculated from the functional relationships f_n(n) of at least the following quantities, length L and position P as well as front and rear transition values Ü_front, Ü_rear, of the segment (10), according to the following formula:
- Method according to one of the preceding claims, characterized in that the sentence to be synthetically reproduced is presented in a format corresponding to the search criteria, wherein alphanumeric character strings are preferably used for the search criteria and the transmitted reproduced sentences.
- Method according to one of the preceding claims, characterized in that the search criteria are arranged hierarchically in a database.
- Computer program for assembling messages for the purpose of speech synthesis, adapted so that a computer carries out the method according to one of the preceding claims when the computer program is executed on the computer.
- Device for assembling messages for speech synthesis, comprising:- a database (11) for storing segments of at least one original sentence as audio files, as well as quantities (12) characterizing phonetic properties of the segments in the original sentence,- means for comparing search criteria of a sentence to be reproduced with the segments located in the database,- means for combining segments stored in the database, and- means for evaluating the speech quality of a combination of segments,characterized in that the means for comparing search criteria are arranged to select the segments (10) stored as audio files by checking, for a message, whether the reproduced sentence desired as a message coincides in its entirety with a search criterion stored in the database (11) together with an associated audio file, and, if this is not the case, to shorten the end of the respective reproduced sentence and then check for matches with search criteria stored in the database (11), until one or more matches are found for the remaining part of the reproduced sentence,
the comparison means further being arranged so that those parts of the synthetically reproduced sentence which were separated in the preceding step continue to undergo the check indicated in the last paragraph,
and wherein the means for evaluating the speech quality are further arranged to check, for each combination of segments (10) whose search criteria completely match the reproduced sentence, whether the segments (10) forming the reproduced sentence to be synthetically output as a message are composed in a manner corresponding to its natural speech flow.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10031008 | 2000-06-30 | ||
DE10031008A DE10031008A1 (de) | 2000-06-30 | 2000-06-30 | Verfahren zum Zusammensetzen von Sätzen zur Sprachausgabe |
Publications (3)
Publication Number | Publication Date |
---|---|
EP1168298A2 (fr) | 2002-01-02 |
EP1168298A3 (fr) | 2002-12-11 |
EP1168298B1 (fr) | 2006-11-29 |
Family
ID=7646792
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP01114995A Expired - Lifetime EP1168298B1 (fr) | 2000-06-30 | 2001-06-20 | Procédé d'assemblage de messages pour la synthèse de la parole |
Country Status (5)
Country | Link |
---|---|
US (1) | US6757653B2 (fr) |
EP (1) | EP1168298B1 (fr) |
JP (1) | JP2002055692A (fr) |
AT (1) | ATE347160T1 (fr) |
DE (2) | DE10031008A1 (fr) |
Families Citing this family (124)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8645137B2 (en) | 2000-03-16 | 2014-02-04 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US7089184B2 (en) * | 2001-03-22 | 2006-08-08 | Nurv Center Technologies, Inc. | Speech recognition for recognizing speaker-independent, continuous speech |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US8036894B2 (en) * | 2006-02-16 | 2011-10-11 | Apple Inc. | Multi-unit approach to text-to-speech synthesis |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8027837B2 (en) * | 2006-09-15 | 2011-09-27 | Apple Inc. | Using non-speech sounds during text-to-speech synthesis |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8396714B2 (en) | 2008-09-29 | 2013-03-12 | Apple Inc. | Systems and methods for concatenation of words in text to speech synthesis |
US8352272B2 (en) | 2008-09-29 | 2013-01-08 | Apple Inc. | Systems and methods for text to speech synthesis |
US8352268B2 (en) | 2008-09-29 | 2013-01-08 | Apple Inc. | Systems and methods for selective rate of speech and speech preferences for text to speech synthesis |
US8712776B2 (en) | 2008-09-29 | 2014-04-29 | Apple Inc. | Systems and methods for selective text to speech synthesis |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US8380507B2 (en) | 2009-03-09 | 2013-02-19 | Apple Inc. | Systems and methods for determining the language to use for speech generated by a text to speech engine |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
WO2011089450A2 (fr) | 2010-01-25 | 2011-07-28 | Andrew Peter Nelson Jerram | Appareils, procédés et systèmes pour plateforme de gestion de conversation numérique |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US8994660B2 (en) | 2011-08-29 | 2015-03-31 | Apple Inc. | Text correction processing |
US9372902B2 (en) | 2011-09-23 | 2016-06-21 | International Business Machines Corporation | Accessing and editing virtually-indexed message flows using structured query langauge (SQL) |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
KR20150104615A (ko) | 2013-02-07 | 2015-09-15 | 애플 인크. | 디지털 어시스턴트를 위한 음성 트리거 |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
WO2014144579A1 (fr) | 2013-03-15 | 2014-09-18 | Apple Inc. | Système et procédé pour mettre à jour un modèle de reconnaissance de parole adaptatif |
CN105027197B (zh) | 2013-03-15 | 2018-12-14 | 苹果公司 | 训练至少部分语音命令系统 |
WO2014197336A1 (fr) | 2013-06-07 | 2014-12-11 | Apple Inc. | Système et procédé pour détecter des erreurs dans des interactions avec un assistant numérique utilisant la voix |
WO2014197334A2 (fr) | 2013-06-07 | 2014-12-11 | Apple Inc. | Système et procédé destinés à une prononciation de mots spécifiée par l'utilisateur dans la synthèse et la reconnaissance de la parole |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
WO2014197335A1 (fr) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interprétation et action sur des commandes qui impliquent un partage d'informations avec des dispositifs distants |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
EP3008641A1 (fr) | 2013-06-09 | 2016-04-20 | Apple Inc. | Dispositif, procédé et interface utilisateur graphique permettant la persistance d'une conversation dans un minimum de deux instances d'un assistant numérique |
AU2014278595B2 (en) | 2013-06-13 | 2017-04-06 | Apple Inc. | System and method for emergency calls initiated by voice command |
CN105453026A (zh) | 2013-08-06 | 2016-03-30 | Apple Inc. | Automatically activating intelligent responses based on activity from remote devices |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
TWI566107B (zh) | 2014-05-30 | 2017-01-11 | Apple Inc. | Method, non-transitory computer-readable storage medium, and electronic device for processing multi-part voice commands |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9606986B2 (en) | 2014-09-29 | 2017-03-28 | Apple Inc. | Integrated word N-gram and class M-gram language models |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179588B1 (en) | 2016-06-09 | 2019-02-22 | Apple Inc. | INTELLIGENT AUTOMATED ASSISTANT IN A HOME ENVIRONMENT |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
DK201770439A1 (en) | 2017-05-11 | 2018-12-13 | Apple Inc. | Offline personal assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770432A1 (en) | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
DK179560B1 (en) | 2017-05-16 | 2019-02-18 | Apple Inc. | FAR-FIELD EXTENSION FOR DIGITAL ASSISTANT SERVICES |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3797037A (en) * | 1972-06-06 | 1974-03-12 | Ibm | Sentence oriented dictation system featuring random accessing of information in a preferred sequence under control of stored codes |
DE3104551C2 (de) * | 1981-02-10 | 1982-10-21 | Neumann Elektronik GmbH, 4330 Mülheim | Electronic text generator for delivering short texts |
DE3642929A1 (de) * | 1986-12-16 | 1988-06-23 | Siemens Ag | Method for natural-sounding speech output |
US4908867A (en) * | 1987-11-19 | 1990-03-13 | British Telecommunications Public Limited Company | Speech synthesis |
JPH0477962A (ja) * | 1990-07-19 | 1992-03-12 | Sanyo Electric Co Ltd | Machine translation device |
CA2051135C (fr) * | 1991-09-11 | 1996-05-07 | Kim D. Letkeman | Compressed language dictionary |
CA2119397C (fr) * | 1993-03-19 | 2007-10-02 | Kim E.A. Silverman | Automatic speech synthesis using improved prosodic processing, spelling, and speaking rate of the text |
US5664060A (en) * | 1994-01-25 | 1997-09-02 | Information Storage Devices | Message management methods and apparatus |
DE19518504C2 (de) * | 1994-10-26 | 1998-08-20 | United Microelectronics Corp | Dynamically programmable announcement device |
GB2296846A (en) * | 1995-01-07 | 1996-07-10 | Ibm | Synthesising speech from text |
US5832434A (en) * | 1995-05-26 | 1998-11-03 | Apple Computer, Inc. | Method and apparatus for automatic assignment of duration values for synthetic speech |
JP3050832B2 (ja) * | 1996-05-15 | 2000-06-12 | ATR Interpreting Telecommunications Research Laboratories | Concatenative speech synthesizer using natural-utterance speech waveform signals |
JPH1097268A (ja) * | 1996-09-24 | 1998-04-14 | Sanyo Electric Co Ltd | Speech synthesis device |
JP3029403B2 (ja) * | 1996-11-28 | 2000-04-04 | Mitsubishi Electric Corp | Text data to speech conversion system |
US5913194A (en) * | 1997-07-14 | 1999-06-15 | Motorola, Inc. | Method, device and system for using statistical information to reduce computation and memory requirements of a neural network based speech synthesis system |
JPH1138989A (ja) * | 1997-07-14 | 1999-02-12 | Toshiba Corp | Speech synthesis device and method |
JPH1195796A (ja) * | 1997-09-16 | 1999-04-09 | Toshiba Corp | Speech synthesis method |
US6047255A (en) * | 1997-12-04 | 2000-04-04 | Nortel Networks Corporation | Method and system for producing speech signals |
JPH11305787A (ja) * | 1998-04-22 | 1999-11-05 | Victor Co Of Japan Ltd | Speech synthesis device |
US6266637B1 (en) * | 1998-09-11 | 2001-07-24 | International Business Machines Corporation | Phrase splicing and variable substitution using a trainable speech synthesizer |
US20030028380A1 (en) * | 2000-02-02 | 2003-02-06 | Freeland Warwick Peter | Speech system |
2000
- 2000-06-30 DE DE10031008A patent/DE10031008A1/de not_active Withdrawn
2001
- 2001-06-20 DE DE50111522T patent/DE50111522D1/de not_active Expired - Lifetime
- 2001-06-20 AT AT01114995T patent/ATE347160T1/de not_active IP Right Cessation
- 2001-06-20 EP EP01114995A patent/EP1168298B1/fr not_active Expired - Lifetime
- 2001-06-28 US US09/894,961 patent/US6757653B2/en not_active Expired - Lifetime
- 2001-06-29 JP JP2001199251A patent/JP2002055692A/ja active Pending
Non-Patent Citations (1)
Title |
---|
TAYLOR P. ET AL: "Speech synthesis by phonological structure matching", PROC. OF EUROSPEECH 99, vol. 2, 5 September 1999 (1999-09-05), BUDAPEST, pages 623 - 626 * |
Also Published As
Publication number | Publication date |
---|---|
JP2002055692A (ja) | 2002-02-20 |
ATE347160T1 (de) | 2006-12-15 |
US20020029139A1 (en) | 2002-03-07 |
DE10031008A1 (de) | 2002-01-10 |
DE50111522D1 (de) | 2007-01-11 |
EP1168298A3 (fr) | 2002-12-11 |
US6757653B2 (en) | 2004-06-29 |
EP1168298A2 (fr) | 2002-01-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1168298B1 (fr) | Method for assembling messages for speech synthesis | |
EP0285221B1 (fr) | Method for continuous speech recognition | |
EP0533260B1 (fr) | Method and device for recognizing words spoken in a speech signal | |
DE3878541T2 (de) | Method and device for generating a Markov model reference pattern of words | |
DE68928231T2 (de) | Method and apparatus for machine translation | |
DE68923981T2 (de) | Method for determining parts of text, and use thereof | |
DE3886080T2 (de) | Method and system for speech recognition | |
DE69917961T2 (de) | Phoneme-based speech synthesis | |
DE60035001T2 (de) | Speech synthesis with prosody patterns | |
DE69917415T2 (de) | Speech synthesis with prosody patterns | |
DE60126564T2 (de) | Method and arrangement for speech synthesis | |
DE3783154T2 (de) | Speech recognition system | |
DE60201262T2 (de) | Hierarchical language models | |
DE60020434T2 (de) | Generation and synthesis of prosody patterns | |
DE69937176T2 (de) | Segmentation method for extending the active vocabulary of speech recognizers | |
DE69829389T2 (de) | Text normalization using a context-free grammar | |
DE60118874T2 (de) | Prosody pattern comparison for text-to-speech systems | |
EP0925578B1 (fr) | Speech-processing system and method | |
EP0797185B1 (fr) | Method and device for speech recognition | |
DE2212472A1 (de) | Method and arrangement for speech synthesis of printed message texts | |
DE69917960T2 (de) | Phoneme-based speech synthesis | |
DE102006034192A1 (de) | Speech recognition method, system and device | |
WO2001018792A1 (fr) | Method for training graphemes according to phoneme rules for speech synthesis | |
EP0981129A2 (fr) | Method and system for executing a database query | |
EP1282897B1 (fr) | Method for producing a speech database for a target vocabulary for training a speech recognition system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: NOKIA CORPORATION |
|
PUAL | Search report despatched |
Free format text: ORIGINAL CODE: 0009013 |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
17P | Request for examination filed |
Effective date: 20030610 |
|
AKX | Designation fees paid |
Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20061129 Ref country code: IE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20061129 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED. Effective date: 20061129 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
GBT | Gb: translation of ep patent filed (gb section 77(6)(a)/1977) |
Effective date: 20061129 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: GERMAN Ref country code: CH Ref legal event code: EP |
|
REF | Corresponds to: |
Ref document number: 50111522 Country of ref document: DE Date of ref document: 20070111 Kind code of ref document: P |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20070228 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20070312 |
|
REG | Reference to a national code |
Ref country code: SE Ref legal event code: TRGR |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20070430 |
|
ET | Fr: translation filed |
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FD4D |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20070830 |
|
BERE | Be: lapsed |
Owner name: NOKIA CORPORATION Effective date: 20070630 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20070630 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20070630 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20070630 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20070301 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20070630 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20070620 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: NL Payment date: 20080603 Year of fee payment: 8 Ref country code: SE Payment date: 20080609 Year of fee payment: 8 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E Free format text: REGISTERED BETWEEN 20090409 AND 20090415 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: TP |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20061129 Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20070620 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20061129 |
|
NLV4 | Nl: lapsed or anulled due to non-payment of the annual fee |
Effective date: 20100101 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100101 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20090621 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 50111522 Country of ref document: DE Representative=s name: VON ROHR PATENTANWAELTE PARTNERSCHAFT MBB, DE |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 16 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20160621 Year of fee payment: 16 Ref country code: GB Payment date: 20160621 Year of fee payment: 16 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 17 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20170511 Year of fee payment: 17 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 50111522 Country of ref document: DE |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20170620 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180103 Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170620 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180630 |