WO2006097598A1 - Method for automatically producing voice labels in an address book - Google Patents

Method for automatically producing voice labels in an address book

Info

Publication number
WO2006097598A1
WO2006097598A1 PCT/FR2006/000497
Authority
WO
WIPO (PCT)
Prior art keywords
address book
user
contact
voice
name
Prior art date
Application number
PCT/FR2006/000497
Other languages
French (fr)
Inventor
Laurent Aubertin
Delphine Charlet
Original Assignee
France Telecom
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom filed Critical France Telecom
Priority to EP06726029A priority Critical patent/EP1859608A1/en
Publication of WO2006097598A1 publication Critical patent/WO2006097598A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/26Devices for calling a subscriber
    • H04M1/27Devices whereby a plurality of signals may be stored simultaneously
    • H04M1/271Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/26Devices for calling a subscriber
    • H04M1/27Devices whereby a plurality of signals may be stored simultaneously
    • H04M1/274Devices whereby a plurality of signals may be stored simultaneously with provision for storing more than one subscriber number at a time, e.g. using toothed disc
    • H04M1/2745Devices whereby a plurality of signals may be stored simultaneously with provision for storing more than one subscriber number at a time, e.g. using toothed disc using static electronic memories, e.g. chips
    • H04M1/2753Devices whereby a plurality of signals may be stored simultaneously with provision for storing more than one subscriber number at a time, e.g. using toothed disc using static electronic memories, e.g. chips providing data content
    • H04M1/2757Devices whereby a plurality of signals may be stored simultaneously with provision for storing more than one subscriber number at a time, e.g. using toothed disc using static electronic memories, e.g. chips providing data content by data transmission, e.g. downloading
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13378Speech recognition, speech analysis

Definitions

  • the present invention relates to a method of automatically creating voice tags in a first address book of a user from a second address book of said user.
  • the invention finds a particularly advantageous application in the field of the management of address books, whether they are embedded in a terminal or located in a telecommunication network.
  • various voice recognition systems are known from the prior art: - recognition without a voice tag: these technologies make it possible to create voice services with speech analysis without the user having to record beforehand the sequences he will use later. A character string is sufficient to build the recognition model. These systems are intended to create a textual reference associated with the result of the speech analysis performed. They are generally quite heavy and require significant computing power, which is why they are generally located in telecommunications networks. On the other hand, their field of application is broad, since they can be implemented whatever the speaker; one then speaks of "flexible" recognition. - recognition with a voice tag: other systems require the user to record sound sequences beforehand to create the models of the vocabulary words usable in the dialogue to be recognized; in general, two repetitions of a word are needed to create the associated recognition model. These systems are lighter and can be embedded, notably in mobile phones, but the need to pronounce the vocabulary words before first use is unergonomic, and this type of recognition is valid only for the speaker who initialized it. - mixed voice recognition:
  • Voice recognition mechanisms with and without tags can be combined. This combination makes the most of both technologies. Indeed, tagless voice recognition offers dynamic handling of new contacts, without prior recording, and the creation of contacts from the contact's textual reference, while tag-based voice recognition facilitates, for example, the recognition of names of foreign origin or with specific pronunciations that are not handled correctly by flexible recognition. This is notably the case for names of foreign origin whose phonetization is not done correctly and/or whose phonemes do not exist in the language of the system (for example, the Spanish first name "Jorge", whose phonemes corresponding to the letters "j", "g" and "r" do not exist in French).
  • address book services are also known, which consist of associating in a directory a contact defined by a name with at least one number of a communication mode of said contact, for example a mobile or fixed telephone number.
  • selecting the contact in the address book leads directly to the dialing of said number of a communication mode.
  • this type of address book service can be implemented directly in a user's terminal, a mobile phone for example, or in a telecommunications network. In the latter case, the user can access the service through an interface with the network.
  • while a network address book service can easily be equipped with voice recognition systems with or without tags, or mixed systems, the same is not true of terminals such as mobile phones, which can hardly accommodate tagless voice recognition systems given the computing power required, or tag-based voice recognition systems because of their cumbersome initialization.
  • the technical problem to be solved by the object of the present invention is therefore to propose a method for automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module, which would make it possible to create in the first address book a recognition system with voice tags in a manner that is very simple and transparent to the user, that is, without having to perform the tedious procedure of creating voice tags.
  • the solution to the technical problem posed consists, according to the present invention, in that said method comprises the following steps: for the user, - creating in said second address book a contact defined by a name and at least one number of a communication mode of said contact; - transmitting to said voice tag creation module a sound sequence containing at least the name of said contact; for the voice tag creation module, - creating from said sound sequence a sound file and associating it with said number of a communication mode of said contact in the second address book; for the user, - synchronizing said first address book with the second address book.
  • said sound sequence is created during the use of the second address book.
  • said first address book is located in a user's telephone terminal, and said second address book is a network address book of said user.
  • the voice tag creation module in the network is capable, after analyzing a sequence, of distinguishing at least the name of the contact and, possibly, the communication mode if, as provided by the invention, said sound sequence also contains said communication mode of the contact.
  • if the contact has only one number for a communication mode, its presence in the sound sequence is not mandatory.
  • These different parts, name and communication mode, can be identified and stored as sound files. They are then associated with the contact's telephone numbers. After synchronization, the contacts and sound sequences are found in the address book of the user's terminal, a mobile phone for example. The sound sequences are then directly usable by the tag-based voice recognition system embedded in the mobile phone.
  • thus, the voice recognition system embedded in the terminal has a voice tag corresponding to the contact, acquired and validated using another service, namely the network address book service, and this in a manner transparent to the user.
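The arrangement described above — each communication mode of a contact carrying a number and, optionally, a voice-tag sound file — can be sketched as a minimal data model. This is an illustrative sketch only; the patent specifies no data structures, and every name below is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CommunicationMode:
    label: str                          # e.g. "home" or "mobile"
    number: str                         # telephone number for this mode
    voice_tag: Optional[bytes] = None   # sound file from the tag creation module

@dataclass
class Contact:
    name: str
    modes: List[CommunicationMode] = field(default_factory=list)

    def attach_voice_tag(self, mode_label: str, sound: bytes) -> None:
        # Associate a sound file with the number of the named communication mode.
        for mode in self.modes:
            if mode.label == mode_label:
                mode.voice_tag = sound
                return
        raise KeyError(f"unknown communication mode: {mode_label}")

john = Contact("John", [CommunicationMode("home", "0123456789"),
                        CommunicationMode("mobile", "0612345678")])
john.attach_voice_tag("home", b"<audio segment: John at home>")
```

After synchronization, an embedded tag-based recognizer only needs the `voice_tag` bytes alongside each number, which is why a plain per-mode attribute suffices.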
  • said first address book is a network address book of the user
  • said second address book is located in a telephone terminal of said user.
  • a voice tag may be associated with a contact in the user's terminal, a mobile phone for example. After synchronization, this tag is transferred to the network address book, which is coupled to a voice recognition platform. If the platform is mixed, it will be able to take advantage of the voice tag to handle special cases, such as the names of foreign origin mentioned above, for which tagless voice recognition is inoperative.
  • an advantageous provision of the invention consists in that said voice tag created in the first address book is translated into a textual reference.
  • An example of application of this provision is the creation in the network address book of a voice tag from the user's terminal, followed by its translation into a textual reference by a generic voice recognition module, for first names for example, so as to be able to implement tagless voice recognition in the network address book, in particular if the service does not have mixed recognition means.
  • the invention also relates to a unit for automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module, notable in that said unit comprises: - means for creating in said second address book a contact defined by a name and at least one number of a communication mode of said contact; - means for transmitting to said voice tag creation module a sound sequence containing at least the name of said contact; - means for synchronizing said first address book with the second address book.
  • the invention also relates to a telephone terminal comprising an automatic voice tag creation unit according to the invention.
  • the invention relates to a computer program intended to be implemented in the terminal according to the invention to execute a method of automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module, said program comprising:
  • Figure 1 is a schematic diagram of a communication system between a user's terminal and a network address book service.
  • Figure 1 shows a terminal 10 of a user, such as a mobile or fixed telephone.
  • This terminal 10 is equipped with an embedded voice recognition system and an address book for managing contacts.
  • Said user is, moreover, subscribed to a network address book service 20 through a telecommunications network 1.
  • the elements of the network address book service 20 are as follows:
  • an address book module 21 proper, which provides the classical contact management services. However, compared with known network address book services, this module 21 manages at least one additional attribute, namely a voice tag associated with each communication mode of a given contact (mobile telephone, fixed line or any other communication mode).
  • a module 22 for voice access to the network address book which manages the voice exchanges with the user and accesses the data of said network address book.
  • a voice tag creation module 23, which is very strongly coupled to the voice access module 22. It identifies, in the dialogue between the voice server and the user, the sound sequences corresponding to the designation of a contact and its communication mode. When the number of repetitions obtained for the designation of a contact is sufficient to create the contact's recognition model in voice-tag mode, for example two, it provides the voice recognition module of the terminal 10 with the audio signal portions, or sound files, corresponding to the contacts. For example, it can provide segments corresponding to the pronunciation of "Paul" extracted from two uses of the network address book service: "Call Paul on his mobile" and "Call Paul at home".
  • a synchronization server 24, which makes it possible to synchronize contact data between the network address book module 21 and the address books of the fixed or mobile terminals benefiting from this function. Synchronization maintains consistent sets of similar data, for example by synchronizing the contacts of a network address book with the address book of a mobile phone.
  • the synchronization protocols (for example SyncML) make it possible to synchronize not only fields containing alphanumeric characters, but also files, for example a photo or a sound sequence associated with a contact.
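The effect of such a synchronization — contacts and their attached sound files ending up identical in both books — can be illustrated with a toy one-way sync. SyncML is a full protocol; nothing below reflects its actual message format, and all names are hypothetical.

```python
def synchronize(source_book: dict, target_book: dict) -> None:
    """One-way illustrative sync: copy each contact's numbers and any
    attached voice-tag sound files from the source book into the target."""
    for name, entry in source_book.items():
        target_book[name] = {
            "numbers": dict(entry.get("numbers", {})),
            "voice_tags": dict(entry.get("voice_tags", {})),
        }

network_book = {"John": {"numbers": {"home": "0123456789"},
                         "voice_tags": {"home": b"<audio: John at home>"}}}
terminal_book = {}
synchronize(network_book, terminal_book)
```

The point the patent relies on is simply that the sound file travels with the contact record, exactly like an alphanumeric field or a photo.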
  • the automatic voice tag creation method according to the invention can be illustrated by the following scenario:
  • the user, after having subscribed to the network address book service 20, creates, directly or through an application or service, a contact defined by the name "John" in the network address book.
  • the personal and mobile phone numbers of the "John" contact are also provided.
  • the user connects to the voice access module 22 of the network address book service 20 and pronounces the sound sequence "John at home", which is transmitted to the voice tag creation module 23; the module then associates the voice tag "John at home", created as a sound file, with the home phone number of the contact "John". It is understood that this voice tag is obtained through normal use of the network address book, without intervention or effort on the part of the user. This constitutes an essential advantage of the invention.
  • the voice tag creation module 23 likewise associates the voice tag "John on his mobile" with the mobile phone number of the contact "John".
  • the user then synchronizes the network address book with the address book of his terminal 10, which is equipped with a tag-based voice recognition system.
  • the "John" contact, the two telephone numbers and the two voice tags are then available on the terminal 10. If the user activates voice recognition on his terminal 10 and pronounces "John at home", the corresponding number is dialed by the terminal 10.
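The scenario above can be condensed into a short walk-through. The audio payloads and numbers below are placeholders, and the segmentation of an utterance into "name" and "mode" parts is reduced to storing the relevant segment directly.

```python
network_book = {}

def create_contact(name, numbers):
    # Step 1: the user creates the contact in the network address book.
    network_book[name] = {"numbers": dict(numbers), "voice_tags": {}}

def on_voice_access(name, mode, audio_segment):
    # Steps 2-3: the voice tag creation module stores the segment heard
    # during normal use of the service as the tag for that mode.
    network_book[name]["voice_tags"][mode] = audio_segment

create_contact("John", {"home": "0123456789", "mobile": "0612345678"})
on_voice_access("John", "home", b"<segment: John at home>")
on_voice_access("John", "mobile", b"<segment: John on his mobile>")

# Step 4: synchronization copies the contact, numbers and tags to the terminal.
terminal_book = {name: entry.copy() for name, entry in network_book.items()}
```

After step 4, the terminal's embedded tag-based recognizer can match "John at home" against the stored segment and dial the associated number.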
  • the user accesses the network address book service 20 and pronounces, assuming that only one repetition is necessary for creating the embedded voice recognition models:
  • the module 23 for creating voice tags identifies in the audio streams the speech segments corresponding to the pronunciations of "Laurent", "at home", "Pierre", "on his mobile". It can then:
  • a generic first-name voice recognition module makes it possible to generate the most probable first name from the voice tags. This module recognizes "Jerome".
  • a validation phase, for example by voice synthesis, can be useful: the system asks "Did you say 'Jerome'?"
  • the system then has the sequence of phonemes corresponding to the voice tag and can generate the textual reference, thus enriching the network address book, which in this example works only from textual entries.
  • the first-name voice recognition module can be replaced or supplemented by a phoneme sequence recognition module, in order to extract the most likely sequence, different from a first name, and thereby handle diminutives or rare names.
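The first-name recognition and validation steps can be sketched as follows. The scorer stands in for the generic acoustic model, which the patent does not describe; the candidate list, the toy letter-count scorer, and all function names are hypothetical.

```python
from typing import Callable, Optional

FIRST_NAMES = ["Jean", "Jerome", "Laurent", "Pierre"]

def most_probable_first_name(voice_tag: bytes,
                             score: Callable[[bytes, str], float]) -> str:
    # Return the candidate the (stand-in) recognizer scores highest.
    return max(FIRST_NAMES, key=lambda name: score(voice_tag, name))

def validate(candidate: str, user_confirms: Callable[[str], bool]) -> Optional[str]:
    # Validation phase, e.g. by voice synthesis: "Did you say 'Jerome'?"
    return candidate if user_confirms(candidate) else None

# Toy scorer: counts how often each candidate's letters occur in the tag bytes.
tag = b"jerome"
score = lambda audio, name: sum(audio.count(ch.encode()) for ch in name.lower())
best = most_probable_first_name(tag, score)
```

Only after the user confirms the candidate would the textual reference be written back into the network address book, enabling tagless recognition there.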

Abstract

The invention concerns a method for automatically producing voice labels in a user's first address book from a second address book of said user, said second address book being associated with a voice label creating module (23). The invention is characterized in that the method includes the following steps: for the user, creating in said second address book a contact defined by a name and at least one communication mode number of said contact; transmitting to said voice label creating module (23) an audio sequence containing at least said contact name; for the voice label creating module (23), creating from said audio sequence an audio file and associating same with said communication mode number of said contact; for the user, synchronizing said first address book with the second address book. The invention is applicable to address book management.

Description

METHOD FOR AUTOMATICALLY CREATING VOICE LABELS IN AN ADDRESS BOOK
The present invention relates to a method for automatically creating voice tags in a first address book of a user from a second address book of said user.
The invention finds a particularly advantageous application in the field of address book management, whether the address books are embedded in a terminal or located in a telecommunications network.
Various voice recognition systems are known from the prior art: - recognition without a voice tag: these technologies make it possible to create voice services with speech analysis without the user having to record beforehand the sequences he will use later. A character string is sufficient to build the recognition model. These systems are intended to create a textual reference associated with the result of the speech analysis performed. They are generally quite heavy and require significant computing power, which is why they are generally located in telecommunications networks. On the other hand, their field of application is broad, since they can be implemented whatever the speaker; one then speaks of "flexible" recognition. - recognition with a voice tag:
Other systems require the user to record sound sequences beforehand in order to create the models of the vocabulary words usable in the dialogue to be recognized. In general, they require two repetitions of a word to create the associated voice recognition model. These systems are lighter and can be embedded, notably in mobile phones. On the other hand, the need to pronounce the vocabulary words to create their models before first use is rather unergonomic and can be a brake on the adoption of this technology. It should also be noted that this type of voice recognition is valid only for the speaker who initialized it. - mixed voice recognition:
Voice recognition mechanisms with and without tags can be combined. This combination makes the most of both technologies. Indeed, tagless voice recognition offers dynamic handling of new contacts, without prior recording, and the creation of contacts from the contact's textual reference, while tag-based voice recognition facilitates, for example, the recognition of names of foreign origin or with specific pronunciations that are not handled correctly by flexible recognition. This is notably the case for names of foreign origin whose phonetization is not done correctly and/or whose phonemes do not exist in the language of the system (for example, the Spanish first name "Jorge", whose phonemes corresponding to the letters "j", "g" and "r" do not exist in French).
Address book services are also known, which consist of associating in a directory a contact defined by a name with at least one number of a communication mode of said contact, for example a mobile or fixed telephone number. Selecting the contact in the address book leads directly to the dialing of said number of a communication mode. This type of address book service can be implemented directly in a user's terminal, a mobile phone for example, or in a telecommunications network. In the latter case, the user can access the service through an interface with the network.
It will be understood that, while a network address book service can easily be equipped with voice recognition systems with or without tags, or mixed systems, the same is not true of terminals such as mobile phones, which can hardly accommodate tagless voice recognition systems given the computing power required, or tag-based voice recognition systems because of their cumbersome initialization.
Thus, the technical problem to be solved by the object of the present invention is to propose a method for automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module, which would make it possible to create in the first address book a recognition system with voice tags in a manner that is very simple and transparent to the user, that is to say without having to perform the tedious procedure of creating voice tags.
The solution to the technical problem posed consists, according to the present invention, in that said method comprises the following steps: for the user, - creating in said second address book a contact defined by a name and at least one number of a communication mode of said contact,
- transmitting to said voice tag creation module a sound sequence containing at least the name of said contact; for the voice tag creation module, - creating from said sound sequence a sound file and associating it with said number of a communication mode of said contact in the second address book; for the user,
- synchronizing said first address book with the second address book.
In particular, said sound sequence is created during use of the second address book.
According to a first embodiment of the invention, said first address book is located in a telephone terminal of the user, and said second address book is a network address book of said user.
In this case, the voice tag creation module in the network is capable, after analyzing a sequence, of distinguishing at least the name of the contact and, possibly, the communication mode if, as provided by the invention, said sound sequence also contains said communication mode of the contact. Note that if the contact has only one number for a communication mode, its presence in the sound sequence is not mandatory. These different parts, name and communication mode, can be identified and stored as sound files. They are then associated with the contact's telephone numbers. After synchronization, the contacts and sound sequences are found in the address book of the user's terminal, a mobile phone for example. The sound sequences are then directly usable by the tag-based voice recognition system embedded in the mobile phone.
Thus, the voice recognition system embedded in the terminal has a voice tag corresponding to the contact, acquired and validated using another service, namely the network address book service, and this in a manner transparent to the user.
According to another embodiment of the invention, said first address book is a network address book of the user, and said second address book is located in a telephone terminal of said user. In this case, a voice tag may be associated with a contact in the user's terminal, a mobile phone for example. After synchronization, this tag is transferred to the network address book, which is coupled to a voice recognition platform. If the platform is mixed, it will be able to take advantage of the voice tag to handle special cases, such as the names of foreign origin mentioned above, for which tagless voice recognition is inoperative.
Finally, an advantageous provision of the invention consists in that said voice tag created in the first address book is translated into a textual reference.
An example of application of this provision is the creation in the network address book of a voice tag from the user's terminal, followed by its translation into a textual reference by a generic voice recognition module, for first names for example, so as to be able to implement tagless voice recognition in the network address book, in particular if the service does not have mixed recognition means.
The invention also relates to a unit for automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module, notable in that said unit comprises:
- means for creating in said second address book a contact defined by a name and at least one number of a communication mode of said contact,
- means for transmitting to said voice tag creation module a sound sequence containing at least the name of said contact,
- means for synchronizing said first address book with the second address book.
The invention also relates to a telephone terminal comprising an automatic voice tag creation unit according to the invention.
Finally, the invention relates to a computer program intended to be implemented in the terminal according to the invention in order to execute a method of automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module, said program comprising:
- instructions for creating in said second address book a contact defined by a name and at least one number of a communication mode of said contact,
- instructions for transmitting to said voice tag creation module a sound sequence containing at least the name of said contact,
- instructions for synchronizing said first address book with the second address book.
The following description, with reference to the accompanying drawing, given by way of non-limiting example, will make clear what the invention consists of and how it can be carried out.
Figure 1 is a diagram of a communication system between a user's terminal and a network address book service.
Figure 1 shows a terminal 10 of a user, such as a mobile or fixed telephone. This terminal 10 is equipped with an embedded voice recognition system and an address book for managing contacts.
The user is also subscribed to a network address book service 20 through a telecommunications network 1.
The elements of the network address book service 20 are as follows:
- an address book module 21 proper, which provides the usual contact management services. Compared with known network address book services, however, this module 21 manages at least one additional attribute, namely a voice tag associated with each communication mode of a given contact (mobile telephone, fixed line, or any other communication mode).
- a voice access module 22 for the network address book, which manages the voice exchanges with the user and accesses the data of said network address book.
- a voice tag creation module 23, which is very tightly coupled to the voice access module 22. It identifies, in the dialogue between the voice server and the user, the sound sequences corresponding to the designation of a contact and of its communication mode. When the number of repetitions obtained for the designation of a contact is sufficient, for example two, to create the recognition model for that contact in tag-based mode, it supplies the voice recognition module of the terminal 10 with the audio signal portions, or sound files, corresponding to the contacts. For example, it can supply the segments corresponding to the pronunciation of "Paul" extracted from two uses of the network address book service: "call Paul on his mobile" and "call Paul at home". It can, of course, also supply the complete sequence "call Paul on his mobile" as a voice tag.
- a synchronization server 24, which synchronizes the contact data between the network address book module 21 and the address book of the fixed or mobile terminals provided with this function. Synchronization keeps similar data sets consistent, for example the contacts of a network address book and the address book of a mobile phone. Synchronization protocols (SyncML, for example) make it possible to synchronize not only fields containing alphanumeric characters, but also files, for example a photo or a sound sequence associated with a contact.
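A toy model of what the synchronization server 24 does is sketched below. This is not SyncML; the record layout and the timestamp-based last-writer-wins rule are illustrative assumptions. It only shows the essential point of the paragraph above: binary voice-tag files travel with the contact record, exactly like text fields.

```python
def synchronize(book_a, book_b):
    """Two-way sync: for each contact name, keep the most recently
    modified record; fields may include binary voice-tag files."""
    merged = {}
    for name in set(book_a) | set(book_b):
        a, b = book_a.get(name), book_b.get(name)
        if a is None or (b is not None and b["modified"] > a["modified"]):
            merged[name] = b
        else:
            merged[name] = a
    # Both books end up holding the merged view.
    book_a.clear(); book_a.update(merged)
    book_b.clear(); book_b.update(merged)

# Hypothetical records: the terminal's copy is newer and carries a voice tag.
network = {"Jean": {"modified": 1, "numbers": {"home": "0123"}, "voice_tags": {}}}
terminal = {"Jean": {"modified": 2, "numbers": {"home": "0123"},
                     "voice_tags": {"home": b"<wav: Jean at home>"}}}
synchronize(network, terminal)
```

After the call, the network book holds the voice tag recorded on the terminal, which is the transfer described in the embodiments above.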
The method for automatically creating voice tags according to the invention can be illustrated by the following scenario:
- the user, after subscribing to the network address book service 20, creates, directly or through an application or a service, a contact defined by the name "Jean" in the network address book. The home and mobile telephone numbers of the contact "Jean" are also entered.
- the user connects to the voice access module 22 of the network address book service 20 and pronounces the sound sequence "Jean at home", which is transmitted to the voice tag creation module 23; the latter then associates the voice tag "Jean at home", created in the form of a sound file, with the fixed telephone number of the contact "Jean". Note that this voice tag is obtained through normal use of the network address book, without any intervention or effort by the user. This is an essential advantage of the invention.
- similarly, after the user has said "Jean on his mobile", the voice tag creation module 23 associates the voice tag "Jean on his mobile" with the mobile telephone number of the contact "Jean".
- the user then synchronizes the network address book with the address book of his terminal 10, which is equipped with a tag-based voice recognition system. The contact "Jean", the two telephone numbers and the two voice tags are then available on the terminal 10.
- if the user activates voice recognition on his terminal 10 and says "Jean at home", the corresponding number is dialed by the terminal 10.
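The data model implied by this scenario can be sketched as one contact holding one voice tag per communication mode. All names, numbers and byte strings below are invented for illustration, and an exact byte comparison stands in for the terminal's acoustic matching:

```python
# Hypothetical contact record after synchronization: per-mode numbers
# and per-mode voice tags stored as sound files.
contact = {
    "name": "Jean",
    "numbers": {"home": "+33 1 23 45 67 89", "mobile": "+33 6 12 34 56 78"},
    "voice_tags": {"home": b"<wav: Jean at home>",
                   "mobile": b"<wav: Jean on his mobile>"},
}

def dial(contact, spoken_tag):
    """Return the number whose stored voice tag matches the utterance
    (byte equality stands in for tag-based speech recognition)."""
    for mode, tag in contact["voice_tags"].items():
        if tag == spoken_tag:
            return contact["numbers"][mode]
    return None
```

Saying "Jean at home" thus selects the fixed number, and "Jean on his mobile" the mobile number, without the user ever recording tags explicitly for the terminal.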
Another representative scenario of the method according to the invention is the following:
- the user creates on his terminal a contact named "Jorge". He records the sequence "Jorge" and associates it as a voice tag with the fixed telephone number of "Jorge".
- the user synchronizes the two address books. The contact "Jorge" is thus created in the network address book.
- the user accesses the network address book and pronounces "Jorge". This first name is not recognized by tagless speech recognition platforms if they do not cover the pronunciation of foreign phonemes. Hybrid technology, however, resolves this situation because it brings tag-based voice recognition into play.
The following scenario also illustrates the method according to the invention:
- the user accesses the network address book service 20 and pronounces, assuming that a single repetition is enough to create the embedded voice recognition models:
* "Laurent at home"
* "Pierre on his mobile"
If several repetitions are necessary, the scenario is simply extended to several repetitions.
- the voice tag creation module 23 identifies, in the audio streams, the speech segments corresponding to the pronunciations of "Laurent", "at home", "Pierre" and "on his mobile". It can then:
* create the new voice sequences by concatenation:
"Laurent on his mobile" and "Pierre at home", and thus allow the voice recognition module embedded in the terminal 10 to create the corresponding voice contacts, without these contacts ever having been explicitly spoken;
* create separate voice tags for "at home", "on his mobile", "Pierre" and "Laurent".
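The concatenation step above can be sketched as a cross product of name segments and mode segments. The byte strings below are placeholders for audio; a real implementation would splice waveform segments, which is an assumption this sketch does not attempt to model:

```python
def build_cross_tags(names, modes):
    """Concatenate recorded segments to synthesize voice tags that
    were never spoken as a whole (bytes stand in for audio data)."""
    return {(n, m): names[n] + b" " + modes[m] for n in names for m in modes}

# Segments extracted from "Laurent at home" and "Pierre on his mobile".
names = {"Laurent": b"<seg:Laurent>", "Pierre": b"<seg:Pierre>"}
modes = {"home": b"<seg:at home>", "mobile": b"<seg:on his mobile>"}
tags = build_cross_tags(names, modes)
```

Two spoken utterances thus yield four usable tags, including "Pierre at home" and "Laurent on his mobile", which the user never pronounced.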
A particularly advantageous application example of the method according to the invention is represented by the following scenario:
- the user has created the voice tag corresponding to "Jérôme" in the address book of his terminal 10. This contact is absent from the network address book.
- a generic speech recognition module for first names generates the most probable first name from the voice tags. This module recognizes "Jérôme".
- a validation phase, for example by speech synthesis, may then be useful: the system asks "did you say "Jérôme"?".
- if so, the system then has the phoneme sequence corresponding to the voice tag, can generate the textual reference, and can thus enrich the network address book, which in this example operates only on textual entries.
Note that the first-name speech recognition module can be replaced or supplemented by a phoneme sequence recognition module, which extracts the most probable sequence, other than a first name, in order to handle diminutives or rare names.
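The validate-then-enrich step of the scenario above can be sketched as follows. The function names and the record shape are assumptions; `confirm` abstracts away the speech-synthesis dialogue, which this sketch does not model:

```python
def enrich_network_book(network_book, recognized_name, confirm):
    """Ask the user to validate the recognized first name; on a positive
    answer, add the textual entry to the text-only network address book."""
    if confirm(f'did you say "{recognized_name}"?'):
        network_book.setdefault(recognized_name, {})
        return True
    return False

book = {}
added = enrich_network_book(book, "Jerome", lambda question: True)
```

Only validated names reach the network book, so a misrecognized tag never silently creates a wrong textual contact.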

Claims

1. A method of automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module (23), characterized in that said method comprises the steps of:
for the user,
- creating in said second address book a contact defined by a name and at least one number of a communication mode of said contact,
- transmitting to said voice tag creation module (23) a sound sequence containing at least the name of said contact,
for the voice tag creation module (23),
- creating from said sound sequence a sound file and associating it with said number of a communication mode of said contact in the second address book,
for the user,
- synchronizing said first address book with the second address book.
2. The method as claimed in claim 1, characterized in that said sound sequence is created during use of the second address book.
3. The method as claimed in claim 1 or 2, characterized in that said sound sequence also contains said communication mode of the contact.
4. The method as claimed in any one of claims 1 to 3, characterized in that said voice tag created in the first address book is translated into a textual reference.
5. The method as claimed in any one of claims 1 to 4, characterized in that said first address book is located in a telephone terminal (10) of the user, and in that said second address book is a network address book of said user.
6. The method as claimed in any one of claims 1 to 4, characterized in that said first address book is a network address book of the user, and in that said second address book is located in a telephone terminal (10) of said user.
7. The method as claimed in claim 5 or 6, characterized in that said telephone terminal is a mobile phone (10).
8. A unit for automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module (23), characterized in that said unit comprises:
- means for creating in said second address book a contact defined by a name and at least one number of a communication mode of said contact,
- means for transmitting to said voice tag creation module (23) a sound sequence containing at least the name of said contact,
- means for synchronizing said first address book with the second address book.
9. A telephone terminal comprising an automatic voice tag creation unit as claimed in claim 8.
10. A computer program intended to be implemented in the terminal as claimed in claim 9 in order to execute a method of automatically creating voice tags in a first address book of a user from a second address book of said user, said second address book being associated with a voice tag creation module, said program comprising:
- instructions for creating in said second address book a contact defined by a name and at least one number of a communication mode of said contact,
- instructions for transmitting to said voice tag creation module a sound sequence containing at least the name of said contact,
- instructions for synchronizing said first address book with the second address book.
PCT/FR2006/000497 2005-03-16 2006-02-28 Method for automatically producing voice labels in an address book WO2006097598A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP06726029A EP1859608A1 (en) 2005-03-16 2006-02-28 Method for automatically producing voice labels in an address book

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0502620 2005-03-16
FR0502620 2005-03-16

Publications (1)

Publication Number Publication Date
WO2006097598A1 true WO2006097598A1 (en) 2006-09-21

Family

ID=35241022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FR2006/000497 WO2006097598A1 (en) 2005-03-16 2006-02-28 Method for automatically producing voice labels in an address book

Country Status (2)

Country Link
EP (1) EP1859608A1 (en)
WO (1) WO2006097598A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018130048A1 (en) * 2017-01-13 2018-07-19 北京搜狗科技发展有限公司 Contact adding method, electronic device and server

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0945851A2 (en) * 1998-03-27 1999-09-29 International Business Machines Corporation Extending the vocabulary of a client-server speech recognition system
WO2000065807A1 (en) * 1999-04-22 2000-11-02 Siemens Aktiengesellschaft Generation of a reference-model directory for a voice-controlled communications device
EP1215661A1 (en) * 2000-12-14 2002-06-19 TELEFONAKTIEBOLAGET L M ERICSSON (publ) Mobile terminal controllable by spoken utterances
EP1220200A1 (en) * 2000-12-18 2002-07-03 Siemens Aktiengesellschaft Method and system for speaker independent recognition for a telecommunication or data processing device
US20020141546A1 (en) * 2001-01-31 2002-10-03 Gadi Inon Telephone network-based method and system for automatic insertion of enhanced personal address book contact data

Also Published As

Publication number Publication date
EP1859608A1 (en) 2007-11-28

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase (Ref document number: 2006726029; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: RU)
WWW Wipo information: withdrawn in national office (Country of ref document: RU)
WWP Wipo information: published in national office (Ref document number: 2006726029; Country of ref document: EP)