EP1370995A1 - Method and communication system for generating response messages - Google Patents

Method and communication system for generating response messages

Info

Publication number
EP1370995A1
Authority
EP
European Patent Office
Prior art keywords
messages
text
communication system
database
stored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02703576A
Other languages
German (de)
English (en)
Inventor
Wolfgang Jugovec
Shaun Baker
Markus Von Arx
Marcel Henggeler
Matthias Giger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Schweiz AG
Original Assignee
Siemens Schweiz AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Schweiz AG filed Critical Siemens Schweiz AG
Priority to EP02703576A priority Critical patent/EP1370995A1/fr
Publication of EP1370995A1 publication Critical patent/EP1370995A1/fr
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M3/00 - Automatic or semi-automatic exchanges
    • H04M3/42 - Systems providing special services or facilities to subscribers
    • H04M3/487 - Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493 - Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M2201/00 - Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/40 - Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M2201/00 - Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/60 - Medium conversion

Definitions

  • the present invention relates to a method for generating response messages to incoming messages with a communication system and a communication system for generating response messages to incoming messages according to the preamble of patent claims 1 and 10.
  • Language processing includes, on the one hand, a conversion from speech to text - usually called speech recognition - and, on the other hand, a conversion from text to speech - usually called speech synthesis.
  • a voice recorder - also called an answering machine - that is dialed directly or activated when an agent is unavailable;
  • WO 99/07118 specifies a communication system which allows spoken messages contained in e-mails to be compared with a time management system with regard to appointment data such as date/time, location and possibly subject, and which automatically generates an e-mail that contains either the originally agreed or the new appointment data.
  • The present invention is therefore based on the object of specifying a method and a communication system of the type mentioned at the outset which enable automated generation of response messages and can be adapted to different products for speech synthesis and/or speech recognition with little integration effort.
  • Incoming calls are automatically analyzed regardless of their type; calls that can be converted into database queries lead to response messages generated by a database, which are either used directly as text response messages or, after conversion in a speech module, routed to the origin as spoken responses, while calls that cannot be converted are assigned to an agent for personal follow-up.
  • The communication system specified in claim 10 is divided into functional units, so that the products used for the various units can easily be exchanged and distributed among different servers of a computer system.
  • In method step D1, prior to the conversion in the speech output module, the response message is processed in a text preprocessing unit contained in the speech output module: digits contained in the response message are grouped, and foreign-language words contained in the response message are supplemented phonetically. The generated response message can thus be easily adapted to the different conventions of the respective language regions or countries, which increases acceptance of such communication systems (claim 7).
  • The switching unit and/or the voice input module and/or the voice output module is implemented in a manner distributed over several servers connected via a network;
  • the communication system according to the invention can thus be implemented in a scalable manner, adapted to the respective application and to the respective server environment (claim 14).
  • The interfaces to the speech synthesis unit or to the speech recognition unit are designed such that different speech synthesis units or different speech recognition units can be interchanged on the different servers; different products can be used in parallel without the need for reintegration, and by using different products in parallel, specific product properties can be exploited or avoided (claim 15).
  • FIG. 1 structure of a communication system according to the invention specified in functional units
  • FIG. 2 implementation of the method according to the invention on a computer system
  • FIG. 1 shows a structure, given in functional units, of an exemplary embodiment of a communication system according to the invention.
  • The specified paths are not necessarily to be regarded as physical connections or links, but preferably as logical or functional paths.
  • The communication system shown in FIG. 1 firstly contains the functional units dialog machine 5 and switching unit 4, the latter being connected to a public or private network via external interfaces 60.
  • This interface 60 can be connected to a packet network, e.g. one based on the Internet Protocol.
  • the communication system contains a voice input module 53 and a voice output module 52.
  • A database 9, which contains a knowledge or information base, is connected to the communication system via a further interface 69.
  • Dialog machine 5 is connected to terminals via interfaces 68.
  • Workstation systems designed as personal computers are preferably provided as terminals, which, in addition to keyboard and screen, also have voice input/output media such as a headset.
  • the interface 68 is preferably designed as a local area network (LAN).
  • A web server can be assigned to the switching unit 4, which contains the content and control information required by the dialog machine 5 for interaction with the agents or with users communicating via the interface 60. It is also possible for the user terminals themselves to be connected to the LAN 68. It is also possible to implement the web server as part of the switching unit 4.
  • An incoming message designed as a call automatically leads to the generation of a reply message.
  • Incoming calls via the interface 60 to the switching unit 4 are answered by the dialog machine 5 with a corresponding spoken message.
  • These information texts are preferably stored in the speech output module 52 as directly addressable audio files.
  • The call is analyzed in terms of type and origin and stored together with the spoken call text in a first memory area as a typed address TYPE_ADDR.
  • An example of the structure of a typed address TYPE_ADDR is given in Table 1 below:
  • In the field ADDR_SRC of the typed address TYPE_ADDR, either the e-mail address noted as the sender - for example in the case of pure IP voice transmission - or the phone number according to the ISDN feature CLIP (Calling Line Identification Presentation) is stored.
  • The incoming path is entered in the field ADDR_SRC_PATH for the origin; this is relevant for the subsequent return of the response message.
  • The received spoken message text, i.e. a call, is preferably stored as an audio file on a mass storage device, which is referred to below as the second memory area (not shown in FIG. 1).
  • the file can be saved directly in the received format, e.g. .wav or .mp3.
  • A pointer is preferably provided for linking the typed address TYPE_ADDR with the message. The link can be provided on both sides, i.e. in addition to the pointer in the typed address, a pointer back to the typed address TYPE_ADDR is also stored with the saved file.
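  • Since Table 1 is not reproduced in this text, the following minimal Java sketch only illustrates how such a typed address could be modelled; it contains solely the fields named in the description, and their types as well as the class and field names are assumptions.

```java
// Illustrative sketch of a typed address TYPE_ADDR as a plain Java class.
// Only fields that are named in the description are listed; types, names
// and any omitted fields of Table 1 are assumptions.
public class TypedAddress {
    public String addrSrcType;     // ADDR_SRC_TYPE: kind of origin, e.g. phone number or e-mail
    public String addrSrc;         // ADDR_SRC: CLIP phone number or sender e-mail address
    public String addrSrcPath;     // ADDR_SRC_PATH: incoming path, relevant for returning the response
    public String addrSrcCoding;   // ADDR_SRC_CODING: coding of the received audio, e.g. ".wav" or ".mp3"
    public String addrSrcMsgType;  // ADDR_SRC_MSG_TYPE: type of the incoming message
    public String inputAddress;    // INPUT_ADDRESS: used for the semantic analysis
    public String dateTimeAnswer;  // DATE_TIME_ANSWER: desired time of the response
    public String answerType;      // ANSWER_TYPE: desired type of the response message
    public String status;          // STATUS: processing status, e.g. "READY_FOR_DISPATCHING"
    public String ptrMessageFile;  // pointer to the stored call in the second memory area
    public String ptrAnswerFile;   // PTR_ANSWER_FILE: pointer to the stored answer file
}
```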
  • Method step B1: The incoming call is further processed in a voice input module 53, which contains the already mentioned acoustic preprocessing unit 6, a speech recognition unit 3 and a text output dispatcher 7.
  • the flow of information or data is generally designated in the voice input module 53 with the reference symbol 63; in the explanations below, the respective interface or the respective format between the individual units is specifically indicated.
  • The stored call is first fed to the acoustic preprocessing unit 6. Based on the information in the fields ADDR_SRC_CODING, ADDR_SRC_PATH and ADDR_SRC_MSG_TYPE, the acoustic preprocessing unit 6 can, for example, correct a systematic acoustic deviation from a standard level or suppress noise; preferably, a conversion to a uniform file format is also carried out in the acoustic preprocessing unit 6.
  • the incoming call is fed to the speech recognition unit 3 and this generates a text file which contains the content of the spoken message as text.
  • This text file is then fed to a text output dispatcher 7, in which a semantic analysis takes place.
  • a query file QUERY_FILE is generated according to a defined syntax.
  • the structure of such a query file QUERY_FILE is given as an example in Table 2:
  • the query file QUERY_FILE is transmitted to a database 9 using a command COMMAND supplied to the switching unit 4 via the control interface 66.
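  • The following Java sketch outlines how the text output dispatcher 7 could turn the recognized text into a query file QUERY_FILE and hand it to the switching unit 4 as a command COMMAND. Since Table 2 is not reproduced in this text, the key/value layout of the query file, the SwitchingUnit interface and all class and method names are illustrative assumptions; the TypedAddress class is the sketch given above.

```java
// Hedged sketch of method step B1 from the viewpoint of the text output
// dispatcher 7. The QUERY_FILE layout and all names are assumptions.
import java.util.Optional;

public class TextOutputDispatcher {

    /** Assumed shape of the control interface 66 towards the switching unit 4. */
    public interface SwitchingUnit {
        void command(String commandName, String payload);
    }

    private final SwitchingUnit switchingUnit;

    public TextOutputDispatcher(SwitchingUnit switchingUnit) {
        this.switchingUnit = switchingUnit;
    }

    /** Very simplified "semantic analysis"; an empty result means "not convertible". */
    Optional<String> buildQueryFile(String recognizedText, TypedAddress addr) {
        if (recognizedText.toLowerCase().contains("address of")) {
            String queryFile = "QUERY_TYPE=ADDRESS_INFO\n"
                             + "QUERY_TEXT=" + recognizedText + "\n"
                             + "REPLY_TO=" + addr.addrSrc;
            return Optional.of(queryFile);
        }
        return Optional.empty();
    }

    public void dispatch(String recognizedText, TypedAddress addr) {
        Optional<String> queryFile = buildQueryFile(recognizedText, addr);
        if (queryFile.isPresent()) {
            // The switching unit 4 forwards the QUERY_FILE to the database 9.
            switchingUnit.command("QUERY", queryFile.get());
        } else {
            // Not convertible: forward the call to an agent, e.g. by e-mail.
            switchingUnit.command("FORWARD_TO_AGENT", addr.ptrMessageFile);
        }
    }
}
```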
  • The response message generated by the database 9 is preferably stored as a structured file by the switching unit 4 in the second memory area, and the corresponding typed address is linked to this answer file; for this purpose a field PTR_ANSWER_FILE is provided in the typed address.
  • a field INPUT_ADDRESS contained in the typed address can be used for the aforementioned semantic analysis.
  • From this, a rule can be derived as to which type of response message the calling person wishes. This is particularly important if the communication system according to the invention is to handle completely different categories of incoming messages and generate the corresponding responses, for example address information for a wide audience and network status information for a narrowly limited group of customers of a network operator.
  • A corresponding command COMMAND is transmitted to the switching unit 4 via the control interface 66; this command causes the incoming call to be sent to an agent, e.g. by e-mail.
  • This e-mail contains the typed address on the one hand and the call on the other hand, e.g. as a so-called attachment in .wav format.
  • The typed address in this e-mail may contain only those fields that are required for processing by an agent.
  • the content is preferably converted into a user-friendly format for display.
  • the delivery to an agent need not be personalized, but a single incoming mailbox can be provided for all agents, which they have to process sequentially.
  • a corresponding command COMMAND is transmitted to the switching unit 4 via the control interface 66 and the incoming call is forwarded to an agent as described above by means of an e-mail.
  • The aforementioned delivery as well as the forwarding of calls take place independently in the dialog machine 5.
  • Method step D1: The response message stored in text form in the second memory area is fed to the text preprocessing unit 1 contained in the speech output module 52.
  • The flow of information or data in the speech output module 52 is generally designated by the reference symbol 62; in the explanations below, the respective interface or format is specifically referred to.
  • The following processing steps are applied to the response message, which is in text form.
  • In the text preprocessing unit 1, which is supported by a phonetic or a syntactic lexicon, the text file is adapted to the conventions of the respective diction.
  • Telephone numbers of the type "0714953286" are not to be pronounced as a single number, but as a sequence of digit groups. Accordingly, the aforementioned number should be divided into the sequence "0 71 495 32 86".
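  • The following minimal Java sketch illustrates this grouping step of the text preprocessing unit 1. The fixed 1-2-3-2-2 pattern simply reproduces the example above for 10-digit numbers; a real implementation would choose the pattern per language region from the lexicon, and the class and method names are assumptions.

```java
// Minimal sketch of the digit-grouping step in the text preprocessing unit 1.
// The grouping pattern only mirrors the "0 71 495 32 86" example from the
// description; class and method names are illustrative assumptions.
public class DigitGrouping {

    /** Splits a 10-digit telephone number into the groups 1-2-3-2-2. */
    public static String groupPhoneNumber(String digits) {
        if (!digits.matches("\\d{10}")) {
            return digits;              // leave other inputs untouched in this sketch
        }
        int[] groupLengths = {1, 2, 3, 2, 2};
        StringBuilder grouped = new StringBuilder();
        int pos = 0;
        for (int len : groupLengths) {
            if (grouped.length() > 0) {
                grouped.append(' ');
            }
            grouped.append(digits, pos, pos + len);
            pos += len;
        }
        return grouped.toString();
    }

    public static void main(String[] args) {
        // Prints "0 71 495 32 86" for the example number from the description.
        System.out.println(groupPhoneNumber("0714953286"));
    }
}
```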
  • The text file prepared in this way is converted into a speech output file in the speech synthesis unit 2. This speech output file is preferably in .mp3 format and is supplied to the speech output dispatcher 8.
  • The voice output dispatcher 8 supplements the above-mentioned speech output file with so-called "voice prompts", i.e. spoken text modules, in order to transmit the information to the caller in a customary form. Examples of such text modules are: "The desired address is:" or "Thank you for your call."
  • Via the control interface 61, a corresponding command COMMAND is transmitted to the switching unit 4, which reports the successful generation of a speech output file.
  • This speech output file is preferably also stored in the second memory area.
  • A pointer to the speech output file is set in the typed address and the status is updated, e.g. to READY_FOR_DISPATCHING. Depending on the information in the typed address, the response file will either be sent back to the original address as an e-mail or be played after a connection has been established (CONNECT) to the original address.
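  • A possible shape of this dispatching decision is sketched below in Java. The MessageDispatcher interface, the field values compared against and the method names are assumptions; the TypedAddress class is the sketch given further above.

```java
// Hedged sketch of dispatching a finished response file on the basis of the
// typed address. Interface, field values and names are assumptions.
public class AnswerDispatcher {

    public interface MessageDispatcher {
        void sendEmail(String address, String answerFilePath);      // response as e-mail attachment
        void connectAndPlay(String address, String answerFilePath); // CONNECT and play the file
    }

    private final MessageDispatcher dispatcher;

    public AnswerDispatcher(MessageDispatcher dispatcher) {
        this.dispatcher = dispatcher;
    }

    public void dispatch(TypedAddress addr) {
        if (!"READY_FOR_DISPATCHING".equals(addr.status)) {
            return; // speech output file not yet available
        }
        if ("EMAIL".equals(addr.answerType)) {
            dispatcher.sendEmail(addr.addrSrc, addr.ptrAnswerFile);
        } else {
            dispatcher.connectAndPlay(addr.addrSrc, addr.ptrAnswerFile);
        }
        addr.status = "DISPATCHED";
    }
}
```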
  • The information in the typed address, in particular that in the STATUS field, also makes it possible to answer an incoming call in real time using the method steps explained above. An iteration of the method steps for speech recognition and for speech synthesis then takes place immediately before method step B1 in accordance with the dialog to be conducted. This iteration is carried out until the query can be formed by the speech recognition unit 3 and the text output dispatcher 7.
  • a message received in text form preferably has an agreed format that is either generated by an application for the requesting person or is created directly by the requesting person.
  • A format is expediently agreed which has a structure as shown in Table 2. Additional fields can be provided in which, for example, the desired response type or the desired response time can be specified. Specifying the time of response is advantageous for offering an additional service in which, for example, a mobile subscriber can receive, during his travel time, certain information that depends on the time of the query in the database, for example status data of a network.
  • An embodiment of the present invention which can be provided cumulatively with the one described above under "Format I" is described below.
  • The method step A1 does not differ from the case in which a call is treated as the incoming message.
  • To supplement the method step A1 explained under I, the information in the typed address can be used to control the entire process sequence.
  • The fields ADDR_SRC_TYPE, ADDR_SRC, DATE_TIME_ANSWER and ANSWER_TYPE, as well as the update of the processing status of an incoming message in the STATUS field via the control interfaces 61 and 66, can be directly assigned to a unit in the two modules 53 and 52 or to the database 9.
  • It is also possible, without affecting the specified information flow, for the individual units to pass files through transparently on the basis of the information in the fields of the typed address, i.e. to forward them without executing a processing step.
  • the incoming message is sent to the text output dispatcher 7, in which the generation of a query file QUERY_FILE described under B1 is carried out.
  • The semantic analysis mentioned can also be omitted, since in particular no filler words are to be expected in a received text message. Nevertheless, the case should be covered in which the text output dispatcher 7 cannot generate a query file. If this occurs, a corresponding command COMMAND is transmitted to the switching unit 4 via the control interface 66, and the relevant message is either provided with an explanatory text and sent back to the original address or assigned to an agent in the form of an e-mail.
  • If a query file QUERY_FILE can be successfully generated, further processing is carried out in accordance with the method steps C1 and D1 described above.
  • the method steps explained above in the speech input module 53 and in the speech output module 52 are independent of one another in accordance with the respective application and the respective origin of the messages and can therefore be freely combined.
  • the further processing of the response message generated by the speech synthesis unit is also independent of e.g. acoustic preprocessing.
  • FIG. 2 shows a preferred implementation of the method according to the invention on a computer system.
  • the reference numerals 10, 20, 30 and 40 represent four servers, each of which has a processor system and a mass storage device. These servers are connected to one another via a local network 48.
  • The terminals provided for the agents are connected to this network 48 directly or, for example, via routers. Gateways or an exchange can be provided for the connection to the outside, which convert an incoming call from the public circuit-switched network into a packet-oriented format, for example one based on the Internet Protocol; via further routers and possibly a firewall, the local network can also be connected directly to an Internet service provider (ISP).
  • The conversions to be carried out in the speech synthesis unit 2 and the speech recognition unit 3 in accordance with the method steps B1 and D1 require high computing power.
  • The aforementioned units 2 and 3 are distributed over the servers 10, 20 and 30, i.e. each is fully implemented once per server; this is indicated by the reference numerals 12, 13; 22, 23 and 32, 33. This enables a parallel mode of operation, which significantly increases both the processing capacity and the redundancy of the method according to the invention.
  • the switching unit 4 is also implemented in triplicate.
  • the dialog machine 5 is assigned to the fourth server 40.
  • The load distribution over the aforementioned three servers is also carried out by the dialog machine 5.
  • The database 9 is likewise included on the fourth server 40.
  • the database can also be remote or operated by an external provider.
  • the special lexicons contained in at least one further database, which are required by the text output dispatcher 7, are preferably implemented once on a server.
  • the assignment to the individual servers of the further units, such as the acoustic preprocessing unit 6 or the text preprocessing unit 1, is not shown in FIG. 2.
  • a "distributed" assignment or an assignment to a single server is also possible.
  • An architecture according to CORBA is preferably used for the implementation of the individual units.
  • CORBA stands for Common Object Request Broker Architecture.
  • An ORB (Object Request Broker) enables a client - for example a received message - to send a command COMMAND (cf. e.g. method steps B1, D1) to a server object that can run on the same or on a different server.
  • the ORB is the instance that finds the server object, passes the parameters, calls the function there and, after processing, returns the result to the client.
  • The CORBA architecture contains an implementation-independent interface description language, IDL (Interface Definition Language).
  • Different languages can then be used for coding the client and the server object, for example Java for the client, C++ for the server object and a database query language for access to the database 9 or to the database containing the special lexicons.
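  • As a rough illustration, the following Java fragment sketches how a client could resolve such a server object via the ORB naming service. It uses the org.omg classes that were part of older JDKs (they have been removed from recent Java versions); the registration name "DialogMachine" and the narrowing to an application-specific interface generated from IDL are assumptions, since the IDL definitions are not reproduced in this text.

```java
// Hedged sketch of a CORBA name-service lookup with the classic org.omg API
// (available in older JDKs). Names are assumptions; the application-specific
// interface would be generated from an IDL definition not shown here.
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

public class OrbLookup {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);
        NamingContextExt naming = NamingContextExtHelper.narrow(
                orb.resolve_initial_references("NameService"));
        // The ORB locates the server object, which may run on the same or on a
        // different server; "DialogMachine" is an assumed registration name.
        org.omg.CORBA.Object serverObject = naming.resolve_str("DialogMachine");
        // A COMMAND call would now be issued through a stub narrowed from this
        // reference with the IDL-generated helper class (not reproduced here).
        System.out.println("Resolved server object: " + serverObject);
    }
}
```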
  • the object-oriented programming language Java can also be used for implementation on the various servers.
  • the interfaces 62 to the speech synthesis unit 2 can be implemented using SAPI (Speech Application Programming Interface) or, if the aforementioned programming language Java is selected, using JSAPI (Java Speech Application Programming Interface).
  • Java and JSAPI are characterized by the fact that the source program code - including the interface definitions - is translated during compilation into so-called "bytecode" that can run on any server of the computer system; the only requirement is that a corresponding runtime system, in this case a so-called "Java Virtual Machine", is installed which makes the bytecode executable on the server in question.
  • Using the language JSML (Java Speech Markup Language), a prosody analysis and a prosodic marking of the phrases and phrase parts present in text form can be carried out. It is thus also possible to mark the beginning and end of a sentence or section in the phrases to be synthesized. Individual parts of the text can be marked with the SAYAS element of JSML, whereby application-specific abbreviations are preferably expanded beforehand through ordinary text substitution.
  • The specific implementation is based, for example, on a delivery package that is provided in the class library "javax.speech".
  • Java has the advantage that the programming language is homogeneous both for the dialog machine 5 and for the interface to the speech synthesis unit 2, which simplifies development.
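  • The following Java sketch shows how method step D1 could drive a JSAPI 1.0 synthesizer with JSML text. It assumes that a JSAPI-compliant synthesis engine for the chosen locale is installed and registered with javax.speech.Central; the concrete attribute form of the SAYAS element in the string is an illustrative assumption based on the JSML idea described above.

```java
// Hedged sketch of method step D1 using JSAPI 1.0 (javax.speech). It assumes a
// JSAPI-compliant synthesis engine is installed; the exact JSML markup form of
// the SAYAS element shown below is an illustrative assumption.
import java.util.Locale;
import javax.speech.Central;
import javax.speech.synthesis.Synthesizer;
import javax.speech.synthesis.SynthesizerModeDesc;

public class SpeechOutputSketch {
    public static void main(String[] args) throws Exception {
        Synthesizer synthesizer =
                Central.createSynthesizer(new SynthesizerModeDesc(Locale.GERMAN));
        synthesizer.allocate();
        synthesizer.resume();

        // Voice prompt followed by the answer text; the grouped digits are
        // wrapped in a SAYAS element so that they are spoken digit by digit
        // (attribute form assumed, cf. the JSML specification).
        String jsml = "Die gewuenschte Nummer lautet: "
                    + "<SAYAS CLASS=\"literal\">0 71 495 32 86</SAYAS>";
        synthesizer.speak(jsml, null);

        synthesizer.waitEngineState(Synthesizer.QUEUE_EMPTY);
        synthesizer.deallocate();
    }
}
```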
  • A caller can define a so-called alarm window with an incoming message.
  • For example, a threshold can be defined for the state of a network element. If a threshold defined by the caller is exceeded, for example a certain number of alarms or alarms of a certain priority or higher, the communication system according to the invention transmits a response message to the aforementioned caller; e.g. "The Wülflingen 3 network element has 4 level 2 and higher alarms".
  • The medium of this response message can be a text message using the SMS (Short Message Service) service on a GSM terminal, or a call whose synthetic voice was generated in the speech synthesis unit 2.
  • the above-mentioned threshold value can also be applied to non-technical applications, for example a specific share price.
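  • A minimal sketch of such a threshold check is given below in Java. The Notifier interface, the field names and the wording of the generated response text are assumptions; the example values merely mirror the "Wülflingen 3" example above.

```java
// Minimal sketch of the alarm-window / threshold check described above.
// Interface, field names and the generated text are assumptions.
public class AlarmWindow {

    public interface Notifier {
        void notify(String subscriberAddress, String responseText); // SMS or synthesized call
    }

    private final int alarmCountThreshold;   // e.g. a certain number of alarms
    private final int minimumPriority;       // e.g. alarms of level 2 and higher
    private final Notifier notifier;

    public AlarmWindow(int alarmCountThreshold, int minimumPriority, Notifier notifier) {
        this.alarmCountThreshold = alarmCountThreshold;
        this.minimumPriority = minimumPriority;
        this.notifier = notifier;
    }

    /** Called whenever the monitored state of a network element changes. */
    public void onStateUpdate(String networkElement, int[] alarmPriorities, String subscriber) {
        int relevant = 0;
        for (int priority : alarmPriorities) {
            if (priority >= minimumPriority) {
                relevant++;
            }
        }
        if (relevant >= alarmCountThreshold) {
            notifier.notify(subscriber, "The " + networkElement + " network element has "
                    + relevant + " alarms of level " + minimumPriority + " and higher.");
        }
    }
}
```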
  • FIG. 3 shows the multimedia communication system of the embodiment according to FIG. 1 in a highly summarized form.
  • In this further embodiment, the switching unit 4 can be connected to the service providers 80 via the aforementioned interfaces 60 using the TCP/IP protocol.
  • The interface need not be implemented separately for each service provider, but can instead be routed, for example, to an Internet service provider through which the other service providers and/or information providers (content providers) can be addressed.
  • services implemented in the multimedia communication system can be subscribed to as follows.
  • The service providers shown in FIG. 3 with the reference symbol 80 represent, for example, airlines which present their offers via a web interface.
  • From a technical point of view, the reference numeral 80 according to FIG. 3 subsumes the fact that the offer data are kept in at least one database or database system that is external and can be queried, e.g. via the Internet as a transport medium.
  • The aforementioned service of the multimedia communication system generates inquiries to the aforementioned service providers 80 with a certain periodicity; the responses are received as messages in Format II (see description above) and are stored with a typed address.
  • A comparison is also made with the threshold value specified by the subscribing person. Only when the current value falls below the threshold value is a response message generated in this method step B1.
  • In method step D1, the response is transmitted as a text or voice message to the address assigned to the person concerned.
  • A presence application can also lie behind such an address, in which the current availability of the person concerned is stored, so that the response message is transmitted in the medium that is compatible with the type of device available to that person at the moment.
  • sending a response message - in whatever format - is referred to as notification.
  • the SIP protocol is advantageously used in particular for the implementation of such a service with an associated presence application.
  • The service provider can also provide for a query and response, likewise via a message exchange based on the SIP protocol. An example of such a sequence is listed below, the direction of the message being shown in FIG. 3 with the reference symbols "sub" and "notif":
    SUBSCRIBE sip:sipuaconfig@config.localdomain.com SIP/2.0
  • Subscription to such a service would not only be possible from a terminal 70, but can be realized by means of any message transmitted via the interfaces 60 and 68.
  • A particular advantage of this implementation is that the respective persons remain anonymous towards the queried service providers.
  • the communication system according to the invention fulfills a so-called trust center function.
  • Another advantage of the proposed implementation of such services is that the service providers or information providers 80 do not require any software adjustments to the databases and servers available there.
  • The AIR_TICKET_OFFER service described above is only an example; further possible variants of such services are, for example: i) an electronic lost property office, in which items can be reported as lost and, when such an item is handed in, the person(s) who reported the item in question as lost are notified; ii) notification of the delivery of a mail item; iii) notification of rental offers for apartments, in which the person subscribing to this service can indicate the size, the furnishings, a price category and the time of availability of the apartment.
  • Abbreviations: SMS - Short Message Service (GSM service); sub - SUBSCRIBE message in the SIP protocol.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention relates to a method and a communication system for generating response messages in response to incoming messages. The system according to the invention is composed of a dialog machine (5), a switching unit (4), a voice input module (53), and a voice output module (52). The incoming messages take the form of real-time calls, recorded calls, or text messages. A typed address is created from these messages and fed to the voice input module (53). A query addressed to a database (9) is generated by means of operations carried out in an acoustic preprocessing unit (6), a speech recognition unit (3), and a text output dispatcher (7). Response messages from this database (9) are either transmitted directly to the sender of the incoming message or routed via a text preprocessing unit (1), a speech synthesis unit (2), and a voice output dispatcher (8). Calls that cannot be processed are forwarded to agents. The modular construction of the communication system according to the invention makes it possible to use different speech synthesis or speech recognition products without re-integration.
EP02703576A 2001-03-13 2002-01-25 Procede et systeme de communication destines a produire des messages de reponse Withdrawn EP1370995A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP02703576A EP1370995A1 (fr) 2001-03-13 2002-01-25 Procede et systeme de communication destines a produire des messages de reponse

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP01106128A EP1241600A1 (fr) 2001-03-13 2001-03-13 Méthode et système de communication pour la production de réponses à des questions
EP01106128 2001-03-13
EP02703576A EP1370995A1 (fr) 2001-03-13 2002-01-25 Procede et systeme de communication destines a produire des messages de reponse
PCT/EP2002/000742 WO2002073480A1 (fr) 2001-03-13 2002-01-25 Procede et systeme de communication destines a produire des messages de reponse

Publications (1)

Publication Number Publication Date
EP1370995A1 true EP1370995A1 (fr) 2003-12-17

Family

ID=8176759

Family Applications (2)

Application Number Title Priority Date Filing Date
EP01106128A Withdrawn EP1241600A1 (fr) 2001-03-13 2001-03-13 Méthode et système de communication pour la production de réponses à des questions
EP02703576A Withdrawn EP1370995A1 (fr) 2001-03-13 2002-01-25 Procede et systeme de communication destines a produire des messages de reponse

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP01106128A Withdrawn EP1241600A1 (fr) 2001-03-13 2001-03-13 Méthode et système de communication pour la production de réponses à des questions

Country Status (3)

Country Link
US (1) US20040052342A1 (fr)
EP (2) EP1241600A1 (fr)
WO (1) WO2002073480A1 (fr)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040203629A1 (en) * 2002-03-04 2004-10-14 Dezonno Anthony J. Intelligent interactive voice response unit
GB0213021D0 (en) * 2002-06-07 2002-07-17 Hewlett Packard Co Telephone communication with silent response feature
US7058578B2 (en) * 2002-09-24 2006-06-06 Rockwell Electronic Commerce Technologies, L.L.C. Media translator for transaction processing system
DE10353980B4 (de) * 2003-11-19 2006-08-24 Combots Product Gmbh & Co. Kg Verfahren und Vorrichtung zur Unterstützung eines Empfängers von Sprachnachrichten
US20070140471A1 (en) * 2004-01-20 2007-06-21 Koninklijke Philips Electronics N.V. Enhanced usage of telephone in noisy surroundings
FR2865846A1 (fr) * 2004-02-02 2005-08-05 France Telecom Systeme de synthese vocale
GB2412191A (en) * 2004-03-18 2005-09-21 Issuebits Ltd A method of generating answers to questions sent from a mobile telephone
US8903820B2 (en) * 2004-06-23 2014-12-02 Nokia Corporation Method, system and computer program to enable querying of resources in a certain context by definition of SIP even package
JP4822761B2 (ja) * 2005-07-29 2011-11-24 富士通株式会社 メッセージ代行通知方法及び装置
US7861159B2 (en) * 2006-04-07 2010-12-28 Pp Associates, Lp Report generation with integrated quality management
ATE550749T1 (de) 2007-01-19 2012-04-15 Vodafone Plc System und verfahren zum automatischen antworten auf eine grosse anzahl eingehender nachrichte
US8943018B2 (en) 2007-03-23 2015-01-27 At&T Mobility Ii Llc Advanced contact management in communications networks
DE102008019967A1 (de) * 2008-04-21 2009-11-26 Navigon Ag Verfahren zum Betrieb eines elektronischen Assistenzsystems
WO2009146238A1 (fr) * 2008-05-01 2009-12-03 Chacha Search, Inc. Procédé et système pour une amélioration d'un traitement de requête
TWI387309B (zh) * 2009-09-04 2013-02-21 Interchan Global Ltd Information and voice query method
WO2013187610A1 (fr) * 2012-06-15 2013-12-19 Samsung Electronics Co., Ltd. Appareil terminal et méthode de commande de celui-ci
US9848082B1 (en) * 2016-03-28 2017-12-19 Noble Systems Corporation Agent assisting system for processing customer enquiries in a contact center

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2558682B2 (ja) * 1987-03-13 1996-11-27 株式会社東芝 知的ワ−クステ−シヨン
JP2001512862A (ja) * 1997-07-30 2001-08-28 ブリティッシュ・テレコミュニケーションズ・パブリック・リミテッド・カンパニー 通信装置
JPH11194899A (ja) * 1997-12-26 1999-07-21 Toshiba Corp ディスク記憶システム及び同システムに適用するデータ更新方法
US5950167A (en) * 1998-01-26 1999-09-07 Lucent Technologies Inc. Screen-less remote voice or tone-controlled computer program operations via telephone set
FI115434B (fi) * 1998-02-12 2005-04-29 Elisa Oyj Menetelmä puhelujen välittämiseksi
IL131135A0 (en) * 1999-07-27 2001-01-28 Electric Lighthouse Software L A method and system for electronic mail
FI116643B (fi) * 1999-11-15 2006-01-13 Nokia Corp Kohinan vaimennus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO02073480A1 *

Also Published As

Publication number Publication date
WO2002073480A1 (fr) 2002-09-19
US20040052342A1 (en) 2004-03-18
EP1241600A1 (fr) 2002-09-18

Similar Documents

Publication Publication Date Title
DE60305458T2 (de) System und verfahren zur bereitstellung einer nachrichtengestützten kommunikationsinfrastruktur für einen automatisierten anrufzentralenbetrieb
EP1370995A1 (fr) Procede et systeme de communication destines a produire des messages de reponse
DE69837578T2 (de) Verfahren und Gerät für automatische Sprachmodusselektion
DE69839068T2 (de) System und Verfahren zur automatischen Verarbeitung von Anruf und Datenübertragung
US7167830B2 (en) Multimodal information services
DE69735297T2 (de) Automatische sprache/text umsetzung für ein sprachnachrichtensystem
DE69531160T2 (de) Netzwerkbasierter kundiger assistent
DE69633883T2 (de) Verfahren zur automatischen Spracherkennung von willkürlichen gesprochenen Worten
DE69824508T2 (de) Fernsprechbasiertes Anweisungssystem
DE60316125T2 (de) Verfahren und betrieb eines sprach-dialogsystems
DE602004011610T2 (de) Web-anwendungsserver
DE10208295A1 (de) Verfahren zum Betrieb eines Sprach-Dialogsystems
DE10100725C1 (de) Automatisches Dialogsystem mit Datenbanksprachmodell
DE102009031304A1 (de) Zuordnung von Sytemanfragen zu SMS-Anwenderantworten
EP1454464B1 (fr) Systeme de conversion de donnees textuelles en sortie vocale
EP0920238B1 (fr) Méthode de transmission d'un numéro d'abonné d'un abonné désiré, et système d'annuaire téléphonique et terminal pour le même
DE10147549A1 (de) Vermittlungsverfahren zwischen Dialogsystemen
DE60018349T2 (de) Erzeugung von einem Namenwörterbuch aus aufgezeichneten telephonischen Grüssen für die Spracherkennung
EP1251680A1 (fr) Service d'annuaire à commande vocale pour connection a un Réseau de Données
DE60312651T2 (de) Vorrichtung und verfahren zur integrierten computergesteurten anrufverarbeitung in pakettelefonnetzen
WO1999016257A2 (fr) Procede et dispositif pour la traduction automatique de messages dans un systeme de communication
EP1305936B1 (fr) Dispositif et procede de transfert d'appel dans des reseaux de telecommunication
DE102007027363A1 (de) Verfahren zum Betreiben eines Voice-Mail-Systems
EP2822261B1 (fr) Procédé et agencement d'unitisation de files d'attente multimodales et recherche d'appels téléphoniques actuels pour un utilisateur dans un réseau de communication
DE102012213914A1 (de) Verfahren und System zum Bereitstellen einer Übersetzung eines Sprachinhalts aus einem ersten Audiosignal

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030725

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20060801