USRE42868E1 - Voice-operated services

Voice-operated services

Info

Publication number
USRE42868E1
USRE42868E1
Authority
US
United States
Prior art keywords
words
recognition
vocabulary
speech
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/930,395
Other languages
English (en)
Inventor
David J. Attwater
Steven J. Whittaker
Francis J. Scahill
Alison D. Simons
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology Inc filed Critical Cisco Technology Inc
Assigned to CISCO TECHNOLOGY, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CISCO RAVENSCOURT LLC
Assigned to CISCO RAVENSCOURT L.L.C.: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: BT RAVENSCOURT L.L.C.
Assigned to BT RAVENSCOURT LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BRITISH TELECOMMUNICATIONS PUBLIC LIMITED COMPANY
Application granted
Publication of USRE42868E1
Anticipated expiration
Legal status: Expired - Lifetime (current)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/24 Speech recognition using non-acoustical features
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M3/4931 Directory assistance systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M3/4936 Speech interaction details
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 Training
    • G10L2015/0631 Creating reference templates; Clustering
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • G10L2015/228 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/40 Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M2203/35 Aspects of automatic or semi-automatic exchanges related to information services provided via a voice call
    • H04M2203/355 Interactive dialogue design tools, features or methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2242/00 Special services or facilities
    • H04M2242/22 Automatic class or number identification arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42025 Calling or Called party identification service
    • H04M3/42034 Calling party identification service
    • H04M3/42059 Making use of the calling party identifier
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42025 Calling or Called party identification service
    • H04M3/42085 Called party identification service
    • H04M3/42093 Notifying the calling party of information on the called or connected party
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/42025 Calling or Called party identification service
    • H04M3/42085 Called party identification service
    • H04M3/42102 Making use of the called party identifier

Definitions

  • the present invention is concerned with automated voice-interactive services employing speech recognition, particularly, though not exclusively, for use over a telephone network.
  • a typical application is an enquiry service where a user is asked a number of questions in order to elicit replies which, after recognition by a speech recogniser, permit access to one or more desired entries in an information bank.
  • An example of this is a directory enquiry system in which a user, requiring the telephone number of a telephone subscriber, is asked to give the town name and road name of the subscriber's address, and the subscriber's surname.
  • a speech recognition apparatus comprising a store of data containing entries to be identified and information defining for each entry a connection with a word of a first set of words and a connection with a word of a second set of words; speech recognition means; and control means operable:
  • the speech recognition means is operable upon receipt of the first voice signal to generate for each identified word a measure of similarity with the first voice signal, and the control means is operable to generate for each word of the list a measure obtained from the measure(s) for the relevant word(s) of the first set (i.e. those identified words of the first set with which a word of the list has a common entry).
  • the speech recognition means is then operable upon receipt of the second voice signal to perform the identification of one or more words of the list in accordance with a recognition process weighted in dependence on the measures generated for the words of the list.
  • the apparatus may also include a store containing recognition data for all words of the second set and the control means is operable following the compilation of the list and before recognition of the word(s) of the list to mark in the recognition data store those items of data therein which correspond to the words not in the list or those which correspond to words which are in the list, whereby the recognition means may ignore all words so marked or, respectively, not marked.
  • the recognition data may be generated dynamically either before recognition or during recognition, the control means being operable following the compilation of the list to generate recognition data for each word of the list.
  • Methods for dynamically generating recognition data fall outside the scope of the present invention but will be clear to those skilled in this art.
  • control means is operable to select for output that entry or entries defined as connected both with an identified word(s) of the first set and an identified word of the second set.
  • the store of data may also contain information defining for each entry a connection with a word of a third set of words, the control means being operable:
  • means may be included to store at least one of the received voice signals, the apparatus being arranged to perform an additional recognition process in which the control means is operable:
  • the apparatus includes means to recognise a failure condition and to initiate the said additional recognition process only in the event of such failure being recognised.
  • the apparatus may comprise a telephone line connection; a speech recogniser for recognising spoken words received via the telephone line connection, by reference to recognition data representing a set of possible utterances; and means responsive to receipt via the telephone line connection of signals indicating the origin or destination of a telephone call to access stored information identifying a subset of the set of utterances and to restrict the recogniser operation to that subset.
  • a telephone apparatus comprises a telephone line connection; a speech recogniser for determining or verifying the identity of the speaker of spoken words received via the telephone line connection, by reference to recognition data corresponding to a set of possible speakers; and means responsive to receipt via the telephone line connection of signals indicating the origin or destination of a telephone call to access stored information identifying a subset of the set of speakers and to restrict the recogniser operation to that subset.
  • a telephone information apparatus comprises a telephone line connection; a speech recogniser for recognising spoken words received via the telephone line connection, by reference to one of a plurality of stored sets of recognition data; and means responsive to receipt via the telephone line connection of signals indicating the origin or destination of a telephone call to access stored information identifying one of the sets of recognition data and to supply this set to the recogniser.
  • the stored sets may, for example, correspond to different languages or regional accents or, say, two of the sets may correspond to the characteristics of different types of telephone apparatus, for instance the characteristics of a mobile telephone channel.
  • a recognition apparatus comprises
  • the patterns may represent speech and the recognition means be a speech recogniser.
  • a speech recognition apparatus comprises
  • the first set of signals are voice signals representing spelled versions of the words of the second set or initial portions thereof and the identifying means are formed by the speech recognition means operating by reference to stored recognition information for the said spelled voice signals.
  • the first set of signals may be signals consisting of tones and the identifying means is a tone recogniser.
  • the first set of signals may indicate the origin or destination of the receive signal.
  • a method of identifying entries in a store of data by reference to stored information defining connections between entries and words comprises
  • a speech recognition apparatus comprises
  • a method of speech recognition by reference to a stored set of words to be recognised comprises
  • the second signal may also be a speech signal, and the second signal may be recognised by reference to recognition data representing the letters of the alphabet, either individually or as sequences.
  • the second signal may be a signal consisting of tones generated by a keypad.
  • a method of speech recognition comprises
  • FIG. 1 shows schematically the architecture of a directory enquiry system
  • FIG. 2 is a flow chart illustrating the operation of the directory enquiry system of FIG. 1 ;
  • FIG. 2a is a flow chart illustrating a second embodiment of operation of the directory enquiry system of FIG. 1 ;
  • FIG. 3 is a flow chart illustrating the use of CLI in the operation of the directory enquiry system of FIG. 1 ;
  • FIG. 3a includes a further information gathering step for use in the operation of the directory enquiry system of FIG. 1 ;
  • FIG. 4 is a flow chart illustrating a further mode of operation of the directory enquiry system of FIG. 1 .
  • the embodiment of the invention addresses the same directory enquiry task as was discussed in the introduction. It operates by firstly asking an enquirer for a town name and, using a speech recogniser, identifies as “possible candidates” two or more possible town names. It then asks the enquirer for a road name and recognition of the reply to this question then proceeds by reference to stored data pertaining to all road names which exist in any of the candidate towns. Similarly, the surname is asked for, and a recognition stage then employs recognition data for all candidate road names in candidate towns. The number of candidates retained at each stage can be fixed, or (preferably) all candidates meeting a defined acceptance criterion—e.g. having a recognition score above a defined threshold—may be retained.
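  • By way of illustration only, the following Python sketch shows the staged, constrained recognition just described: each stage recognises against a vocabulary derived from the candidates retained at the previous stage, and candidates are kept when they meet an acceptance threshold. The recogniser, directory and prompt interfaces (recogniser.score, directory.roads_in, ask) are assumptions for the sketch, not structures taken from the patent.

```python
THRESHOLD = 0.5  # assumed acceptance score for retaining a candidate

def recognise(recogniser, utterance, vocabulary):
    """Return every vocabulary word whose recognition score meets the threshold."""
    scored = recogniser.score(utterance, vocabulary)  # hypothetical: {word: score}
    return {word for word, score in scored.items() if score >= THRESHOLD}

def directory_enquiry(recogniser, directory, ask):
    # Stage 1: town name, recognised against the full town vocabulary.
    towns = recognise(recogniser, ask("Which town?"), directory.all_towns())

    # Stage 2: road name, restricted to roads found in any candidate town.
    roads = recognise(recogniser, ask("Which road?"), directory.roads_in(towns))

    # Stage 3: surname, restricted to subscribers on candidate roads in candidate towns.
    surnames = recognise(recogniser, ask("Which surname?"),
                         directory.surnames_in(towns, roads))

    # Entries connected with a candidate town, a candidate road and a candidate surname.
    return directory.entries_matching(towns, roads, surnames)
```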
  • a speech synthesiser 1 is provided to give announcements to a user via a telephone line interface 2, by reference to stored, fixed messages in a message data store 3, or from variable information supplied to it by a main control unit 4.
  • Incoming speech signals from the telephone line interface 2 are conducted to a speech recogniser 5 which is able to recognise spoken words by reference to town name, road name or surname recognition data in recognition data stores 6, 7 and 8 respectively.
  • a main directory database 9 contains, for each telephone subscriber in the area covered by the directory enquiry service, an entry containing the name, address and telephone number of that subscriber, in text form.
  • the town name recognition data store 6 contains, in text form, the names of all the towns included in the directory database 9 , along with stored data to enable the speech recogniser 5 to recognise those town names in the speech signal received from the telephone line interface 2 .
  • any type of speech recogniser may be used, but for the purposes of the present description it is assumed that the recogniser 5 operates by recognising distinct phonemes in the input speech, which are decoded by reference to stored data in the store 6 representing a decoding tree structure constructed in advance from phonetic translations of the town names stored in the store 6 , decoded by means of a Viterbi algorithm.
  • the stores 7 , 8 for road name recognition data and surname recognition data are organised in the same manner.
  • the surname recognition data store 8 contains data for all the surnames included in the directory database 9 , it is configurable by the control unit 4 to limit the recognition process to only a subset of the names, typically by flagging the relevant parts of the recognition data so that the “recognition tree” is restricted to recognising only those names within a desired subset of the names.
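  • A minimal sketch of the kind of decoding-tree data described above, assuming a simple trie keyed on phonemes with textual labels attached to the leaves and an enable flag for restricting recognition to a subset; the class, the restriction method and the example pronunciations are illustrative assumptions, not the patent's own structures.

```python
class TreeNode:
    def __init__(self):
        self.children = {}   # phoneme -> TreeNode
        self.labels = []     # textual labels (e.g. town names) ending at this node
        self.active = True   # cleared to exclude a word from recognition

def build_tree(pronunciations):
    """pronunciations: iterable of (label, phoneme sequence) pairs."""
    root = TreeNode()
    for label, phonemes in pronunciations:
        node = root
        for phoneme in phonemes:
            node = node.children.setdefault(phoneme, TreeNode())
        node.labels.append(label)            # a leaf may carry more than one label
    return root

def restrict_to(root, allowed_labels):
    """Flag nodes so that only labels in allowed_labels can be returned."""
    stack = [root]
    while stack:
        node = stack.pop()
        node.active = not node.labels or any(l in allowed_labels for l in node.labels)
        stack.extend(node.children.values())

# Illustrative (not authoritative) pronunciations:
# tree = build_tree([("Norwich", ["n", "o", "r", "i", "ch"]),
#                    ("Harwich", ["h", "a", "r", "i", "ch"])])
```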
  • Each entry in the town data store 6 contains, as mentioned above, text corresponding to each of the town names appearing in the database 9 , to act as a label to link the entry in the store 6 to entries in the database 9 (though other kinds of label may be used if preferred).
  • the store 6 may contain an entry for every town name that the user might use to refer to geographical locations covered by the database, whether or not all these names are actually present in the database. Noting that some town names are not unique (there are four towns in the UK called Southend), and that some town names carry the same significance (e.g. Hammersmith, which is part of London),
  • an equivalence data store 39 is also provided, containing such equivalents, which can be consulted following each recognition of a town name, to return additional possibilities to the set of town names considered to be recognised. For example if “Hammersmith” is recognised, London is added to the set; if “Southend” is recognised, then Southend-on-Sea, Southend (Campbeltown), Southend (Swansea) and Southend (Reading) are added.
  • the equivalence data store 39 could, if desired, contain similar information for roads and surnames, or first names if these are used; for example Dave and David are considered to represent the same name.
  • the vocabulary equivalence data store 39 may act as a translation between labels used in the name stores 6 , 7 , 8 and the labels used in the database (whether or not the labels are names in text form).
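  • A small sketch of how the equivalence data store 39 might be consulted after each recognition; the example entries are taken from the text above, while the dictionary representation and function name are assumptions for illustration.

```python
EQUIVALENTS = {
    "Hammersmith": ["London"],
    "Southend": ["Southend-on-Sea", "Southend (Campbeltown)",
                 "Southend (Swansea)", "Southend (Reading)"],
    "Dave": ["David"],
}

def expand_with_equivalents(recognised_names):
    """Return the recognised names plus any stored equivalents."""
    expanded = set(recognised_names)
    for name in recognised_names:
        expanded.update(EQUIVALENTS.get(name, []))
    return expanded
```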
  • each leaf in the tree may have one or more textual labels attached to it.
  • the recogniser should preferably return only textual labels in that list, not labels associated with a pronunciation associated with a label in the list that are not themselves in the list.
  • the system operation is illustrated by means of the flowchart set out in FIG. 2 .
  • the process starts ( 10 ) upon receipt of an incoming telephone call signalled to the control unit 4 by the telephone line interface 2 ; the control unit responds by instructing the speech synthesiser 1 to play ( 11 ) a message stored in the message store 3 requesting the caller to give the name of the required town.
  • the caller's response is received ( 12 ) by the recogniser.
  • the recogniser 5 then performs its recognition process ( 13 ) with reference to the data stored in the store 6 and communicates to the control unit 4 the name of the town which most clearly resembles the received reply or (more preferably) the names of all those towns which meet a prescribed threshold of similarity with the received reply; suppose that four town names meet this criterion.
  • the control unit 4 responds by instructing the speech synthesiser to play ( 14 ) a further message from the message data store 3 and meanwhile accesses ( 15 ) the directory database 9 to compile a list of all road names which are to be found in any of the geographical locations corresponding to those four town names and also any additional location entries obtained by accessing the equivalence data store 39 . It then uses ( 16 ) this information to update the road name recognition data store 7 so that the recogniser 5 is able to recognise only the road names in that list.
  • the next stage is that a further response, relating to the road name, is received ( 17 ) from the caller and is processed by the recogniser 5 utilising the data store 7 ; suppose that five road names meet the recognition criterion.
  • the control unit 4 then instructs the playing ( 19 ) of a further message asking for the name of the desired telephone subscriber and meanwhile ( 20 ) retrieves from the database 9 a list of the surnames of all subscribers residing in roads having any of the five road names in any of the four geographical locations (and any equivalents), and updates the surname recognition data store 8 in a similar manner to that described above for the road name recognition data store.
  • the surname may be recognised ( 23 ) by reference to the data in the surname recognition data store.
  • the database 9 may contain more than one entry for the same name in the same road in the same town. Therefore at step 24 the number of directory entries which have one of the recognised surnames and one of the recognised road names and one of the recognised town names is tested. If the number is manageable, for example if it is three or fewer, the control means instructs ( 25 ) the speech synthesiser to play an announcement from the message data store 3 , followed by recitation of the name, address and telephone number of each entry, generated by the speech synthesiser 1 using text-to-speech synthesis, and the process is complete ( 26 ). If, on the other hand, the number of entries is excessive then further steps 27 , to be discussed further below, will be necessary in order to meet the caller's enquiry.
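  • A sketch of the test at step 24, assuming directory entries are records with town, road, surname and number fields; the record layout, helper names and the play callback are illustrative assumptions.

```python
MAX_READABLE = 3  # "three or fewer" entries are read out directly

def matching_entries(entries, towns, roads, surnames):
    """Directory entries connected with a candidate town, road and surname."""
    return [e for e in entries
            if e.town in towns and e.road in roads and e.surname in surnames]

def conclude_enquiry(entries, towns, roads, surnames, play):
    hits = matching_entries(entries, towns, roads, surnames)
    if len(hits) <= MAX_READABLE:
        for e in hits:
            play(f"{e.surname}, {e.road}, {e.town}: {e.number}")
        return True      # enquiry complete (step 26)
    return False         # excessive entries: further steps (27) are needed
```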
  • where the recogniser is of the type (e.g. recognisers using Hidden Markov models) which requires setting up for a particular vocabulary, there are two options for updating the relevant store to limit the recogniser's operation to words in the list.
  • One is to start with a fully set-up recogniser, and disable all the words not in the list; the other is to clear the relevant recognition data store and set it up afresh (either completely, or by adding words to a permanent basic set).
  • some recognisers do not store recognition data for all words which may be recognised.
  • such recognisers generally have a store of textual information relating to the words that may be recognised but do not prestore data to enable the speech recogniser to recognise words in a received signal.
  • in such dynamic recognisers the recognition data is generated either immediately before or during recognition.
  • the first option requires large data stores but is relatively inexpensive computationally for any list size.
  • the second option is generally computationally expensive for large lists but requires much smaller data stores and is useful when there are frequent data changes. Generally the first option would be preferred, with the second option being invoked in the case of a short list, or where the data change frequently.
  • the criterion for limiting the number of recognition ‘hits’ at steps 13 , 18 or 23 may be that all candidates are retained which meet some similarity criterion, though other criteria such as always retaining a fixed number of candidates may be chosen if preferred. It may be, in the earlier recognition stages, that the computational load and effect on recognition performance of retaining a large town (say) with a low score is not considered to be justified, whereas retaining a smaller town with the same score might be. In this case the scores of a recognised word may be weighted by factors dependent on the number of entries referencing that word, in order to achieve such differential selection.
  • a list of words (such as road names) to be recognised is generated based on the results of an earlier recognition of a word (the town name).
  • it is not essential that the units in the earlier recognition step or in the list be single words; they could equally well be sequences of words.
  • One possibility is a sequence of the names of the letters of the alphabet, for example a list of words for a town name recognition step may be prepared from an earlier recognition of the answer to the question “please spell the first four letters of the town name.” If recording facilities are provided (as discussed further below) it is not essential that the order of recognition be the same as the order of receipt of the replies (it being more natural to ask for the spoken word first, followed by the spelled version, though it is preferred to process them in the opposite sequence).
  • a spelling of the town name is requested 41 allowing all permissible spellings of all town names in the recognition vocabulary. Following a confident recognition 43 two spellings are recognised. These two town names may be considered more confident than the four spoken town names recognised previously, but a comparison 44 of both lists may reveal one or more common town names in both lists. If this is so 46 then a very high confidence of success may be inferred for these common town names and the enquiry may proceed, for example, in the same manner as FIG. 2 using these common towns to prepare the road name recognition 15 .
  • the two spelt towns may be retained 47 for use in the next stage which may be preparing the road name recogniser 15 with the two town names as shown in the diagram, or may be a different processing step not shown in FIG. 2a , for example a confirmation of the more confident of the two town names with the user in order to increase the system confidence before a subsequent request for information is made.
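  • A sketch of the comparison 44 in FIG. 2a, assuming the two confidently recognised spellings are available as letter prefixes that can be expanded to full town names; the helper names and the returned confidence tags are illustrative assumptions.

```python
def towns_matching_prefix(prefix_letters, all_towns):
    """Expand a recognised spelling (e.g. the first few letters) to full town names."""
    prefix = "".join(prefix_letters).lower()
    return {town for town in all_towns if town.lower().startswith(prefix)}

def reconcile(spoken_candidates, spelled_prefixes, all_towns):
    spelled_candidates = set()
    for prefix in spelled_prefixes:          # e.g. the two confident spellings
        spelled_candidates |= towns_matching_prefix(prefix, all_towns)

    common = spoken_candidates & spelled_candidates
    if common:
        return common, "high"                # common names found (46): proceed
    return spelled_candidates, "low"         # otherwise retain the spelt towns (47)
```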
  • it is not essential that the responses to be recognised be discrete responses to discrete questions; they could be words extracted by a recogniser from a continuous sentence, for systems which work in this way.
  • in a directory enquiry system this may be a signal indicating the origin of a telephone call, such as the calling line identity (CLI) or a signal identifying the originating exchange.
  • this identification of the calling line or exchange may be used to access stored information compiled to indicate the enquiry patterns of the subscriber in question or of subscribers in that area (as the case may be).
  • a sample of directory enquiries in a particular area might show that 40% of such calls were for numbers in the same exchange area and 20% for immediately adjacent areas.
  • Separate statistical patterns might be compiled for business or residential lines, or for different times of day, or other observed trends such as global usage statistics of a service that are not related to the nature or location of the originating line.
  • FIG. 1 additionally shows a CLI detector 20 , (used here only to indicate the originating exchange) which is used to select from a store 21 a list of likely towns for enquiries from that exchange, to be used by the control unit 4 to truncate the “town name” recognition, as indicated in the flowchart of FIG. 3 , where the calling line indicator signal is detected at step 10 a, and selects ( 12 a) a list of town names from the store 21 which is then used ( 12 b) to update the town name recognition store 6 prior to the town name recognition step 13 . The remainder of the process is not shown as it is the same as that given in FIG. 2 .
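  • A sketch of how the CLI-to-town store 21 might be consulted to restrict the town-name vocabulary, assuming the originating exchange is taken from the leading digits of the CLI; the example mapping, the digit count and the store interface are invented for illustration. The same pattern applies when the dialled (destination) number, rather than the CLI, selects the subset.

```python
# Invented example data: originating exchange (leading CLI digits) -> likely towns.
LIKELY_TOWNS_BY_EXCHANGE = {
    "01603": ["Norwich", "Wymondham"],
    "01255": ["Harwich", "Clacton-on-Sea"],
}

def restrict_town_vocabulary(cli, town_store, full_town_list):
    exchange = cli[:5] if cli else None         # assumption: exchange = leading digits
    likely = LIKELY_TOWNS_BY_EXCHANGE.get(exchange)
    if likely:
        town_store.restrict_to(likely)          # small vocabulary covering most enquiries
    else:
        town_store.restrict_to(full_town_list)  # no CLI data: use the full vocabulary
```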
  • in FIG. 3a the spoken town name is asked for 11 , and the CLI is detected 10 a. As in FIG. 3 , the CLI is then related to town names commonly requested by callers with that CLI identity 12 a. These town names update the spoken town name store 12 b. This process is identical to that shown in FIG. 3 so far. Additionally, as the speech is gathered for recognition it is stored for later re-recognition 37 . The restricted town name set used in the recognition 13 will typically be a small vocabulary covering a significant proportion of enquiries. If a word within this vocabulary is spoken and confidently recognised 48 then the enquiry may immediately use this recognised town or towns to prepare the road name store and continue as described in FIG. 2 .
  • an additional message 49 is played to ask the caller for more information, which in this case is the first four letters of the town name.
  • an additional re-recognition of the spoken town name 53 may be performed which can recognise any of the possible town names in the directory.
  • the caller may be spelling in the first four letters of the town name 50 and two spellings 51 have been confidently recognised. These two spellings are then expanded to the full town names which match them 52 .
  • a comparison 55 identical in purpose to that described in FIG. 2a ( 44 ) may then be performed between the five town names derived from the two spellings and the four re-recognised town names. If common words are found in these two sets (only one common word is assumed in this example), then this town name may confidently be assumed to be the correct one and the road name recognition data store 7 may be prepared from it and the enquiry proceeds as shown in FIG. 2 .
  • the spoken recognition 53 will be in error and no common words will be found.
  • the recognition of the town name 53 , and its subsequent comparison 55 may be considered optional and omitted.
  • the spoken town store will be updated 57 with the five towns derived from the two spellings 52 and the spoken town name re-recognised again 58 .
  • This town name may be used to configure the road name recognition data store 7 and the enquiry proceeds as shown in FIG. 2 .
  • the deliberate restriction of a vocabulary to only the very most likely words as described above need not necessarily depend on CLI.
  • the preparation of the road name vocabulary based on the recognised town names is itself an example of this, and the approach of asking for additional information, as shown in FIG. 3a , may be used if any such restricted recognition results are not confident.
  • Global observed or postulated behaviour can also be used to restrict a vocabulary (e.g. the town store) in a similar way to CLI derived information, as can signals indicating the destination of a call. For example, callers may be encouraged to dial different access numbers for particular information. On receipt of a call by a common apparatus for all the information, the dialed number determines the subset of the vocabulary to be used in subsequent operation of the apparatus. The operation of the apparatus would then continue similarly as described above with relation to CLI.
  • the re-recognition of a gathered word that has been constrained by additional information could be based on any kind of information, for example DTMF entry via the telephone keypad, or a yes/no response to a question restricting the scope of the search (e.g. “Please say yes or no: does the person live in a city?”).
  • This additional information could even be derived from the CLI using a different area store 21 based on different assumptions to the previously used one.
  • no account is taken of the relative probability of recognition, for example if the town recognition step 13 recognises town names Norwich and Harwich, then when, at road recognition step 18 , the recogniser has to evaluate the possibility that the caller said “Wright Street” (which we suppose to be in Norwich) or “Rye Street” (in Harwich), no account is taken of the fact that the spoken town bore a closer resemblance to “Norwich” than it did to “Harwich”.
  • the recogniser may be arranged to produce (in known manner) figures or “scores” indicating the relative similarity of each of the candidates identified by the recogniser to the original utterance and hence the supposed probability of it being the correct one.
  • These scores may then be retained whilst a search is made in the directory database to derive a list of the vocabulary items of the next desired vocabulary that are related to the recognised words. These new vocabulary items may then be given the scores that the corresponding matching word attained. In the case where a word came from a match with more than one recognised word of the previous vocabulary, the maximum score of the two may be selected for example. These scores may then be fed as a priori probabilities to the next recognition stage to bias the selection. This may be implemented in the process depicted in FIG. 2 as follows.
  • Step 13 : the recogniser produces for each town a score—e.g.
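  • A sketch of carrying scores forward as a priori weights, assuming (as stated above) that a word connected with more than one recognised word of the previous vocabulary takes the maximum of their scores; the function name and data layout are illustrative. The resulting priors would then be fed to the next recognition stage, for example by adding them to log-domain acoustic scores before candidates are selected.

```python
def propagate_scores(prev_scores, connections):
    """
    prev_scores: {word of previous vocabulary: recognition score},
                 e.g. {"Norwich": -12.3, "Harwich": -15.8}
    connections: {previous word: set of next-vocabulary words}, taken from
                 the directory database
    returns:     {next-vocabulary word: prior score} to bias the next stage
    """
    priors = {}
    for prev_word, score in prev_scores.items():
        for next_word in connections.get(prev_word, ()):
            priors[next_word] = max(priors.get(next_word, float("-inf")), score)
    return priors
```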
  • a failure condition can be identified by noting low recogniser output “scores”, or excessive numbers of recognised words all having similar scores (whether by reference to local scores or to weighted scores), or by comparing the scores with those produced by a recogniser comparing the speech to out-of-vocabulary models.
  • a failure condition may arise in an unconstrained search like that of the town name recognition of step 13 in FIG. 2 . In this case it may be that better results might be obtained by performing (for example) the road name recognition step first (unconstrained) and compiling a list of all town names containing the roads found, to constrain a subsequent town name recognition step. Or it may arise in a constrained search such as that of step 13 in FIG. 3 or steps 18 and 23 in FIG. 2 , where perhaps the constraint has removed the correct candidate from the recognition set; in this case removing the constraint—or applying a different one—may improve matters.
  • one possible approach is to make provision for recording the caller's responses, and in the event of failure, reprocessing them using the steps set out in FIG. 2 (except the “play message” steps 11 , 14 , 19 ) but with the original sequence town name/road name/surname modified. There are of course six permutations of these. One could choose that one (or more) of these which experience shows to be the most likely to produce an improvement. The result of such a reprocessing could be used alone, or could be combined with the previous result, choosing for output those entries identified by both processes.
  • Another possibility is to perform an additional search omitting one stage, and comparing the results as for the ‘spelled input’ case.
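  • A sketch of that recovery strategy, assuming the recorded responses can be re-run through the FIG. 2 process (without the "play message" steps) in a different field order, and that each pass returns a set of directory entries; run_enquiry is a placeholder for that process, and the second ordering is just one of the six permutations.

```python
def recover_from_failure(recordings, directory, run_enquiry):
    """run_enquiry is assumed to return a set of directory entries."""
    first = run_enquiry(recordings, order=("town", "road", "surname"), directory=directory)

    # One of the six permutations, chosen (by experience) as most likely to help.
    second = run_enquiry(recordings, order=("road", "town", "surname"), directory=directory)

    common = first & second
    return common if common else (first | second)   # use both if nothing is common
```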
  • processing using two (or more) such sequences could be performed routinely (rather than only under failure conditions); to reduce delays an additional sequence might commence before completion of the first; for example (in FIG. 4 ) an additional, unconstrained “road name” search 30 could be performed (without recording the road name) during the “which surname” announcement.
  • a list of surnames is compiled ( 31 ) and the surname store updated ( 32 ).
  • a town name list may be compiled ( 34 ) and the town name store updated ( 35 ).
  • the spoken town name, previously stored at step 37 may be recognised.
  • the results of the two recognition processes may then be compiled, suitably by selecting ( 38 ) those entries which are identified by both processes. Alternatively, if no common entries are found, the entries found by one or the other or both of the processes may be used. The remaining steps shown in FIG. 4 are identical to those in FIG. 2 .
  • the origin of the telephone call as given by the CLI may be used to extract from a store the identity of a number of individuals known to the system to be related to this origin. This store may also contain representative speech which is already verified to have come from these individuals. If there is only one individual authorised to access the given service from the designated origin, or the caller has made a specific claim to identity by means of additional information (e.g.
  • a spoken utterance may be gathered from the caller and compared with the stored speech patterns associated with that claimed identity in order to verify that the person is who they say that they are.
  • the identity of the caller may be determined by gathering a spoken utterance from the caller and comparing it with stored speech patterns for each of the individuals in turn, selecting the most likely candidate that matches with a certain degree of confidence.
  • the CLI may also be used to access a store relating speech recognition models to the origin of the call. These speech models may then be loaded into the stores used by the speech recogniser.
  • a call originating from a cellular telephone for example, may be dealt with using speech recognition models trained using cellular speech data.
  • a similar benefit may be derived for regional accents or different languages in a speech recognition system.
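  • Two small sketches of the CLI-driven ideas above: selecting an acoustic model set matched to the likely channel (e.g. cellular versus fixed line), and verifying or identifying a caller against speech patterns stored for the individuals associated with that CLI. All stores, thresholds and interfaces here are assumptions for illustration, not the patent's own components.

```python
def select_models(cli, model_store, channel_by_cli):
    """Pick recognition models matched to the likely channel for this CLI."""
    channel = channel_by_cli.get(cli, "landline")     # e.g. "cellular" or "landline"
    return model_store.load(channel)

def identify_caller(cli, utterance, speaker_store, verifier, threshold=0.8):
    """Verify or identify the caller among individuals associated with this CLI."""
    best_person, best_score = None, 0.0
    for person in speaker_store.individuals_for(cli):
        score = verifier.match(utterance, speaker_store.patterns(person))
        if score > best_score:
            best_person, best_score = person, score
    return best_person if best_score >= threshold else None
```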

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Telephonic Communication Services (AREA)
  • Machine Translation (AREA)
  • Computer And Data Communications (AREA)
  • Navigation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
US09/930,395 1994-10-25 1995-10-25 Voice-operated services Expired - Lifetime USRE42868E1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP94307843 1994-10-25
AT94307843 1994-10-25
PCT/GB1995/002524 WO1996013030A2 (en) 1994-10-25 1995-10-25 Voice-operated services

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/817,673 Reissue US5940793A (en) 1994-10-25 1995-10-25 Voice-operated services

Publications (1)

Publication Number Publication Date
USRE42868E1 true USRE42868E1 (en) 2011-10-25

Family

ID=8217890

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/930,395 Expired - Lifetime USRE42868E1 (en) 1994-10-25 1995-10-25 Voice-operated services
US08/817,673 Ceased US5940793A (en) 1994-10-25 1995-10-25 Voice-operated services

Family Applications After (1)

Application Number Title Priority Date Filing Date
US08/817,673 Ceased US5940793A (en) 1994-10-25 1995-10-25 Voice-operated services

Country Status (14)

Country Link
US (2) USRE42868E1 (de)
EP (2) EP1172994B1 (de)
JP (1) JPH10507535A (de)
KR (1) KR100383352B1 (de)
CN (1) CN1249667C (de)
AU (1) AU707122B2 (de)
CA (3) CA2372671C (de)
DE (2) DE69535797D1 (de)
ES (1) ES2171558T3 (de)
FI (2) FI971748A0 (de)
MX (1) MX9702759A (de)
NO (1) NO971904D0 (de)
NZ (2) NZ294296A (de)
WO (1) WO1996013030A2 (de)

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6385312B1 (en) 1993-02-22 2002-05-07 Murex Securities, Ltd. Automatic routing and information system for telephonic services
CN1249667C (zh) * 1994-10-25 2006-04-05 英国电讯公司 声控服务
US5903864A (en) * 1995-08-30 1999-05-11 Dragon Systems Speech recognition
US5896444A (en) * 1996-06-03 1999-04-20 Webtv Networks, Inc. Method and apparatus for managing communications between a client and a server in a network
US5901214A (en) 1996-06-10 1999-05-04 Murex Securities, Ltd. One number intelligent call processing system
US5987408A (en) * 1996-12-16 1999-11-16 Nortel Networks Corporation Automated directory assistance system utilizing a heuristics model for predicting the most likely requested number
DE19709518C5 (de) * 1997-03-10 2006-05-04 Harman Becker Automotive Systems Gmbh Verfahren und Vorrichtung zur Spracheingabe einer Zieladresse in ein Zielführungssystem im Echtzeitbetrieb
GR1003372B (el) * 1997-09-23 2000-05-04 Συσκευη καταχωρησης ψηφιοποιημενων φωνητικων πληροφοριων και ανακτησης τους μεσω τηλεφωνου με αναγνωριση φωνης
US6404876B1 (en) 1997-09-25 2002-06-11 Gte Intelligent Network Services Incorporated System and method for voice activated dialing and routing under open access network control
KR100238189B1 (ko) * 1997-10-16 2000-01-15 윤종용 다중 언어 tts장치 및 다중 언어 tts 처리 방법
US6112172A (en) * 1998-03-31 2000-08-29 Dragon Systems, Inc. Interactive searching
US6629069B1 (en) 1998-07-21 2003-09-30 British Telecommunications A Public Limited Company Speech recognizer using database linking
US6778647B1 (en) * 1998-11-13 2004-08-17 Siemens Information And Communication Networks, Inc. Redundant database storage of selected record information for an automated interrogation device
US6502075B1 (en) * 1999-03-26 2002-12-31 Koninklijke Philips Electronics, N.V. Auto attendant having natural names database library
US6314402B1 (en) * 1999-04-23 2001-11-06 Nuance Communications Method and apparatus for creating modifiable and combinable speech objects for acquiring information from a speaker in an interactive voice response system
US6421672B1 (en) * 1999-07-27 2002-07-16 Verizon Services Corp. Apparatus for and method of disambiguation of directory listing searches utilizing multiple selectable secondary search keys
DE19944608A1 (de) * 1999-09-17 2001-03-22 Philips Corp Intellectual Pty Erkennung einer in buchstabierter Form vorliegenden Sprachäußerungseingabe
US6868385B1 (en) * 1999-10-05 2005-03-15 Yomobile, Inc. Method and apparatus for the provision of information signals based upon speech recognition
GB2362746A (en) * 2000-05-23 2001-11-28 Vocalis Ltd Data recognition and retrieval
US20020107918A1 (en) * 2000-06-15 2002-08-08 Shaffer James D. System and method for capturing, matching and linking information in a global communications network
US6748426B1 (en) * 2000-06-15 2004-06-08 Murex Securities, Ltd. System and method for linking information in a global computer network
DE10035523A1 (de) * 2000-07-21 2002-01-31 Deutsche Telekom Ag Virtuelles Testbett
JP4486235B2 (ja) * 2000-08-31 2010-06-23 パイオニア株式会社 音声認識装置
JP2002108389A (ja) * 2000-09-29 2002-04-10 Matsushita Electric Ind Co Ltd 音声による個人名称検索、抽出方法およびその装置と車載ナビゲーション装置
AU2002218274A1 (en) * 2000-11-03 2002-05-15 Voicecom Ag Robust voice recognition with data bank organisation
DE10100725C1 (de) 2001-01-10 2002-01-24 Philips Corp Intellectual Pty Automatisches Dialogsystem mit Datenbanksprachmodell
CA2440463C (en) * 2001-04-19 2010-02-02 Simon Nicholas Downey Speech recognition
DE10119677A1 (de) * 2001-04-20 2002-10-24 Philips Corp Intellectual Pty Verfahren zum Ermitteln von Datenbankeinträgen
US6671670B2 (en) 2001-06-27 2003-12-30 Telelogue, Inc. System and method for pre-processing information used by an automated attendant
GB2376335B (en) * 2001-06-28 2003-07-23 Vox Generation Ltd Address recognition using an automatic speech recogniser
US7124085B2 (en) * 2001-12-13 2006-10-17 Matsushita Electric Industrial Co., Ltd. Constraint-based speech recognition system and method
US7177814B2 (en) * 2002-02-07 2007-02-13 Sap Aktiengesellschaft Dynamic grammar for voice-enabled applications
DE10207895B4 (de) * 2002-02-23 2005-11-03 Harman Becker Automotive Systems Gmbh Verfahren zur Spracherkennung und Spracherkennungssystem
JP3799280B2 (ja) * 2002-03-06 2006-07-19 キヤノン株式会社 対話システムおよびその制御方法
US7242758B2 (en) * 2002-03-19 2007-07-10 Nuance Communications, Inc System and method for automatically processing a user's request by an automated assistant
KR20050056242A (ko) 2002-10-16 2005-06-14 코닌클리케 필립스 일렉트로닉스 엔.브이. 디렉토리 어시스턴트 방법 및 장치
US7603291B2 (en) 2003-03-14 2009-10-13 Sap Aktiengesellschaft Multi-modal sales applications
CN100353417C (zh) * 2003-09-23 2007-12-05 摩托罗拉公司 用于提供文本消息的方法和装置
US8200495B2 (en) * 2005-02-04 2012-06-12 Vocollect, Inc. Methods and systems for considering information about an expected response when performing speech recognition
CA2597803C (en) * 2005-02-17 2014-05-13 Loquendo S.P.A. Method and system for automatically providing linguistic formulations that are outside a recognition domain of an automatic speech recognition system
US8533485B1 (en) 2005-10-13 2013-09-10 At&T Intellectual Property Ii, L.P. Digital communication biometric authentication
KR101063607B1 (ko) * 2005-10-14 2011-09-07 주식회사 현대오토넷 음성인식을 이용한 명칭 검색 기능을 가지는 네비게이션시스템 및 그 방법
US8458465B1 (en) 2005-11-16 2013-06-04 AT&T Intellectual Property II, L. P. Biometric authentication
US8060367B2 (en) * 2007-06-26 2011-11-15 Targus Information Corporation Spatially indexed grammar and methods of use
DE102007033472A1 (de) 2007-07-18 2009-01-29 Siemens Ag Verfahren zur Spracherkennung
US20090210233A1 (en) * 2008-02-15 2009-08-20 Microsoft Corporation Cognitive offloading: interface for storing and composing searches on and navigating unconstrained input patterns
EP2096412A3 (de) * 2008-02-29 2009-12-02 Navigon AG Method for operating a navigation system
JP5024154B2 (ja) * 2008-03-27 2012-09-12 富士通株式会社 関連付け装置、関連付け方法及びコンピュータプログラム
US8358747B2 (en) 2009-11-10 2013-01-22 International Business Machines Corporation Real time automatic caller speech profiling
US8645136B2 (en) * 2010-07-20 2014-02-04 Intellisist, Inc. System and method for efficiently reducing transcription error using hybrid voice transcription
US9412369B2 (en) * 2011-06-17 2016-08-09 Microsoft Technology Licensing, Llc Automated adverse drug event alerts
US9384731B2 (en) * 2013-11-06 2016-07-05 Microsoft Technology Licensing, Llc Detecting speech input phrase confusion risk
US9691384B1 (en) 2016-08-19 2017-06-27 Google Inc. Voice action biasing system
US10395649B2 (en) * 2017-12-15 2019-08-27 International Business Machines Corporation Pronunciation analysis and correction feedback

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4763278A (en) 1983-04-13 1988-08-09 Texas Instruments Incorporated Speaker-independent word recognizer
US4701879A (en) 1984-07-05 1987-10-20 Standard Telephones And Cables Public Limited Co. Associative memory systems
GB2165969A (en) * 1984-10-19 1986-04-23 British Telecomm Dialogue system
EP0269233A1 (de) * 1986-10-24 1988-06-01 Smiths Industries Public Limited Company Apparatus and method for speech recognition
EP0299572A2 (de) * 1987-07-11 1989-01-18 Philips Patentverwaltung GmbH Method for the recognition of connectedly spoken words
US4947438A (en) * 1987-07-11 1990-08-07 U.S. Philips Corporation Process for the recognition of a continuous flow of spoken words
US5202952A (en) * 1990-06-22 1993-04-13 Dragon Systems, Inc. Large-vocabulary continuous speech prefiltering and processing system
EP0477688A2 (de) * 1990-09-28 1992-04-01 Texas Instruments Incorporated Telephone number dialling using speech recognition
EP0484070A2 (de) 1990-10-30 1992-05-06 International Business Machines Corporation Processing of compressed speech information
US5267304A (en) 1991-04-05 1993-11-30 At&T Bell Laboratories Directory assistance system
EP0533338A2 (de) 1991-08-16 1993-03-24 AT&T Corp. Switching method and apparatus for information services
WO1993005605A1 (en) 1991-09-12 1993-03-18 Bell Atlantic Network Services, Inc. Method and system for home incarceration
US5355474A (en) 1991-09-27 1994-10-11 Thuraisngham Bhavani M System for multilevel secure database management using a knowledge base with release-based and other security constraints for query, response and update modification
JPH06204952A (ja) 1992-09-21 1994-07-22 Internatl Business Mach Corp <Ibm> 電話回線利用の音声認識システムを訓練する方法
US5475792A (en) 1992-09-21 1995-12-12 International Business Machines Corporation Telephony channel simulator for speech recognition application
EP0601710A2 (de) 1992-11-10 1994-06-15 AT&T Corp. Speech interpretation on request in a telecommunications system
CA2091658A1 (en) 1993-03-15 1994-09-16 Matthew Lennig Method and apparatus for automation of directory assistance using speech recognition
US5479488A (en) * 1993-03-15 1995-12-26 Bell Canada Method and apparatus for automation of directory assistance using speech recognition
EP0625758A1 (de) * 1993-04-21 1994-11-23 International Business Machines Corporation Natural language processing system
US5488652A (en) * 1994-04-14 1996-01-30 Northern Telecom Limited Method and apparatus for training speech recognition algorithms for directory assistance applications
US6018736A (en) * 1994-10-03 2000-01-25 Phonetic Systems Ltd. Word-containing database accessing system for responding to ambiguous queries, including a dictionary of database words, a dictionary searcher and a database searcher
WO1996013030A2 (en) * 1994-10-25 1996-05-02 British Telecommunications Public Limited Company Voice-operated services

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
K.E. Niebuhr et al., "N Ary Join for Processing Query by Example Nov. 1976", IBM Technical Disclosure Bulletin, vol. 19, No. 6, Nov. 1976, pp. 2377-2381, XP002081147 New York, US.
Yamada et al., "A Spoken Dialogue System with Active/Non-Active Word Control for CD-ROM Information Retrieval", Speech Communication, 15 (1994) 355-365. *
Young, "Use of Dialogue, Pragmatics and Semantics to Enhance Speech Recognition", 8308 Speech Communication 9(1990) Dec., Nos. 5/6 Amsterdam, Netherlands, pp. 551-564. *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10068566B2 (en) 2005-02-04 2018-09-04 Vocollect, Inc. Method and system for considering information about an expected response when performing speech recognition
US20110301955A1 (en) * 2010-06-07 2011-12-08 Google Inc. Predicting and Learning Carrier Phrases for Speech Input
US8738377B2 (en) * 2010-06-07 2014-05-27 Google Inc. Predicting and learning carrier phrases for speech input
US20140229185A1 (en) * 2010-06-07 2014-08-14 Google Inc. Predicting and learning carrier phrases for speech input
US9412360B2 (en) * 2010-06-07 2016-08-09 Google Inc. Predicting and learning carrier phrases for speech input
US10297252B2 (en) 2010-06-07 2019-05-21 Google Llc Predicting and learning carrier phrases for speech input
US11423888B2 (en) 2010-06-07 2022-08-23 Google Llc Predicting and learning carrier phrases for speech input

Also Published As

Publication number Publication date
NZ334083A (en) 2000-09-29
WO1996013030A3 (en) 1996-08-08
CA2372676A1 (en) 1996-05-02
WO1996013030A2 (en) 1996-05-02
CA2372671A1 (en) 1996-05-02
CA2202663C (en) 2002-08-13
EP1172994B1 (de) 2008-07-30
CA2372671C (en) 2007-01-02
FI981047A (fi) 1998-05-12
NO971904L (no) 1997-04-24
FI971748A (fi) 1997-04-24
FI981047A0 (fi) 1995-10-25
NZ294296A (en) 1999-04-29
EP0800698B1 (de) 2002-01-23
CA2202663A1 (en) 1996-05-02
US5940793A (en) 1999-08-17
ES2171558T3 (es) 2002-09-16
KR970706561A (ko) 1997-11-03
FI971748A0 (fi) 1997-04-24
AU3705795A (en) 1996-05-15
DE69525178D1 (de) 2002-03-14
NO971904D0 (no) 1997-04-24
CA2372676C (en) 2006-01-03
DE69525178T2 (de) 2002-08-29
AU707122B2 (en) 1999-07-01
EP0800698A2 (de) 1997-10-15
KR100383352B1 (ko) 2003-10-17
EP1172994A3 (de) 2002-07-03
EP1172994A2 (de) 2002-01-16
JPH10507535A (ja) 1998-07-21
CN1249667C (zh) 2006-04-05
MX9702759A (es) 1997-07-31
DE69535797D1 (de) 2008-09-11
CN1164292A (zh) 1997-11-05

Similar Documents

Publication Publication Date Title
USRE42868E1 (en) Voice-operated services
KR100574768B1 (ko) 음성 인식을 사용하는 자동화된 호텔 안내 시스템
US6208964B1 (en) Method and apparatus for providing unsupervised adaptation of transcriptions
US8285537B2 (en) Recognition of proper nouns using native-language pronunciation
US6671670B2 (en) System and method for pre-processing information used by an automated attendant
US20030149566A1 (en) System and method for a spoken language interface to a large database of changing records
US20040210438A1 (en) Multilingual speech recognition
US20040260543A1 (en) Pattern cross-matching
US20020111803A1 (en) Method and system for semantic speech recognition
JPH07210190A (ja) 音声認識方法及びシステム
US20050004799A1 (en) System and method for a spoken language interface to a large database of changing records
US7428491B2 (en) Method and system for obtaining personal aliases through voice recognition
CA2440463C (en) Speech recognition
Levin et al. Voice user interface design for automated directory assistance.
Kaspar et al. Faust-a directory assistance demonstrator.
EP1158491A2 (de) Spracheingabe und Wiederauffiden von Personendaten
Popovici et al. Directory assistance: learning user formulations for business listings
KR20050066805A (ko) 음절 음성인식기의 음성인식결과 전달 방법
EP1581927A2 (de) Spracherkennungssystem und -verfahren

Legal Events

Date Code Title Description
AS Assignment

Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CISCO RAVENSCOURT LLC;REEL/FRAME:017982/0976

Effective date: 20060710

Owner name: CISCO RAVENSCOURT L.L.C., DELAWARE

Free format text: CHANGE OF NAME;ASSIGNOR:BT RAVENSCOURT L.L.C.;REEL/FRAME:017982/0967

Effective date: 20050321

Owner name: BT RAVENSCOURT LLC, VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BRITISH TELECOMMUNICATIONS PUBLIC LIMITED COMPANY;REEL/FRAME:017982/0951

Effective date: 20041222