US7970610B2 - Speech recognition - Google Patents

Speech recognition

Info

Publication number
US7970610B2
US7970610B2 (application US10/472,897)
Authority
US
United States
Prior art keywords: data items, category, data, stored, uncommon
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US10/472,897
Other versions
US20040117182A1 (en)
Inventor
Simon N Downey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Telecommunications PLC
Original Assignee
British Telecommunications PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by British Telecommunications PLC filed Critical British Telecommunications PLC
Assigned to BRITISH TELECOMMUNICATIONS PUBLIC LIMITED COMPANY. Assignment of assignors interest (see document for details). Assignors: DOWNEY, SIMON N.
Publication of US20040117182A1
Application granted
Publication of US7970610B2
Adjusted expiration
Current status: Expired - Fee Related

Classifications

    • G10L 15/08 Speech classification or search
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/085 Methods for reducing search complexity, pruning
    • G10L 2015/226 Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics
    • G10L 2015/228 Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics of application context
    • H04M 3/4931 Directory assistance systems
    • H04M 3/4936 Speech interaction details
    • H04M 2201/40 Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition

Abstract

The vocabulary size of a speech recognizer for a large task is reduced by providing a recognizer only for the most common vocabulary items. Uncommon items are catered for by providing aliases from the common items. This allows accuracy to remain high while also allowing uncommon items to be recognized when necessary.

Description

This application is the US national phase of international application PCT/GB02/01748, filed 15 Apr. 2002, which designated the U.S.
BACKGROUND
1. Technical Field
The present invention is concerned with speech recognition, particularly although not exclusively for use in automated voice-interactive services for use over a telephone network.
2. Related Art
A typical application is an enquiry service where a user is asked a number of questions in order to elicit replies which, after recognition by a speech recogniser, permit access to one or more desired entries in an information bank. An example of this is a directory enquiry system in which a user, requiring the telephone number of a customer, is asked to give the town name and road name of the subscriber's address, and the customer's surname.
The problem with a system required to operate over a large number of customer entries (the whole of the UK, for example, which has about 500 thousand different surnames) is that once the surname vocabulary becomes very large, recognition accuracy falls considerably. Additionally, the amount of memory and processing power required to perform such a task in real time becomes prohibitive.
One way of overcoming this problem is described in our co-pending patent application EP 95934749.3, in which:
    • (i) the user speaks the name of a town;
    • (ii) a speech recogniser, by reference to stored town data identifies several towns as having the closest matches to the spoken town name, and produces a “score” or probability indicating the closeness of the match;
    • (iii) a list is compiled of all road names occurring in the identified towns;
    • (iv) the user speaks the name of a road;
    • (v) the speech recogniser identifies several road names, of the ones in the list, having the closest matches to the spoken road name, again with scores;
    • (vi) the road scores are each weighted according to the score obtained for the town the road is located in, and the most likely “road” result is considered to be the one with the best weighted score.
A disadvantage of such a system is that if the correct town is not identified as being one of the closest matches then the enquiry is bound to result in failure.
BRIEF SUMMARY
According to a first aspect of the present invention there is provided a method as set out in claim 1.
According to a second aspect of the present invention, there is provided a device as set out in claim 6.
According to a third aspect of the present invention, there is provided a device having corresponding apparatus features to the method features of any of claims 1 to 5.
According to a fourth aspect of the present invention, there is provided a method having corresponding method features to the apparatus features of any one of claims 6 to 9.
According to a fifth aspect of the present invention, there is provided a carrier medium as set out in claim 10.
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the present invention will now be described with reference to the accompanying drawings in which:
FIG. 1 illustrates an architecture for a directory enquiries system;
FIG. 2 is a flow chart illustrating the operation of the directory enquiries system of FIG. 1 using the method according to the present invention;
FIG. 3 is a second flowchart illustrating the operation of the directory enquiries system of FIG. 1 in using a second embodiment of a method according to the present invention;
FIG. 4 is a flow chart illustrating a method of generating associations between surnames which do not have an audio representation stored in the store 8 of FIG. 1 and surnames which do have an audio representation stored in the store 8; and
FIG. 5 is a flow chart illustrating a second method of generating associations between surnames which do not have an audio representation stored in the store 8 of FIG. 1 and surnames which do have an audio representation stored in the store 8.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
An architecture of a directory enquiry system will be described with reference to FIG. 1. A speech synthesiser 1 provides announcements to a user via a telephone line interface 2, by reference to stored, fixed messages in a message data store 3, or from variable information supplied to it by a main control unit 4. Incoming speech signals from the telephone line interface 2 are conducted to a speech recogniser 5, which is able to recognise spoken words by reference to town name, road name or surname recognition data in recognition data stores 6, 7 and 8 respectively.
A main directory database 9 contains, for each telephone customer in the area covered by the directory enquiry service, an entry containing the name, address and telephone number of that customer, in text form. The town name recognition data store 6 contains, in text form, the names of all the towns included in the directory database 9, along with stored data to enable the speech recogniser 5 to recognise those town names in the speech signal received from the telephone line interface 2. In principle, any type of speech recogniser may be used; in this embodiment of the invention the recogniser 5 operates by recognising distinct phonemes in the input speech, which are decoded, by means of a Viterbi algorithm, by reference to stored audio representations in the store 6 representing a tree structure constructed in advance from phonetic translations of the town names stored in the store 6. The stores 7, 8 for road name recognition data and surname recognition data are organised in the same manner.
The audio representations may equally well be stored in a separate store which is referenced via data in the stores 6, 7 and 8. In this case the audio representation of each phoneme referenced by the stores 6, 7 and 8 needs to be stored only once in said separate store.
Each entry in the town data store 6 contains, as mentioned above, text corresponding to each of the town names appearing in the database 9, to act as a label to link the entry in the store 6 to entries in the database 9 (though other kinds of label may be used if preferred). If desired, the store 6 may contain an entry for every town name that the user might use to refer to geographical locations covered by the database, whether or not all these names are actually present in the database. Noting that some town names are not unique (there are four towns in the UK called Southend), and that some town names carry the same significance (e.g. Hammersmith, which is a district of London, means the same as London as far as entries in that district are concerned), a vocabulary equivalence store 39 is also provided, containing such equivalents, which can be consulted following each recognition of a town name, to return additional possibilities to the set of town names considered to be recognised. For example if “Hammersmith” is recognised, London is added to the set; if “Southend” is recognised, then Southend-on-Sea, Southend (Campbeltown), Southend (Swansea) and Southend (Reading) are added.
The equivalence data store 39 could, if desired, contain similar information for roads and surnames, or first names if these are used; for example Dave and David are considered to represent the same name.
As an alternative to this structure, the vocabulary equivalence data store 39 may act as a translation between labels used in the name stores 6, 7, 8 and the labels used in the database (whether or not the labels are names in text form).
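To make the equivalence-store mechanics concrete, here is a minimal sketch assuming a simple dictionary-backed store; the entries and function name are illustrative stand-ins for store 39, not BT's implementation.

```python
# Illustrative sketch of the vocabulary equivalence store 39; the dict-backed
# store and the entries below are assumptions for illustration.
EQUIVALENTS = {
    "Hammersmith": {"London"},
    "Southend": {"Southend-on-Sea", "Southend (Campbeltown)",
                 "Southend (Swansea)", "Southend (Reading)"},
}

def expand_with_equivalents(recognised):
    """Return the recognised set plus any equivalents (the store is consulted
    after each recognition, as described above)."""
    expanded = set(recognised)
    for name in recognised:
        expanded |= EQUIVALENTS.get(name, set())
    return expanded

print(expand_with_equivalents({"Hammersmith"}))   # adds 'London'
```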
The use of text to define the basic vocabulary of the speech recogniser requires that the recogniser can relate one or more textual labels to a given pronunciation. That is to say in the case of a ‘recognition tree’, each leaf in the tree may have one or more textual labels attached to it.
Attaching several textual labels to a particular leaf in the tree is a known technique for dealing with equivalent ways of referring to the same item of data in a database as described above. The technique may also be used for dealing with homophones (words which are pronounced in the same way but spelled differently) for example, “Smith” and “Smyth”.
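The following is a minimal sketch of a recognition tree whose leaves carry several textual labels, as for the homophones above. The phoneme symbols are assumed for illustration; the patent's actual tree is built from phonetic translations and decoded with a Viterbi algorithm, which is not reproduced here.

```python
# Sketch of a prefix tree over phoneme sequences; a leaf may carry more than
# one textual label, so homophones and equivalents share one pronunciation.
class TreeNode:
    def __init__(self):
        self.children = {}   # phoneme -> TreeNode
        self.labels = []     # textual labels attached if this node is a leaf

    def add(self, phonemes, label):
        node = self
        for p in phonemes:
            node = node.children.setdefault(p, TreeNode())
        node.labels.append(label)

    def lookup(self, phonemes):
        node = self
        for p in phonemes:
            node = node.children.get(p)
            if node is None:
                return []
        return node.labels

tree = TreeNode()
tree.add(["s", "m", "ih", "th"], "Smith")
tree.add(["s", "m", "ih", "th"], "Smyth")   # homophone shares the same leaf

print(tree.lookup(["s", "m", "ih", "th"]))  # ['Smith', 'Smyth']
```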
Surname data for the population of the UK, and probably many other areas, is skewed, in that all surnames are not equally likely. In fact, of the approximately 500 thousand surnames used in the UK, about 50 thousand (i.e. 10%) are used by about 90% of the population. If a surname recogniser is used to recognise all 500 thousand surnames then the recognition accuracy is reduced significantly for the benefit of the 10% of the population who have unusual names.
In this embodiment of the invention the recognition data store 8 contains audio representations of about 50 thousand surnames, which correspond to the surnames of about 90% of the population of the UK. Several textual labels are associated with a particular audio representation by attaching them to a particular leaf in a tree. These textual labels represent surnames which sound similar to that audio representation. This provides a list of surnames which sound similar to the surname represented by a particular audio representation but which are not themselves represented by audio data in the store 8. A greater number of surnames are therefore represented by a smaller data structure, reducing the amount of memory required. Furthermore, much less processing power is needed, and it is possible to perform the speech recognition in real time using a less powerful processor. Another advantage is that the recognition accuracy for these most popular 10% of names remains much higher than if the remaining 90% of names were also represented in the store 8. In the remainder of this description the most popular 10% of surnames will be referred to as ‘common surnames’ and the remaining 90% will be referred to as ‘uncommon surnames’. It will be understood that different percentages could be used, and that the percentages chosen may depend upon the characteristics of the particular data being modelled.
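A minimal sketch of the common/uncommon designation follows, assuming per-surname frequency counts are available; the counts are invented, and only the 90% coverage figure comes from the text.

```python
# Designate as 'common' the most frequent surnames covering ~90% of the
# population and treat the remainder as 'uncommon'.
def split_by_coverage(name_counts, coverage=0.90):
    total = sum(name_counts.values())
    common, covered = set(), 0
    for name, count in sorted(name_counts.items(), key=lambda kv: -kv[1]):
        if covered / total >= coverage:
            break                      # the rest are 'uncommon'
        common.add(name)
        covered += count
    return common, set(name_counts) - common

counts = {"Smith": 5000, "Jones": 4000, "Robson": 900,
          "Dobson": 40, "Fobson": 3}
common, uncommon = split_by_coverage(counts)
print(common, uncommon)   # rare names fall into the 'uncommon' set
```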
The operation of the directory enquiry system of FIG. 1 is illustrated in the flow chart of FIG. 2. The process starts (10) upon receipt of an incoming telephone call signalled to the control unit 4 by the telephone line interface 2; the control unit responds by instructing the speech synthesiser 1 to play (11) a message stored in the message store 3 requesting the caller to give the required surname. The caller's response is received (12) by the recogniser. The recogniser 5 then performs its recognition process (13) with reference to the audio representations stored in the store 8. For common surnames which meet a prescribed threshold of similarity with the received reply, any associated uncommon surnames are determined (14) by reference to the surname recognition data store 8. All of the common surnames which meet the prescribed threshold of similarity with the received reply, together with any uncommon surnames which are associated with the audio representations of these common surnames, are then communicated to the control unit 4.
The control unit 4 then instructs the speech synthesiser to play (15) a further message from the message data store 3 requesting the required street name. A further response, relating to the street name, is received (17) from the caller and is processed (18) by the recogniser 5 utilising the data store 7; the recogniser then communicates to the control unit 4 a set of all of the road names which meet a prescribed threshold of similarity with the received reply.
The control unit 4 then retrieves (20) from the database 9 a list of all customers having any of the surnames in the set of surnames received by the control unit at step 14 and residing in any of the streets in the set of street names received by the control unit at step 18.
For example, suppose the speech signal received at step 12 is an utterance of the uncommon surname ‘Dobson’. The set of words which meet the prescribed threshold of similarity with the received reply includes the common surname ‘Robson’, and ‘Robson’ is associated with the similar-sounding surnames ‘Hobson’, ‘Dobson’ and ‘Fobson’. The speech signal received at step 17 is an utterance of the street name ‘Dove Street’, and the set of words which meet the prescribed threshold of similarity with this reply includes ‘Dove Street’. There is no customer with the name ‘Robson’ living in ‘Dove Street’, but there is a customer named ‘Dobson’ living in ‘Dove Street’; the database retrieval at step 20 therefore retrieves the details for customer ‘Dobson’ in ‘Dove Street’ even though the name recognition data store 8 does not contain an audio representation for the name ‘Dobson’.
It is worth noting at this point that similar-sounding names, for example ‘Roberts’ and ‘Doberts’, may both exist in the set of common surnames and may in fact share an identical list of associated uncommon surnames.
In fact, in a practical application relating to a large area (for example the whole of the UK) the directory enquiries system would operate as illustrated in FIG. 3, where further information relating to the town name is requested from the caller at step 19. A further response, relating to the town name, is received (20) from the caller and is processed (21) by the recogniser 5 utilising the data store 6; the recogniser then communicates to the control unit 4 a set of all of the town names which meet a prescribed threshold of similarity with the received reply. This set of town name data is then used, along with the street name and surname data, in the database retrieval step 22. If data relating to more than one customer is retrieved from the database then further information may be elicited from the user (steps not shown).
In another embodiment of the invention the speech recogniser 5 provides a score as to how well each utterance matches each audio representation. This score is used to decide which customer data is more likely in the case where data relating to more than one customer is retrieved from the database. In the case of an associated uncommon surname the score used can be weighted according to statistics relating to that surname, such that the more uncommon a surname is, the smaller the weighting factor applied to the score from the recogniser 5.
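A minimal sketch of this weighting follows. The patent only requires that rarer surnames receive a smaller factor, so the linear frequency weight used here is an assumed choice, and all names and numbers are invented.

```python
def rank_candidates(candidates, frequency):
    """candidates: (surname, recogniser_score, is_alias) triples;
    frequency: surname -> relative population frequency in (0, 1]."""
    def weighted(name, score, is_alias):
        # aliases are scaled by rarity; direct common matches keep their score
        return score * frequency.get(name, 0.0) if is_alias else score
    return sorted(candidates, key=lambda c: -weighted(*c))

ranked = rank_candidates(
    [("Robson", 0.82, False), ("Dobson", 0.80, True), ("Fobson", 0.79, True)],
    frequency={"Dobson": 4e-4, "Fobson": 3e-5},
)
print([name for name, _, _ in ranked])   # rarer aliases sink in the ranking
```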
FIG. 4 is a flow chart illustrating a method of generating associations between uncommon surnames and common surnames for use in this invention. At step 30 a speech utterance of a known uncommon surname is received by a speech recogniser, which may be any type of speech recogniser, including a phoneme-based speech recogniser as described earlier. The received speech utterance is compared with the audio representations of the common surnames at step 31, and at step 32 an association is made between the known uncommon surname and the common surname to which the speech recogniser determines the uncommon surname is most similar.
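The FIG. 4 procedure can be sketched as follows, assuming a `recognise_common` callable that returns scored common-surname hypotheses for an utterance; this stands in for the recogniser of FIG. 1 and is not the patent's code.

```python
def associate_by_audio(uncommon_name, utterance, recognise_common,
                       associations):
    hypotheses = recognise_common(utterance)    # step 31: compare utterance
                                                # with common-name models
    best_common, _ = max(hypotheses, key=lambda h: h[1])
    associations.setdefault(best_common, set()).add(uncommon_name)  # step 32
    return best_common

assoc = {}
associate_by_audio("Dobson", b"<audio>",        # placeholder utterance
                   lambda u: [("Robson", 0.8), ("Smith", 0.2)], assoc)
print(assoc)   # {'Robson': {'Dobson'}}
```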
FIG. 5 illustrates an alternative method of generating associations between uncommon and common surnames for use in the invention. At step 40 a textual representation of an uncommon surname is received. At step 41 this textual representation is converted into a phoneme sequence. Such a conversion may be done using a large database associating text with phoneme sequences, or using letter-to-sound rules, for example as described in Klatt, D., ‘Review of text-to-speech conversion for English’, J. Acoust. Soc. Am. 82, No. 3, pp. 737-793, September 1987. The phoneme sequence representing the uncommon surname is then compared (42) to all the phoneme sequences for common surnames, for example using a dynamic programming technique such as that described in Simons, A., ‘Predictive Assessment for Speaker Independent Isolated Word Recognisers’, ESCA EUROSPEECH '95, Madrid, 1995, pp. 1465-1467. Then at step 43 the uncommon surname is associated with the common surname for which the phoneme sequences are found to be most similar.
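Below is a minimal sketch of the FIG. 5 method, substituting plain phoneme edit distance for the dynamic-programming comparison of Simons (1995); the tiny pronunciation table is an assumed stand-in for a text-to-phoneme converter.

```python
PRONUNCIATIONS = {                      # stand-in for step 41's conversion
    "Dobson": ("d", "o", "b", "s", "o", "n"),
    "Robson": ("r", "o", "b", "s", "o", "n"),
    "Smith":  ("s", "m", "ih", "th"),
}

def edit_distance(a, b):
    # standard dynamic-programming alignment over two phoneme sequences
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,              # deletion
                          d[i][j - 1] + 1,              # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[-1][-1]

def nearest_common(uncommon, common_names):
    seq = PRONUNCIATIONS[uncommon]                       # step 41
    return min(common_names,                             # steps 42-43
               key=lambda c: edit_distance(seq, PRONUNCIATIONS[c]))

print(nearest_common("Dobson", ["Robson", "Smith"]))     # 'Robson'
```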
Using either of the above techniques (or any other), the association may be recorded by attaching a label representing the known uncommon surname to a leaf in the common-surname recognition tree (if a tree-based phoneme recogniser is to be used in the directory enquiries system), or by use of a vocabulary equivalence store as discussed previously.
An advantage of the second technique is that it is not necessary to collect speech data relating to all of the possible uncommon surnames in the database, which is a time-consuming exercise; all that is needed is a textual representation of such uncommon surnames. In order to take into account the characteristics of a particular speech recogniser it is possible to use a phoneme confusion matrix, which records the likelihood of that recogniser confusing each phoneme with every other phoneme. Such a matrix is used in the comparison step 42, as described in the above-referenced paper.
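One way to fold a confusion matrix into the comparison is sketched below: the substitution term of the edit distance above is replaced by a cost that is lower for phoneme pairs the recogniser often confuses. The probabilities are invented for illustration.

```python
import math

CONFUSION = {("d", "r"): 0.30, ("b", "p"): 0.25}   # P(heard y | said x)

def substitution_cost(x, y, floor=0.01):
    if x == y:
        return 0.0
    p = CONFUSION.get((x, y), CONFUSION.get((y, x), floor))
    return -math.log(p)    # frequent confusions cost less in the alignment
```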
It will be understood that the use of common and uncommon surnames in a directory enquiries system is merely an example of how this invention may be used. Application of the invention may be found in any voice operated database access system, where the frequency of certain items of data is much greater than the frequency of other items of data.
Furthermore, the technique could be extended to cover other pattern-matching areas, such as image retrieval, where again the frequency of requests for certain items of data is likely to be much greater than that for other items of data.

Claims (10)

1. A device for retrieving a data record from a database storing a plurality of data records, each of a plurality of which includes a data item of a first category and a data item of a second or subsequent category, wherein the data items in the first category are designated as being either common or uncommon in dependence upon the frequency with which they appear in the data records stored in the database, the device comprising:
audio representation storage storing an audio representation in respect of each of the common data items in the first category but not in respect of the uncommon data items in the first category;
association storage storing associations between each of at least some common data items and a plurality of uncommon data items whose audio representations are similar to but different from the audio representation of the respective associated common data item;
a comparator which compares a signal derived from an unknown utterance with each of the audio representations of common data items stored in the audio representation storage, generating a measure of similarity at least in respect of one or more audio representations which are sufficiently similar to the compared signal to give rise to a measure of similarity above a predetermined threshold and designating as candidate first category data items both the common data items whose audio representations gave rise to a measure of similarity above the threshold and the uncommon data items associated with the designated common data items according to the association storage;
a selector which selects one or more data items of a second or subsequent category; and
a data retriever which retrieves one or more data records including a first category data item equal to one of the candidate first category data items designated by the comparator and a second or subsequent category data item selected by the selector.
2. A device according to claim 1 wherein the comparator includes a speech recognition device connected to a public switched telephone network for receiving the signal via the public switched telephone network from a user using a terminal connected to the network, said user uttering the unknown utterance.
3. A device according to claim 2 wherein the selector also includes a speech recognition device connected to a public switched telephone network for receiving the signal via the public switched telephone network from a user using a terminal connected to the network, said user uttering the unknown utterance.
4. A device as claimed in claim 1 wherein the database stores a plurality of records each of which includes the name of a customer as an item of data of the first category.
5. A computer implemented method of speech recognition, said method comprising:
a) comparing, using at least one computer processor, a representation of a first input unknown speech utterance with audio representations of a first set of stored data items of a first category to identify at least one first recognized data item candidate;
b) if said first recognized data item candidate has a previously stored association with one or more other data items of the first category having similar, but different, audio representations thereof, also identifying said associated other data items of the first category as additional first recognized data item candidates without comparison of an audio representation thereof with said representation of said first unknown speech utterance;
c) comparing a representation of a second input unknown speech utterance with audio representations of a second set of stored data items of a second category different than said first category to identify at least one second recognized data item candidate; and
d) outputting a speech recognition output selectively based on said first and second recognized data item candidates.
6. A method of speech recognition as in claim 5 wherein:
said first set of stored data items of the first category comprise names of persons;
said associated other data items of the first category comprise additional names of persons that are less frequently encountered than a respectively associated name in said first set; and
said second set of stored data items identify geographical locations.
7. A method as in claim 6, wherein said geographical locations are at least one of a group comprising: streets, roads, cities, counties and countries.
8. A method as in claim 5 wherein:
speech recognition step c) is repeated for at least one further unknown speech utterance using respectively corresponding further sets of stored data items and the results of all speech recognition steps are utilized in step d).
9. A method as in claim 5 wherein:
said first set of stored data items of the first category comprise common data items; and
said associated other data items of the first category comprise uncommon data items.
10. A non-transitory computer readable medium tangibly storing computer readable instructions for causing a computer processing system to perform the method of claim 5.
US10/472,897 2001-04-19 2002-04-15 Speech recognition Expired - Fee Related US7970610B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP01303598 2001-04-19
EP01303598.5 2001-04-19
EP01303598 2001-04-19
PCT/GB2002/001748 WO2002086863A1 (en) 2001-04-19 2002-04-15 Speech recognition

Publications (2)

Publication Number Publication Date
US20040117182A1 US20040117182A1 (en) 2004-06-17
US7970610B2 true US7970610B2 (en) 2011-06-28

Family

ID=8181903

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/472,897 Expired - Fee Related US7970610B2 (en) 2001-04-19 2002-04-15 Speech recognition

Country Status (5)

Country Link
US (1) US7970610B2 (en)
EP (1) EP1397797B1 (en)
CA (1) CA2440463C (en)
DE (1) DE60222413T2 (en)
WO (1) WO2002086863A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002086863A1 (en) 2001-04-19 2002-10-31 British Telecommunications Public Limited Company Speech recognition
EP2158540A4 (en) * 2007-06-18 2010-10-20 Geographic Services Inc Geographic feature name search system
US9484025B2 (en) 2013-10-15 2016-11-01 Toyota Jidosha Kabushiki Kaisha Configuring dynamic custom vocabulary for personalized speech recognition

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5258909A (en) * 1989-08-31 1993-11-02 International Business Machines Corporation Method and apparatus for "wrong word" spelling error detection and correction
US20020049588A1 (en) * 1993-03-24 2002-04-25 Engate Incorporated Computer-aided transcription system using pronounceable substitute text with a common cross-reference library
US5488652A (en) * 1994-04-14 1996-01-30 Northern Telecom Limited Method and apparatus for training speech recognition algorithms for directory assistance applications
WO1996013030A2 (en) 1994-10-25 1996-05-02 British Telecommunications Public Limited Company Voice-operated services
US5940793A (en) * 1994-10-25 1999-08-17 British Telecommunications Public Limited Company Voice-operated services
US5805772A (en) * 1994-12-30 1998-09-08 Lucent Technologies Inc. Systems, methods and articles of manufacture for performing high resolution N-best string hypothesization
US5999902A (en) * 1995-03-07 1999-12-07 British Telecommunications Public Limited Company Speech recognition incorporating a priori probability weighting factors
US6112174A (en) * 1996-11-13 2000-08-29 Hitachi, Ltd. Recognition dictionary system structure and changeover method of speech recognition system for car navigation
US6108631A (en) * 1997-09-24 2000-08-22 U.S. Philips Corporation Input system for at least location and/or street names
US6208965B1 (en) * 1997-11-20 2001-03-27 At&T Corp. Method and apparatus for performing a name acquisition based on speech recognition
US6483896B1 (en) * 1998-02-05 2002-11-19 At&T Corp. Speech recognition using telephone call parameters
US6192337B1 (en) * 1998-08-14 2001-02-20 International Business Machines Corporation Apparatus and methods for rejecting confusible words during training associated with a speech recognition system
US6937982B2 (en) * 2000-07-21 2005-08-30 Denso Corporation Speech recognition apparatus and method using two opposite words
US6405172B1 (en) * 2000-09-09 2002-06-11 Mailcode Inc. Voice-enabled directory look-up based on recognized spoken initial characters
US20020107689A1 (en) * 2001-02-08 2002-08-08 Meng-Hsien Liu Method for voice and speech recognition
WO2002086863A1 (en) 2001-04-19 2002-10-31 British Telecommunications Public Limited Company Speech recognition
US6983244B2 (en) * 2003-08-29 2006-01-03 Matsushita Electric Industrial Co., Ltd. Method and apparatus for improved speech recognition with supplementary information

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9583107B2 (en) 2006-04-05 2017-02-28 Amazon Technologies, Inc. Continuous speech transcription performance indication
US9973450B2 (en) 2007-09-17 2018-05-15 Amazon Technologies, Inc. Methods and systems for dynamically updating web service profile information by parsing transcribed message strings
US20090248415A1 (en) * 2008-03-31 2009-10-01 Yap, Inc. Use of metadata to post process speech recognition output
US8676577B2 (en) * 2008-03-31 2014-03-18 Canyon IP Holdings, LLC Use of metadata to post process speech recognition output

Also Published As

Publication number Publication date
DE60222413T2 (en) 2008-06-12
CA2440463A1 (en) 2002-10-31
WO2002086863A1 (en) 2002-10-31
CA2440463C (en) 2010-02-02
EP1397797B1 (en) 2007-09-12
DE60222413D1 (en) 2007-10-25
EP1397797A1 (en) 2004-03-17
US20040117182A1 (en) 2004-06-17

Similar Documents

Publication Publication Date Title
KR100383352B1 (en) Voice-operated service
KR100574768B1 (en) An automated hotel attendant using speech recognition
US6570964B1 (en) Technique for recognizing telephone numbers and other spoken information embedded in voice messages stored in a voice messaging system
US6208964B1 (en) Method and apparatus for providing unsupervised adaptation of transcriptions
US6243680B1 (en) Method and apparatus for obtaining a transcription of phrases through text and spoken utterances
US20040153306A1 (en) Recognition of proper nouns using native-language pronunciation
US9286887B2 (en) Concise dynamic grammars using N-best selection
US20030191643A1 (en) Automatic multi-language phonetic transcribing system
US20050004799A1 (en) System and method for a spoken language interface to a large database of changing records
Lamel et al. Identifying non-linguistic speech features.
Kamm et al. Speech recognition issues for directory assistance applications
US7970610B2 (en) Speech recognition
Imperl et al. Clustering of triphones using phoneme similarity estimation for the definition of a multilingual set of triphones
KR20000005278A (en) Automatic speech recognition
EP1158491A2 (en) Personal data spoken input and retrieval
Georgila et al. A speech-based human-computer interaction system for automating directory assistance services
JP2002532763A (en) Automatic inquiry system operated by voice
Nouza A large Czech vocabulary recognition system for real-time applications
Langmann et al. FRESCO: the French telephone speech data collection-part of the European Speechdat (M) project
Petek Identification of Regional Variants in the Standard Slovenian Speech
EP1103954A1 (en) Digital speech acquisition, transmission, storage and search system and method

Legal Events

Date Code Title Description
AS Assignment

Owner name: BRITISH TELECOMMUNICATIONS PUBLIC LIMITED COMPANY,

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DOWNEY, SIMON N.;REEL/FRAME:015070/0658

Effective date: 20020424

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20230628