US7403941B2 - System, method and technique for searching structured databases - Google Patents
- Publication number
- US7403941B2 (application US11/115,070 / US11507005A)
- Authority
- US
- United States
- Prior art keywords
- node
- data
- data structure
- tree
- nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2452—Query translation
- G06F16/24522—Translation of natural language queries to structured queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F16/2228—Indexing structures
- G06F16/2246—Trees, e.g. B+trees
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99941—Database schema or data structure
- Y10S707/99943—Generating database or data structure, e.g. via user interface
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99941—Database schema or data structure
- Y10S707/99944—Object-oriented database structure
- Y10S707/99945—Object-oriented database structure processing
Definitions
- the present invention relates to database searching.
- search request to be processed may not exactly match data that is stored in the database.
- If the search request data is based on human speech, then natural variations between different users' pronunciation of search terms can make it difficult to match the input with database records.
- Embodiments of the system described herein are intended to address these problems.
- Embodiments of the invention provide a method or technique for searching and enabling search of a database and records contained in a database.
- a database (or its records contained therein) may be configured, organized or structured so as to include one or more tree data structures, where each tree structure includes a root node and at least one child node. Each child node may be associated with match data that corresponds to a data value of a field of a database record.
- the leaf child nodes of the first tree data structure may also include a link to another tree data structure.
- a first tree data structure is traversed until at least one path is identified between its root node and at least one of its said leaf child nodes. Each path may be associated with a score reflecting a level of matching between the search request and the match data of the nodes in the path.
- Embodiments such as described herein enable the use of a database or its records in search and/or record retrieval applications that are prone to receiving many variations in the manner search or selection criteria is entered.
- data used in white or yellow page applications may be interfaced with phonetic recognisers to enable a user to request name and address information by pronouncing a name or other identifier. It is often the case that people will use different identifier information (e.g. name or address) and possibly different sequences of presenting the identifier information.
- One or more embodiments described herein enable the robust searching of database records in response to input that is inconsistent in form, sequence, completeness etc. In the context of phonetic recognition of address/information, such a database is more likely to return a reliable result in the presence of the various conventions and inputs that may be used by numerous users seeking such database records.
- the match data of a said child node of the first tree data structure may correspond to a said data value of a said record of a first set of said database records and/or the match data of a said child node of a said further tree data structure may correspond to a said data value of a said record of another set of said database records.
- the at least one child node of the first tree data structure may correspond to at least one different data value of a first common field, the first common field being a field amongst the fields of at least some of the database records having a least number of different data values.
- the match data of a said child node of the first tree data structure may correspond to a data value of the first common field.
- the at least one child node of a said further tree data structure may correspond to at least one different data value of a further common field, a said further common field being selected from amongst the fields of at least some of the database records based on a number of different data values the field contains.
- the match data of a said child node of a said further tree data structure may correspond to a data value of a said further common field.
- At least some of the tree data structures created may be stored on a storage medium external to the processing device performing the method and data relating to a said tree data structure may be transferred to the processing device when it is to be traversed.
- the traversing of a said tree data structure may include a node determination process that may include: (i) checking the data content of a node of the tree data structure, and (ii) if the node contains data identifying another tree data structure (or group of nodes) then performing the traversal on that other tree data structure, or (iii) if the node does not contain data identifying another tree data structure then, for each child node of that node, computing a score based on a match between the search request and the match data associated with the child node.
- the traversing of a tree data structure may include: (i) placing the root node of the tree onto a queue data structure; (ii) popping the node from the queue data structure; (iii) performing the node determination process on the popped node, wherein data relating to the node and the score computed for the node is placed on the queue data structure, and (iv) the scores of nodes in the queue are used to determine the selection of a said further tree data structure for the traversal.
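- As a rough, non-authoritative sketch of the structure and traversal selection described in the preceding items, the following Python fragment illustrates nodes whose leaves link either to a further tree group or to a record, together with a node determination step that recurses into a linked group or scores child nodes. The class names (TreeNode, TreeGroup) and the score_fn callable are assumptions introduced purely for illustration.

```python
# Hypothetical sketch only: the names TreeNode, TreeGroup and score_fn are
# assumptions; the patent text does not prescribe this representation.
from dataclasses import dataclass, field
from typing import List, Optional, Union


@dataclass
class TreeNode:
    match_data: Optional[str] = None             # e.g. a sound token associated with the arc into this node
    children: List["TreeNode"] = field(default_factory=list)
    link: Union["TreeGroup", dict, None] = None  # a leaf may link to a further tree group or to a database record


@dataclass
class TreeGroup:
    root: TreeNode


def determine_node(node: TreeNode, request, score_fn):
    """Node determination: recurse into a linked tree group, otherwise score
    each child of the node against the search request."""
    if isinstance(node.link, TreeGroup):
        # (ii) the node identifies another tree data structure: traverse that tree
        return determine_node(node.link.root, request, score_fn)
    # (iii) otherwise compute a score for each child node's match data
    return [(child, score_fn(request, child.match_data)) for child in node.children]
```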
- the computation of the score may use a Dynamic Programming technique (which may use a confusion matrix) to score the degree of match between the search request and a hypothesis based on the match data of a said path.
- Data describing a score computed for a portion of a said hypothesis can be stored and retrieved to avoid re-computation of the score.
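- A minimal sketch, assuming an edit-distance-style cost model, of how a Dynamic Programming match score using a confusion matrix and memoised partial scores might look; the confusion_costs dictionary, token cost function and insertion/deletion penalties are illustrative assumptions rather than the method prescribed by the claims.

```python
# Illustrative only: an edit-distance-style DP score with a confusion matrix and
# memoised partial scores; penalties and the cost model are assumptions.
from functools import lru_cache

confusion_costs = {}  # hypothetical: confusion_costs[(a, b)] = cost of matching request token a against hypothesis token b


def token_cost(a, b):
    return confusion_costs.get((a, b), 0.0 if a == b else 1.0)


def dp_score(request, hypothesis, ins_cost=1.0, del_cost=1.0):
    """DP match score between two token sequences (lower is better)."""
    request, hypothesis = tuple(request), tuple(hypothesis)

    @lru_cache(maxsize=None)  # caches partial scores so shared prefixes are not re-computed
    def best(i, j):
        if i == 0:
            return j * ins_cost
        if j == 0:
            return i * del_cost
        return min(best(i - 1, j - 1) + token_cost(request[i - 1], hypothesis[j - 1]),  # substitution/match
                   best(i - 1, j) + del_cost,   # deletion
                   best(i, j - 1) + ins_cost)   # insertion
    return best(len(request), len(hypothesis))
```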
- a said database record may include fields including data values representing an address and contact details (e.g. a telephone number) associated with the address.
- the record may further include a field including a data value representing a name associated with the address and/or contact details.
- the record may further include a field including a data value representing a state and/or a ZIP/postal code.
- the first common field may be the field representing the state or the ZIP/postal code.
- Another embodiment provides an apparatus for programmatically performing steps for searching a database such as described above.
- Such an apparatus may comprise a first tree creator for creating a first tree data structure having a root node and at least one child node, each said child node being associated with match data corresponding to a data value of a field of a database record.
- the leaf child nodes of the first tree data structure may include a link to another tree data structure.
- a further tree creator may be provided for creating at least one further tree data structure having a root node and at least one child node.
- the child node may be associated with match data corresponding to a data value of a database record.
- the leaf child nodes of the further tree data structure include a link to a said database record.
- a tree traversal mechanism may be implemented for traversing the first tree data structure to find at least one path between its root node and at least one of its said leaf child nodes. Each path may be associated with a score reflecting a level of matching between the search request and the match data of the nodes in the path.
- a further tree traversal mechanism may be provided for traversing at least one of the further tree data structures identified by the link of the leaf node of at least one said path, the traversal of the at least one further tree data structure finding at least one path between its root node and at least one of its said leaf child nodes.
- Each of said paths may be associated with a score reflecting a level of matching between the search request and the match data of the nodes in the path.
- An output generator may be included with the apparatus for outputting data relating to a said database record identified by the link of the leaf child node of the paths with the best scores.
- the apparatus may further include an input converter for converting an input signal based on an audible signal to produce data describing the search request.
- the apparatus may further include an output converter for converting the output data relating to the said database record into an audible signal.
- the apparatus may further include an interface for receiving an input signal over a network and/or transmitting an output over a network.
- the network may include, for example, the Internet, and/or one or more public data networks, including cellular networks and Public Switched Telephone Networks (PSTN).
- Another embodiment includes a method or technique of analyzing a database comprising a plurality of records. Specifically, an embodiment may provide for analyzing at least some of the records to identify a first field set amongst fields of the records; analyzing at least some of the records to identify at least one further field set amongst the fields; and producing a specification for a first tree data structure and at least one further tree data structure based on the first field set and the at least one further field set identified.
- the first field set may be identified as a first common field amongst the fields, the first common field being a field amongst the fields of at least some of the database records having a least number of different data values, and the at least one further field set may be identified as at least one further common field amongst the fields, a said further common field being selected from amongst the fields of the database records based on a number of different data values the field contains.
- the method may further include creating a first tree data structure having a root node and at least one child node, the at least one child node corresponding to at least one different data value of the first common field of the specification, each said child node being associated with match data corresponding to its respective data value, wherein leaf said child nodes include a link to another tree data structure; and creating at least one further tree data structure having a root node and at least one child node, the at least one child node of a said further tree data structure corresponding to at least one different data value of a said further common field of the specification, each said child node being associated with match data corresponding to its respective data value of the further common field, wherein leaf child nodes of the further tree data structure include a link to a said database record.
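- By way of illustration only, the selection of a first common field as the field with the least number of different data values might be sketched as follows; pick_first_common_field and the sample records are hypothetical.

```python
# Hypothetical helper: choose the field with the least number of different data values.
def pick_first_common_field(records, candidate_fields):
    def distinct_values(field_name):
        return len({r[field_name] for r in records if field_name in r})
    return min(candidate_fields, key=distinct_values)


# Invented sample records for illustration only.
records = [
    {"state": "MA", "city": "Boston",     "street": "Main St"},
    {"state": "MA", "city": "Cambridge",  "street": "Elm St"},
    {"state": "RI", "city": "Providence", "street": "Hope St"},
]
first_field = pick_first_common_field(records, ["state", "city", "street"])
# "state" has 2 distinct values, fewer than "city" or "street" (3 each),
# so it would define the first tree data structure.
```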
- a computer program product comprising: a computer usable medium having computer readable program code and computer readable system code embodied on said medium for processing data, said computer program product including: computer program code configured to make the computer execute a procedure to search a database comprising a plurality of records for at least one said record that at least approximately matches a search request.
- Data values in each of the records are organized in at least one field, the procedure including: creating a first tree data structure having a root node and at least one child node, each said child node being associated with match data corresponding to a data value of a field of a database record, wherein leaf child nodes of the first tree data structure include a link to another tree data structure; creating at least one further tree data structure having a root node and at least one child node, each said child node being associated with match data corresponding to a data value of a database record, wherein leaf child nodes of the further tree data structure include a link to a said database record; traversing the first tree data structure to find at least one path between its root node and at least one of its said leaf child nodes, each said path being associated with a score reflecting a level of matching between the search request and the match data of the nodes in the path; traversing at least one of the further tree data structures identified by the link of the leaf node of at least one said path, the traversal of the at least one further tree data structure finding at least one path between its root node and at least one of its said leaf child nodes; and outputting data relating to a said database record identified by the link of the leaf child node of the path or paths with the best scores.
- the phonetic transcription of the utterance may be a lattice of possible transcriptions.
- the phonetic transcription of the hypothesis may be lattices of their possible transcriptions.
- An acoustic score can be calculated for each part of the lattice.
- Pronunciations scores can be associated with each part of the lattice.
- the search method employed may be best-first-search or A* search. Branch-and-bound may be used to prune poorly scoring hypotheses.
- the database of phonetic transcriptions may be stored as an affix network, which may be at the word level.
- a set of sub-networks that together comprise the whole network can be stored separately on a storage device and read into main memory when needed. Extra records can be added to the database by storing them in a separate sub-network that is searched first.
- the direction of the network (prefix or suffix) can be selected according to the structure in the database of phonetic transcriptions.
- the direction of the network for each sub-network can be selected separately according to the structure in the corresponding part of the database of phonetic transcriptions.
- Dynamic programming may be used to score the degree of match between the transcription of the utterance and the hypothesis. Scores for portions of transcriptions of hypotheses which are common can be cached and not re-computed. A bound on the remaining portion of the score may be obtained by taking the best score for each of the tokens in the transcription of the utterance corresponding to the remaining portion. An estimate of the remaining portion of the score can be obtained by multiplying the bound by a certain factor. An estimate of the remaining portion of the score may be obtained by taking a weighted sum of the scores for each of the tokens in the transcription of the utterance corresponding to the remaining portion.
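- The bound and estimates described above might be sketched as follows; best_token_score, token_scores, token_probs and the scaling factor are assumed lookups/values supplied by the caller and are not specified by the text.

```python
# Illustrative sketches of the bound and estimates on the remaining portion of the score.
def remaining_bound(remaining_tokens, best_token_score):
    """Bound: sum, over the unexplained utterance tokens, of the best score for each token."""
    return sum(best_token_score(t) for t in remaining_tokens)


def remaining_estimate(remaining_tokens, best_token_score, factor=1.3):
    """Estimate obtained by scaling the bound by a certain factor (value illustrative)."""
    return factor * remaining_bound(remaining_tokens, best_token_score)


def remaining_weighted_estimate(remaining_tokens, token_scores, token_probs):
    """Estimate as a weighted sum of per-token scores, weighted by correspondence probabilities."""
    return sum(sum(p * s for p, s in zip(token_probs(t), token_scores(t)))
               for t in remaining_tokens)
```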
- An improved estimate of the remaining portion of the score may be obtained by multiplying the estimate by a certain factor.
- Hypotheses corresponding to part of the utterance which have a score bound or score estimate that is a certain amount worse than the currently best scoring hypothesis that corresponds to the whole utterance may be pruned.
- Hypotheses corresponding to part of the utterance which have a score bound or score estimate that is a certain amount worse than the currently best scoring hypothesis within the sub-network under consideration that corresponds to the whole utterance may be pruned.
- Hypotheses corresponding to part of the utterance which have a score bound or score estimate that is a certain amount worse than the currently best scoring hypothesis within the sub-network under consideration, whether that hypothesis corresponds to the whole utterance or part of the utterance can be pruned.
- the dynamic programming algorithm may be constrained to only consider a subset of the set of all possible alignments between the transcription of the utterance and the transcription of the hypothesis. Alignments may not be permitted for which, at any point in the alignment, the proportion of tokens explained in the transcription of the hypothesis exceeds the proportion of tokens explained in the transcription of the utterance by more than a certain amount. Alignments may not be permitted for which, at any point in the alignment, the proportion of tokens explained in the transcription of the utterance exceeds the proportion of tokens explained in the transcription of the hypothesis by more than a certain amount. Alignments may not be permitted for which the first token in the transcription of the hypothesis is matched against a token in the transcription of the utterance that is more than a certain number of tokens from the beginning. Alignments may not be permitted for which the first token in the transcription of the utterance is matched against a token in the transcription of the hypothesis that is more than a certain number of tokens from the beginning.
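- A hedged sketch of the alignment constraints just described, expressed as a per-cell admissibility test for the DP alignment; the threshold values max_skew and max_start_offset are illustrative assumptions.

```python
# Illustrative constraint check for a DP cell (i, j): i utterance tokens and
# j hypothesis tokens explained so far; threshold values are assumptions.
def alignment_allowed(i, j, n_utt, n_hyp, max_skew=0.25, max_start_offset=3):
    if abs(j / n_hyp - i / n_utt) > max_skew:
        return False                      # proportions explained differ by too much
    if j == 1 and i > max_start_offset:
        return False                      # first hypothesis token matched too far into the utterance
    if i == 1 and j > max_start_offset:
        return False                      # first utterance token matched too far into the hypothesis
    return True
```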
- the search may be terminated should the time elapsed exceed a certain amount.
- the best scoring hypothesis that corresponds to the complete utterance may score better than the other hypotheses that correspond to the complete utterance by more than a certain amount.
- the score for the hypotheses corresponding to the complete utterance, other than the best scoring one, may be approximated by the score of the second best scoring hypothesis.
- a computer program product having machine-readable program code recorded thereon for performing phonetic transcription of an utterance and a search of a database of phonetic transcriptions in order that the record corresponding to the closest match, or matches, may be retrieved.
- the product can be used for accessing a database of addresses.
- the program product may be used for accessing a database of names and addresses.
- the product may be used for accessing a database of names and/or personal information.
- the program product can be used for accessing a database of phrases.
- the program product can be used for accessing the translation into another language of a database of phrases.
- the product may be used over the telephone.
- the product can be used on a personal digital assistant or pocket computer.
- a module may include a program, a subroutine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions.
- a module can exist on a hardware component such as a server independently of other modules, or a module can be a shared element or process of other modules, programs or machines.
- a module may reside on one machine, such as on a client or on a server, or a module may be distributed amongst multiple machines, such as on multiple clients or server machines.
- one or more embodiments described herein may be implemented through the use of instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium.
- Machines shown in figures below provide examples of processing resources and computer-readable mediums on which instructions for implementing embodiments of the invention can be carried and/or executed.
- the numerous machines shown with embodiments of the invention include processor(s) and various forms of memory for holding data and instructions.
- Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers.
- Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash memory (such as carried on many cell phones and personal digital assistants (PDAs)), and magnetic memory.
- Computers, terminals and network-enabled devices (e.g. mobile devices such as cell phones) are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable mediums.
- FIG. 1 illustrates schematically a computer system configured to execute an application for searching a database
- FIG. 2 illustrates schematically an overview of steps performed by the database searching application, including steps of creating a database access structure and traversing the structure to access a particular database record;
- FIG. 3 illustrates schematically some example records of the database
- FIG. 4 illustrates schematically an example of a specification template for the access structure
- FIG. 5 illustrates schematically an example of an access structure including links to records of the database
- FIG. 6 illustrates schematically steps involved in creating the access structure using the structure specification template and records from the database
- FIGS. 7A and 7B illustrate schematically an example of a partially constructed access structure
- FIG. 8 illustrates schematically an example of steps involved in the process of traversing the access structure, including a step of computing a score for a node within the access structure;
- FIGS. 9A-9D illustrate schematically the content of queues/lists used by the access structure traversal process
- FIG. 10 illustrates schematically an example of partial score computation based on an input
- FIG. 11A illustrates a portion of another example of a database tree/access structure
- FIG. 11B is a Dynamic Programming matrix showing caching of hypotheses.
- FIG. 11C is another Dynamic Programming matrix illustrating a path.
- a computing system 100 includes a processor 102 in communication with a memory 104 .
- the memory 104 is shown as storing a database 106 and a database searching application 108 .
- the searching application 108 operates by creating a database access structure 110 using structure specification template data 112 . It will be apparent to the person skilled in the art that although the data/instructions 106 - 112 are shown as being contained within memory 104 in FIG. 1 , they could be stored on more than one data storage device and the various data structures/software modules could be distributed over a plurality of separate computers.
- the database searching application 108 can be held in the internal Random Access Memory (RAM) of a computing device during execution, whereas the database 106 may be stored on an external storage device, such as one or more hard drives, with portions of the database being copied into the RAM as required.
- the computing system 100 is shown as being connected to an interface device 114 .
- the interface device can take one of many forms.
- it could comprise a display unit and input devices (e.g. keyboards, mouse) connected directly to the processor 102 that executes the database searching application 108 , or the interface 114 may be part of another device that is in communication with the processor 102 over a network.
- the interface component 114 may include a conventional mobile or landline telephone device that is connected to a service linked to the computing system 100 .
- the user can connect to the service using the telephone device and, when prompted, speak the name and address for which the corresponding telephone number is desired into the phone device 114 ; this audio signal is converted (either by a pre-processing module 115 within computing system 100 , or by another component, e.g. an Analogue to Digital Converter and suitable software within the phone device 114 ) into symbolic data suitable for comparison with sound tokens associated with the access structure 110 .
- This speech to symbolic data conversion can be performed using existing techniques such as so-called phonetic decoding, using, for instance, Hidden Markov Models as described in published international patent application WO2004/090866 and the “Hidden Markov Model Toolkit (HTK) Book”, currently available from the website http://htk.eng.cam.ac.uk.
- the symbolic data is used by the database searching application 108 as an input/search request and the application seeks to find the database record corresponding to the request in order to produce output data relating to the desired telephone number.
- the telephone number data may be converted into a synthesised voice that relays the requested telephone number to the user of the phone device 114 .
- FIG. 2 illustrates schematically an overview of the steps performed by the searching application 108 .
- the application creates the access structure 110 based on data contained in the database 106 and the structure specification template 112 .
- After the access structure has been built (typically upon invoking the search application 108 ), it can be used for processing as many database access requests as required.
- If the database is updated, e.g. when new records are added to it, then another access structure covering the updated/new records may be built and used for searching in addition to a previously built access structure covering the database in its previous state.
- a further list of deleted records may also be built and used to edit the hypotheses returned by the search process before outputting information to the user.
- a replacement/new access structure covering the entire database in its updated form can be built.
- a request to access data from the database is received, e.g. the name and address of a person whose telephone number is desired.
- the request can take the form of data representing an utterance processed into sound tokens as in known conventional speech processing algorithms, although it will be understood that other forms of input may be used.
- the system is particularly suitable for forms of input that may only be able to approximately match data within a database.
- Other examples include an input based on handwriting, or text produced using a mobile phone keypad as an input device.
- at step 206 the searching application 108 traverses the database access structure 110 to attempt to retrieve data relating to the database records that best match the search request, and at step 208 data relating to the retrieved records, e.g. the desired telephone number, is output to the user.
- the database comprises a set of records 300 arranged as lines in a table, with each record comprising a plurality of fields; however, it will be appreciated that any suitable data structure capable of representing database records can be used. Only a small number of example records are shown for ease of illustration, although it will be appreciated that the system described herein is capable of dealing with large databases (e.g. ones containing hundreds of millions of records, requiring around 10 Gigabytes or more of storage) that are too large to be stored in their entirety within the internal memory (RAM) of a computer.
- each of the database records 300 represents an address, the name of the owner/occupier of that address, as well as a telephone number associated with the address.
- the fields of the records 300 in the example include a state field 302 ; a city field 304 ; a street address field 306 ; a house number field 308 ; a last/family name field 310 ; a first name field 312 ; and a telephone number field 314 .
- It will be appreciated that pointers to data may be associated with one or more of the fields rather than the actual data being stored in each field, and that fields representing other types of information, e.g. a ZIP/postal code, could be included.
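- For illustration, a record of FIG. 3 might be represented as a simple mapping from field names to data values; the field names mirror the description above, while the values shown are invented for the example and do not come from the patent.

```python
# Invented example values; only the field names follow the description of FIG. 3.
record_300A = {
    "state": "Massachusetts",
    "city": "Boston",
    "street": "Main Street",
    "house_number": "12",
    "last_name": "Smith",
    "first_name": "John",
    "telephone": "555-0100",
}
```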
- FIG. 4 illustrates an example of a template for an access structure specification 112 for use with the example database of FIG. 3 .
- the template is shown as a line 400 of symbols/text, although it will be understood that any suitable data structure could be used to represent the specification template.
- the line 400 is divided into three major groups 402 , 404 , 406 , each major group being separated from the next by a delimiter symbol in the example.
- the access structure 110 comprises a tree-like data structure comprising a hierarchy of node groups, with each node group itself including a hierarchical tree data structure of nodes.
- Leaf nodes within a group of nodes at the terminal level of the hierarchy will contain links to records in the database.
- the search application will traverse nodes/node groups of the access structure to attempt to reach a node that contains a link to the database record that best matches the search request.
- the path of nodes selected for traversal will depend on the level of matching between the input/search request and match pattern data associated with arcs between the nodes/groups of the access structure.
- the number of major groups in the structure specification template determines the number of levels of node groups in the hierarchy of the access structure. For instance, as there are three major groups in the structure specification template of FIG. 4 , the access structure it defines will have three hierarchical levels of node groups, with the root level node group corresponding to the first major group 402 of the structure specification line 400 , the second level node group(s) corresponding to major group 404 and the third/terminal level node group(s) corresponding to major group 406 . It should be noted that the specific fields that are recited as being a major group are a matter of implementation and design choice.
- the first major group 402 corresponds to the state field 302 of the database.
- data values from appropriate fields of a record 300 of the database will be substituted into the corresponding fields of the structure specification template to create a current structure specification that is used to create a node/node group within the access structure.
- the second major group 404 of the template corresponds to the city field 304 of the database.
- the third major group 406 is divided into two sub-groups: a first sub-group 408 and a second sub-group 410 (the sub-groups are separated by the “:” symbol in the example).
- Sub-group 408 corresponds to the street name field 306 of the database
- sub-group 410 corresponds to three database fields: house number field 308 , last name field 310 and first name field 312 .
- any of the major groups of a structure specification could be divided into a plurality of sub-groups and, in turn, at least one such sub-group may be divided into a further sub-group (e.g. using other separation symbols) and so on to provide further hierarchical levels within the resulting access structure.
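- A small sketch of how such a structure specification template might be parsed into major groups and sub-groups; the "|" major-group delimiter is an assumption (only the ":" sub-group separator is given in the example above), and the field names in template_400 are illustrative.

```python
# Hypothetical parser: "|" as the major-group delimiter is an assumption,
# ":" as the sub-group separator follows the example above.
def parse_template(template, group_sep="|", sub_sep=":"):
    return [[sub.split() for sub in group.split(sub_sep)]
            for group in template.split(group_sep)]


# Illustrative template corresponding to FIG. 4: three major groups, the third
# divided into two sub-groups.
template_400 = "state | city | street : house_number last_name first_name"
parsed = parse_template(template_400)
# parsed[2] -> [['street'], ['house_number', 'last_name', 'first_name']]
```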
- An access structure template will be created for each database that is to be used with the searching application.
- the selection of which fields of the database records to include in the template and how the selected fields should be included as a major group/sub-group in the template will depend upon the structure of the database (either in its entirety, or based upon a sample of the database). Various factors may be taken into account when analyzing the fields to create the template. For instance, a field that contains relatively few different data values amongst the database records can be a good candidate for inclusion as a major group that defines a high level group of nodes in the access structure tree. Having such a node group at a high level in the tree can help to more quickly eliminate nodes that are unlikely to contain relevant match patterns from traversal. How the fields can be used to create nodes/group nodes that improve the efficiency of data transfer from an external storage medium into the internal memory of the computer for processing at the appropriate time may also be taken into account.
- where a major group is not divided into sub-groups, the corresponding node group of the resulting access structure will contain a tree having a root node and one lower level of child nodes, the child nodes being at the terminal level of the tree.
- where a major group is divided into sub-groups, the corresponding node group of the resulting access structure will contain a tree having a root node, and the number of levels of child nodes depending from the root will correspond to the number of sub-groups in the major group.
- the node group corresponding to structure specification major group 406 will contain a tree with a root level node, a second level of nodes corresponding to sub-group 408 and a third/terminal level of nodes corresponding to sub-group 410 .
- FIG. 5 shows diagrammatically an example of an access structure created using the structure specification template 112 .
- Reference numeral 500 identifies a group/tree-like structure (also denoted by the triangular outline) of groups of nodes.
- Group 500 can be thought of as a trivial tree-like data structure/group with a single node, that node itself containing a hierarchical tree-like structure of groups of nodes 501 - 503 .
- Reference numeral 501 identifies a group of nodes (also denoted by the rectangular outline) at the first/highest level of the node groups that is defined by the first major group 402 of the structure specification. Reference numerals starting with 502 identify a second level of node groups within the hierarchy (i.e. children of the root level node group 501 ). Node groups 502 are defined by the second major group 404 of the structure specification. In the example there are three node groups 502 A- 502 C at this second hierarchical level.
- Reference numerals starting with 503 identify a third/terminal level of node groups within the hierarchy (i.e. children of the second level node groups 502 ). In the example there are twelve node groups 503 A- 503 L at this third level. Node groups 503 are defined by the third major group 406 of the structure specification. Node groups 503 are on the terminal level of the group/tree 500 that contains the node groups 501 - 503 and hold links to objects outside the group/tree 500 .
- the nodes within group 501 include a root node 5011 .
- This root node 5011 has three child nodes 5012 A, 5012 B, 5012 C in the example.
- Nodes 5012 are shown unfilled in the diagram of FIG. 5 , denoting that they are on the terminal level of the tree structure of their node group 501 and hold links to objects outside the group 501 , namely one of the second level node groups 502 .
- the link is to the second level node group 502 A.
- the root node 5021 of the group 502 A has four child nodes, 5022 A, 5022 B, 5022 C, 5022 D.
- Nodes 5022 are again shown unfilled, denoting that they are on the terminal level of the tree structure of their node group 502 and hold links to objects outside the group 502 , namely one of the third level node groups 503 .
- a similar set of nodes are contained within the other second level node groups 502 B- 502 C, although these are not detailed for brevity.
- the terminal level node 5022 A of group 502 A contains a link to node group 503 A.
- the root node 5031 of node group 503 A has two child nodes 5032 A, 5032 B at the second level of the tree within the group.
- Each of the nodes 5032 A and 5032 B has two child nodes 5033 A, 5033 B and 5033 C, 5033 D, respectively, on the third/terminal level of the tree of group 503 A.
- nodes 5033 are shown unfilled, denoting that they are on the terminal level of the tree structure of their node group 503 and hold links to objects outside the group 503 , in this case records 300 of the database 106 .
- node 5033 A contains a link to record 300 A of the database.
- a similar set of nodes are contained within node groups 503 B- 503 L, although these are not detailed for brevity.
- Although the example access structure of FIG. 5 is relatively uniform/symmetrical in appearance, it will be appreciated that the number and arrangement of nodes/node groups can vary depending on the particular database being processed.
- FIG. 6 illustrates an example of steps that are used by the searching application 108 to create (process 202 of FIG. 2 ) the access structure 110 from data contained in the database 106 and the structure specification template 112 .
- An access structure defines complete paths from the root node group of the structure to leaf nodes of the structure, with each leaf node containing a link to a record in the database.
- An intention of creating such an access structure is that a path can be found from the root node to a leaf node that contains a link to the database record that best matches the search request.
- the structure specification 112 template is read.
- the root level node group of the access structure is created. This group is named “root”.
- Step 603 is the start of a loop of steps that will be performed for each record 300 in the database.
- the record being processed is parsed into fields and data values corresponding to the relevant field names (i.e. those included in the structure specification template 112 ) are substituted for the field names in the template to create a current structure specification.
- Step 605 is the start of a loop of steps that will be performed for each major group in the current structure specification.
- the current structure specification is divided into three portions: a Left Hand Side (LHS) portion, a Group portion and a Right Hand Side (RHS) portion.
- the portions are delineated by the delimiter symbols used in the structure specification.
- the LHS portion corresponds to the major group or groups that come(s) before (i.e. located to the left in the specification template) the major group being processed. If the major group is the first one in the structure specification, e.g. corresponding to major group 402 of the template 112 , then the LHS portion will be empty.
- the Group portion corresponds to the major group currently being processed.
- the RHS corresponds to the major group or groups that come(s) after (i.e. located to the right in the specification) the major group currently being processed. If the major group is the last one in the structure specification, e.g. corresponding to major group 406 of the template, then the RHS portion will be empty.
- a name for a node/group is created by concatenating the data values in the LHS portion of the current structure specification. If the LHS portion is empty then the name created is “root”.
- a question is asked whether a node/group having the name created at step 607 already exists. If the node/group does not exist then at step 609 a node/group for the structure specification is created and given the name produced at step 607 .
- An initial root node for the newly created group is created at step 610 .
- An initial match pattern that is convenient for implementers of the system can be associated with the node group. The initial match pattern of the newly created node/group is set to the value of the LHS portion at step 611 , before control is passed on to step 612 .
- At step 612 , the current node being processed is set to the root node of the current node group.
- Step 613 is the first of a loop of steps that will be performed for each sub-group defined within the major group currently being processed. If no sub-groups are defined using the “:” symbol then the major group is treated as the sole sub-group.
- At step 614 , data representing an arc from the current node, decorated with the sub-group, is created. Data corresponding to the name of the sub-group is associated with the arc.
- the match pattern data may comprise a sound token.
- a question is asked whether such an arc already exists in the access structure and if it does not then the arc is created within the access structure.
- the current node is set to the node at the end of the arc.
- the current node being processed will be an exit from the node group.
- a question is asked whether the RHS portion is empty. If it is empty then data representing a link from the current node to the corresponding database record (or desired content to be retrieved relating to that record) is created and stored within the current node. If the question asked at step 618 is answered in the negative then at step 619 data representing a link from the current node to a node group having a name derived from a concatenation of the data values of the LHS and Group portions is created and stored within the current node. It will be appreciated that the process described above can be adapted to create further hierarchical tree-like structures of node groups/nodes within a group of nodes.
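- The construction loop of FIG. 6 might be sketched roughly as follows, with groups named by concatenating the data values to their left, arcs created per sub-group, and exit nodes linking either to the next group or to the record; the dictionary representation, node-naming scheme and "__link__" key are assumptions made for this sketch, not the patent's implementation.

```python
# Rough, hypothetical sketch of the FIG. 6 construction loop.
def build_access_structure(records, field_groups):
    """field_groups: one entry per major group, each a list of sub-groups,
    each sub-group a list of field names (e.g. as produced by a parsed template)."""
    groups = {"root": {"root": {}}}  # group name -> tree: node name -> {arc label: child node name}

    def values(record, sub):
        return " ".join(str(record[f]) for f in sub)   # data values of a sub-group for this record

    for record in records:
        for g, major in enumerate(field_groups):
            lhs = [values(record, sub) for m in field_groups[:g] for sub in m]   # LHS portion values
            rhs = field_groups[g + 1:]                                           # RHS portion
            name = " ".join(lhs) or "root"             # group named by concatenating LHS values (step 607)
            tree = groups.setdefault(name, {"root": {}})

            node = "root"
            for sub in major:                          # one arc per sub-group (steps 613-616)
                label = values(record, sub)
                nxt = tree[node].setdefault(label, label if node == "root" else node + "/" + label)
                tree.setdefault(nxt, {})
                node = nxt

            # exit node: link to the record itself if RHS is empty, else to the next group (steps 618-619)
            link = record if not rhs else " ".join(lhs + [values(record, sub) for sub in major])
            tree[node]["__link__"] = link
    return groups
```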
- Database record 300 A of FIG. 3 is used for the example.
- process 202 will continue to process all the records 300 of the database to build an access structure.
- A further illustration of the partially constructed access structure following further execution of the process 202 is shown in FIG. 7B .
- This Figure gives an indication of how data of other records 300 in the database can be arranged in the access structure.
- the number of node groups/nodes at a particular level in the hierarchy will depend upon the number of different data values in the field(s) corresponding to the major/minor group that defines the nodes/node groups at that level. For instance, in the small sample database used in the example, if there are three different states in the "state" field 302 (corresponding to major group 402 that defines node groups at the second level) then there will be three node groups 502 at the second level of the node group hierarchy.
- the second level node groups 502 generally contain nodes relating to different states covered by the telephone directory/book database.
- the third level node group(s) 503 linked to a particular second level node group will generally contain nodes relating to different cities within the state corresponding to that second level group.
- the tree structures within the third level node groups contain three levels of nodes.
- the nodes at the second level of the tree (i.e. children of the root node) generally relate to different street names within the city corresponding to their node group.
- the nodes at the third level of the tree generally contain nodes relating to different house numbers within the street of their parent node at the second level.
- the access structure 110 is intended to be traversed from its root level node group to one or more of the exit nodes that link to a record of the database (the nodes traversed in this way can be thought of as a “path” or “route” down the access structure).
- the path selected will depend upon the matching between (at least a portion of) the input and the sound tokens of the arcs between nodes/node groups.
- a score is computed that represents the level of matching between the input and the match pattern data (e.g. sound token) associated with each arc traversed.
- the node/node group at the end of that arc is selected, which, in turn, leads to a selection of one or more other nodes/node groups, depending upon the score computed for the appropriate arc(s). If the selected node contains a link to a record in the database 106 then the score computed for the route of nodes/node groups traversed to lead to that particular database record will be recorded by the process, along with an indication of the database record (or the data contained in the record).
- node groups can be used to attempt to identify which nodes within groups should be traversed, so that only the most relevant nodes within the most relevant groups are selected for subsequent traversal. This can greatly reduce the number of comparisons/score computations based on the input and sound tokens.
- the higher level node groups contain a portion of the overall database and the grouping means that nodes/data can be transferred from a storage medium that is relatively slow to access (an optical storage device or data accessed via the internet, for example) into memory that can be accessed more quickly (a hard drive or RAM, for example) when that data is required for processing by the search application.
- FIG. 8 illustrates steps that may be involved in process 206 of FIG. 2 to traverse the access structure.
- the process of searching a node begins at step 800 (the input for this process will be an identifier of the node that is to be searched).
- the root of the node to be searched is placed onto a queue data structure referred to as the priority queue.
- At step 802 , a question is asked whether the priority queue is empty. If this question is answered in the negative then control passes on to step 803 , where the best node from the priority queue is popped.
- At step 804 a , the type of the popped node is determined by asking whether it is a group containing nodes in a hierarchical structure (a tree).
- If this question is answered in the affirmative then step 800 is called using the popped node as the input so that a process of searching the popped node is invoked, with that process having its own, initially-empty priority queue, pruning threshold, etc. (it will be appreciated that this may be conveniently achieved using recursive programming techniques).
- If the question asked at step 804 a is answered in the negative then control passes on to step 804 b , where the scores for the children of the popped node are computed. Computation of the scores may involve a partial matching technique based on a comparison between the input/search request and the match pattern data associated with arcs between the nodes being traversed, as described below.
- At step 805 a , a question is asked whether there is a scored child to be processed.
- Step 805 a defines the start of a looped procedure that effectively iterates through the scored children to be processed. The loop starts when the question asked at step 805 a is answered in the affirmative and control is then passed on to step 806 a , where a question is asked whether the scored child being processed is an object outside the tree containing the popped node. If this question is answered in the affirmative then control passes on to 807 a , where the scored child is added to a data structure referred to as the output list and control then passes back to 805 a .
- If the question asked at step 806 a is answered in the negative then control passes on to step 806 b , which indicates that the child is a node within the tree and will be processed accordingly. Control then passes on to step 807 b where the scored child is added to the priority queue. Control passes back to step 805 a from steps 807 a and 807 b.
- At step 808 , a variable referred to as the pruning threshold is updated according to scores contained in the priority queue.
- the pruning threshold can be initially defined as the best score computed so far.
- At step 809 , the pruning threshold is further updated according to scores stored in the output list.
- At step 810 , the priority queue is pruned against the pruning threshold and control then passes back to step 802 .
- If the question asked at step 802 indicates that the priority queue is empty then control passes on to step 811 where the process terminates by returning the output list of nodes and associated scores.
- this output list would be the result of the search/traversal process 206 as it points to entries in the database 106 .
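- A hedged sketch of the FIG. 8 traversal using a priority queue, an output list and pruning against a threshold; scores are treated here as costs (lower is better), and score_children, is_group and the margin are assumed callables/values not specified by the text.

```python
# Hypothetical sketch of the FIG. 8 traversal.
import heapq


def search_node(tree, score_children, is_group, margin=5.0):
    queue = [(0.0, 0, tree)]                      # the root of the node to be searched is placed on the queue
    output, counter = [], 1

    while queue:                                  # step 802: is the priority queue empty?
        score, _, node = heapq.heappop(queue)     # step 803: pop the best node
        if is_group(node):                        # step 804a: a nested group is searched recursively (step 800)
            output.extend(search_node(node, score_children, is_group, margin))
            continue
        for child, child_score in score_children(node, score):        # step 804b: score the children
            if getattr(child, "link_outside_tree", False):            # steps 806a/807a: exit -> output list
                output.append((child_score, child))
            else:                                                     # steps 806b/807b: internal node -> queue
                heapq.heappush(queue, (child_score, counter, child))
                counter += 1
        # steps 808-810: update the pruning threshold from queue and output list scores, then prune
        best = min([s for s, _, _ in queue] + [s for s, _ in output], default=None)
        if best is not None:
            queue = [entry for entry in queue if entry[0] <= best + margin]
            heapq.heapify(queue)
    return output                                 # step 811: return the output list of nodes and scores
```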
- FIG. 10 illustrates schematically an example of a comparison between an input sequence and tokens of a partial theory that may be performed as part of the score-computation process.
- the first part may be an exact score since both the explanation and the input sequence are available (producing the ‘partial score’).
- the second part cannot be an exact score since, while the input sequence is available, the explanation is missing (producing the ‘estimated remaining score’). That is, the score for the partial theory is the sum of two parts: an exact score for the partial theory against the part of the input that has been matched so far and an approximate score for the unknown continuation of the partial hypothesis against the rest of the input.
- a complication is that it is not clear to which piece of the input sequence the explanation given by the theory should relate because, for example, timescales (in human utterances) are variable, or because of insertions or deletions (a null input token which corresponds to part of the explanation).
- a “cache” can be computed, which is the (exact) score for the partial theory against the input at all feasible ending points.
- calculating the score for child node(s) of a particular node being processed (as described with reference to FIG. 8 above) can be thought of as calculating the cache for a partial theory against the input at all feasible ending points.
- the scores computed for nodes traversed in this way can be used for two purposes:
- the resulting score is used to determine the best way of dividing the input between the partial theory and the other theories computed in this way (with a score being a sum of dissimilarities between the input and match data associated with the arcs between the nodes traversed).
- the score for a sequence of portions of a partial theory can be computed.
- These partial results can be re-used by making a stack of caches, which can be stored when partial hypotheses are tested in a suitable order, such as in the hierarchical access structure. If, when computing the scores in such an order, it is found that all the scores in a cache are worse than the best complete theory found so far, then all continuations built on that cache can be abandoned (this has similarities to the known A* search technique). If a relatively small number of partial theories, e.g. only a few tens of thousands, are to be tested then this may be considered sufficient.
- the database of entries to be searched by a lexical interpreter may be organized in such a way that a tree of nodes may be created—each node containing entries that share a word-level affix (sequences of words at the beginning or end of the entry).
- a tree may be created using affixes in a straightforward manner if the tree is built by reading from the end of the address back to the beginning. For example, selected portions of the database might proceed thus:
- the non-terminal node for Massachusetts might proceed thus:
- FIG. 11A A portion of the whole database tree is shown in FIG. 11A .
- Which parts of the database tree are best grouped into nodes and which are stored separately as files may depend on: the size and structure of the database; the resultant size of the files; the speed of the disk access; and the size of the memory of the host computer.
- Dynamic programming can be arranged to push hypotheses forward so as to consume extra symbols of the reference while maintaining hypotheses at all positions in the output of the phonetic recognizer.
- Each hypothesis has an associated score indicating how likely the hypothesis is. Log probabilities can be used for these scores, which means that, since probabilities cannot be greater than 1, these scores will be negative, with values closer to zero being better.
- row A in FIG. 11B contains all the hypotheses which explain the shared part of the reference for any amount of the recognition sequence. It constitutes the cache, which is all that is required to extend the shared part of the reference with many specific parts.
- Affix cacheing can yield two benefits. Firstly, the DP calculations for all hypothesised matches between the utterance and the affix cache will remain valid for all records sharing the affix. These hypotheses may be extended with further DP calculations once information more specific to a record is considered. Secondly, the best partial score constitutes a guaranteed optimistic bound on the partial DP score for the best complete hypothesis. By “optimistic bound” it is meant that the actual score cannot be any closer to zero.
- affix cacheing may be continued to more levels (for example last names of families living at the same address).
- cacheing of smaller and smaller blocks will be subject to the principle of diminishing returns.
- the database may be regarded as a tree with some nodes being treated as separate files and other nodes being treated as caches within a file.
- There is an established discipline of tree searching and many algorithms exist for this task for example—the known depth first search, breadth first search, best first search or A* search techniques).
- A* search has a proven track record in cases where large trees need to be searched efficiently.
- A* search expands the tree-node that has the best combination of the partial score and the remaining score.
- the partial score and the remaining score may be either an estimate or a guaranteed optimistic bound.
- a method of providing a bound and/or estimate of the score for the remaining piece may also be required.
- the estimates are primarily useful for allowing us to explore the most promising hypotheses first, whereas the bounds can be used to eliminate hypotheses entirely.
- a node may be excluded from consideration (pruned) when the score associated with it is sufficiently worse than a reference score (the reference score minus a margin).
- This reference score may be from a node at the same level as the node being pruned or at a terminal level (i.e. a leaf node). If bounds for both the partial score and the remaining score are used and the margin is zero then the search is optimal in the sense that the best scoring entry is guaranteed to be evaluated—the algorithm is a so-called admissible algorithm. Use of estimates and margins may significantly speed up the search whilst incurring additional errors. However, if the increase in error rate is small relative to the error rate without pruning or if the error rate after pruning is considered still small enough, then this approach may be the method of choice.
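The pruning rule itself is compact. The following sketch uses hypothetical names (scores negative, closer to zero better): a candidate is discarded when its score falls below a reference score minus a margin, and with guaranteed bounds and a zero margin this is the admissible case.

```python
def prune(candidates, reference_score, margin=0):
    """Keep only candidates that are not sufficiently worse than the reference score.

    candidates: (name, score) pairs with scores <= 0. With guaranteed optimistic
    bounds and margin == 0 the best-scoring entry is never discarded (admissible);
    a positive margin prunes more aggressively at the risk of extra errors.
    """
    threshold = reference_score - margin
    return [(name, score) for name, score in candidates if score >= threshold]

queue = [("node 502A", -100), ("node 502B", -50)]
best = max(score for _, score in queue)        # reference score from the same level
print(prune(queue, best, margin=20))           # [('node 502B', -50)]: 502A falls below -70
```

The margin of 20 used here matches the thresholds appearing in the search walkthrough later in the description.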
- A score matrix used to record the progress of some DP calculations is illustrated in FIG. 11C (a DP matrix for the alignment of a partial reference sequence against the complete output of the phonetic recogniser).
- Row A in the Figure contains the scores for paths matching a portion of a database record (city, state and ZIP) against the phonetic recogniser output.
- cell B contains the best score in row A.
- This score is certainly an optimistic bound on the score for a complete match of the reference sequence against the decoder output but may be highly optimistic.
- cell C contains the actual cost of traversing the white region along the overall best path; this score may be much worse than either that at B or D.
- a more accurate score estimate may be obtained by only examining those matches which consume a number of symbols from the phonetic recogniser which is commensurate with the length of the partial reference (cell D in the shaded interval in FIG. 11C ). In this case the score is not a guaranteed bound since it is possible that the partial score of the best overall match has a better score than the one selected.
- for any partial hypothesis corresponding to a cell (such as cell B in FIG. 11C) an estimate and/or a bound of the score for the remaining part of the recognizer output can be obtained, without needing to examine specific complete hypotheses.
- One method for obtaining a score for the remaining part of the utterance is to accumulate values from the confusion matrix. For each recognition symbol remaining, the best possible score for consuming the symbol in the accumulation can be incorporated. This provides a guaranteed optimistic bound (a score which is known to be no worse than the actual score). If the expected score is used instead of the minimum, the remaining score may be more accurate but is not guaranteed to be optimistic. The expected score may be calculated by forming the sum of scores for all reference symbols versus that recognition symbol, weighted by the probability that that recognition symbol will correspond to each reference symbol. This method of determining a score for the remaining part of the output of the phonetic recognizer requires no information about the contents of the reference other than the set of symbols it may contain. This is a highly desirable property since it allows remaining score calculation before the file relating to the remaining part is opened.
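A sketch of this fill-in calculation under assumed data (the CONFUSION table, its three reference symbols and the uniform weighting are all hypothetical): the optimistic variant takes the best score any reference symbol could give for each remaining recognition symbol, while the expected variant weights the scores by the probability that the recognition symbol corresponds to each reference symbol.

```python
# Hypothetical confusion scores: CONFUSION[(ref, rec)] = log P(recognition symbol rec | reference symbol ref).
CONFUSION = {("b", "b"): -0.1, ("p", "b"): -1.2, ("m", "b"): -3.0,
             ("b", "p"): -1.5, ("p", "p"): -0.2, ("m", "p"): -2.5,
             ("b", "m"): -3.2, ("p", "m"): -2.8, ("m", "m"): -0.1}
REFERENCE_SYMBOLS = {"b", "p", "m"}

def optimistic_fill_in(remaining_recognition):
    # Best possible score for each remaining recognition symbol over all reference
    # symbols: a guaranteed optimistic bound that needs no knowledge of the record.
    return sum(max(CONFUSION[(ref, rec)] for ref in REFERENCE_SYMBOLS)
               for rec in remaining_recognition)

def expected_fill_in(remaining_recognition, weight):
    # Expected score: weight each reference symbol by the probability that the
    # recognition symbol corresponds to it; usually tighter, but not a bound.
    return sum(sum(weight[(ref, rec)] * CONFUSION[(ref, rec)] for ref in REFERENCE_SYMBOLS)
               for rec in remaining_recognition)

uniform = {key: 1.0 / len(REFERENCE_SYMBOLS) for key in CONFUSION}
print(optimistic_fill_in("bm"))          # -0.2: no worse than any actual completion
print(expected_fill_in("bm", uniform))   # roughly -3.5: more realistic, not guaranteed
```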
Abstract
Description
- 601 Read
structure specification template 112 - 602 “root”
node group 501 created - 603 For
database record 300A:- 604 The three major groups of the current structure specification are:
- Massachusetts|Boston|Main:100DaviesBrian
- 605 1 For “Massachusetts” major group:
- 606 1 LHS=<empty>; Group=Massachusetts; RHS=Boston
- 607 1 As LHS is empty, name of group is “root”
- 608 1 Node group named “root” already exists
- 612 1 Current node set to root
node 5011 - 613 1 For “Massachusetts” group (sole sub-group):
- 614 1 Create
data representing arc 702 fromroot node 5011, decorated with “Massachusetts” (“sub-group” being processed) - 615 1
Arc 702 does not already exist and so is added to access structure, along with node 5012A at the end of the arc - 616 1 The current node is set to
node 5012A
- 614 1 Create
- 617 1
Current node 5012A is an exit node ofgroup 501 - 619 1 RHS is not null (=“Boston”), store data representing a link (704) from
current node 5012A to a node group named “Massachusetts” (=(empty) LHS+Group) (this node group will be created atstep 609 2 below)
- 605 2 For “Boston” major group:
- 606 2 LHS=Massachusetts; Group=Boston; RHS=Main:100DaviesBrian
- 607 2 Name “Massachusetts” constructed for node group (based on LHS)
- 608 2 Node group named “Massachusetts” does not exist
- 609 2
Create node group 502A named “Massachusetts” - 610 2 Establish
initial root node 5021 ofgroup 502A - 611 2 Set initial match pattern of group to “Massachusetts”
- 612 2 Set current node to root
node 5021 - 613 2 For “Boston” group (sole sub-group):
- 614 2 Create
data representing arc 706 fromroot node 5021, decorated with “Boston” (“sub-group” being processed) - 615 2
Arc 706 does not already exist and so is added to access structure, along withnode 5022A at the end of thearc 706 - 616 2 Current node set to
node 5022A at end ofarc 706
- 614 2 Create
- 617 2
Current node 5022A is an exit node ofgroup 502A - 619 2 RHS is not null (=“Main:100DaviesBrian”), store data representing a link (708) from
current node 5022A to a node group named “MassachusettsBoston” (=LHS+Group) (this node group will be created atstep 609 3 below)
- 605 3 For “Main:100DaviesBrian” major group:
- 606 3 LHS=MassachusettsBoston; Group=Main:100DaviesBrian; RHS=<empty>
- 607 3 Name “MassachusettsBoston” constructed for node group (based on LHS)
- 608 3 Node group named “MassachusettsBoston” does not exist
- 609 3
Create node group 503A named “MassachusettsBoston” - 610 3 Establish
initial root node 5031 ofgroup 503A - 611 3 Set initial match pattern of group to “MassachusettsBoston”
- 612 3 Set current node to root
node 5031 - 613 3-1 For “Main” sub-group:
- 614 3-1 Create data representing arc 710 from
root node 5031, decorated with “Main” (sub-group being processed) - 615 3-1 Arc 710 does not already exist and so is added to access structure, along with a
node 5032A at the end of the arc 710 - 616 3-1 Current node set to
node 5032A
- 614 3-1 Create data representing arc 710 from
- 613 3-2 For “100DaviesBrian” sub-group:
- 614 3-2 Create data representing arc 712 from
node 5032A, decorated with “100DaviesBrian” (sub-group being processed) - 615 3-2 Arc 712 does not already exist and so is added to access structure, along with
node 5033A at the end of the arc 712 - 616 3-2 Current node set to
node 5033A
- 614 3-2 Create data representing arc 712 from
- 617 3
Current node 5033A is an exit fromgroup 503A - 618 3 RHS is empty and so store data representing a link (714) from
node 5033A todatabase record 300A
- 604 The three major groups of the current structure specification are:
- 603 For
database record 300B . . .
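The construction walked through above (steps 601 to 619) can be summarised in a short sketch. The names here (Node, add_record, the "|" and ":" separators and the contents of the second record) are hypothetical stand-ins for the structure specification template; the sketch only shows node groups keyed by the concatenated left-hand side, arcs decorated with sub-group text, and links from exit nodes either to the next node group or to the database record.

```python
# Hypothetical sketch of building the access structure from records such as
# "Massachusetts|Boston|Main:100DaviesBrian" ('|' separates major groups,
# ':' separates sub-groups within a major group).
class Node:
    def __init__(self):
        self.arcs = {}    # decoration text -> child Node
        self.links = []   # node-group names or record ids reachable from this exit node

groups = {"root": Node()}

def add_record(record, record_id):
    majors = record.split("|")
    lhs = ""
    for i, major in enumerate(majors):
        name = lhs or "root"                            # steps 607-608: group name from the LHS
        group_root = groups.setdefault(name, Node())    # steps 609-612: create the group if absent
        node = group_root
        for sub in major.split(":"):                    # steps 613-616: one arc per sub-group
            node = node.arcs.setdefault(sub, Node())
        if i == len(majors) - 1:
            node.links.append(record_id)                # step 618: RHS empty, link to the record
        else:
            node.links.append(lhs + major)              # step 619: link to the next node group
        lhs += major

add_record("Massachusetts|Boston|Main:100DaviesBrian", "300A")
add_record("Massachusetts|Boston|Main:200SmithAnne", "300B")   # hypothetical second record
print(sorted(groups))   # ['Massachusetts', 'MassachusettsBoston', 'root']
```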
- 800. The process is called with
node 500 - 801. The root of tree 500 (node 501) is placed on the priority queue
- 802. The priority queue is not empty (it contains node 501)
- 803.
Node 501 is popped from the priority queue - 804 a.
Node 501 is a tree/node group and so the tree is searched by recursively calling the same procedure to create a list of scored children:- 800.
Search node 501 - 801. Root (5011) of
tree 501 is placed on the priority queue (seeFIG. 9A ) - 802. The priority queue is not empty (it contains node 5011 (only))
- 803.
Node 5011 is popped from the priority queue - 804 b.
Node 5011 is not a tree and so compute scores for children (5012A, 5012B, 5012C) ofnode 5011 - 805 a For scored
child 502A:- 806 b.
Child 5012A is withintree 501 - 807 b. Add
child 5012A to priority queue
- 806 b.
- 804. For scored
child 5012B:- 806 b.
Child 5012B is withintree 501 - 807 b. Add
child 5012B to priority queue
- 806 b.
- 804. For scored
child 5012C:- 806 b.
Child 5012C is withintree 501 - 807 b. Add
child 5012C to priority queue
- 806 b.
- . . .
- 802. The priority queue is not empty . . .
- . . .
- 803.
Node 5012A is popped from the priority queue - 804 b.
Node 5012A is not a tree and so compute score (e.g. −100) for child (502A) ofnode 5012A - 805. For scored
child 502A: - 806 a.
Child 502A isoutside tree 501 - 807 a. Add
child 502A to output list - . . .
- 811. Return the output
list containing nodes
- 800.
- 805.
Object 502A is examined first - 806 b.
Object 502A is a node within thetree 500 - 807 b.
Node 502A and its score of −100 is put on the priority queue - 805.
Object 502B is examined next - 806 b.
Object 502B is a node within thetree 500 - 807 b.
Node 502B and its score of −50 is put on the priority queue - 808. The threshold is set according to scores in the priority queue, say, −50−20=−70
- 809. The output list is empty and so the threshold is not changed again
- 810.
Node 502A is removed from the priority queue since −100<−70 (seeFIG. 9B ) - 802. The priority queue is not empty (it now contains
node 502B) - 803.
Node 502B is popped from the priority queue - 804 a.
Node 502B is searched by recursively calling the same procedure to create a list of scored children . . . - 805. Object 503B is examined first
- 806 b. Object 503B is a node within the
tree 500 - 807 b. Node 503B and its score of −45 is put on the priority queue
- 805.
Object 503C is examined next - 806 b.
Object 503C is a node within thetree 500 - 807 b.
Node 503C and its score of −30 is put on the priority queue - 805.
Object 503D is examined next - 806 b.
Object 503D is a node within the tree - 807 b.
Node 503D and its score of −20 is put on the priority queue - . . .
- 808. The threshold is set according to scores in the priority queue, −20−20=−40
- 809. The output list is empty so the threshold is not changed again
- 810.
Node 503C (and others) are removed from the priority queue since −45<−40 (seeFIG. 9C ) - 802. The priority queue is not empty (it now contains
nodes - 803.
Node 503E is popped from the priority queue - 804 a.
Node 503E is searched by recursively calling the same procedure to create a list of scored children (database record objects 300A, 300B, 300C and 300D with their scores, say, −20, −15, −10 and −5, respectively) - 805. Object 300 a is examined first
- 806 a. Object 300 a is an object outside the
tree 500 - 807 a. Object 300 a and its score of −20 is put on the output list
- 805. Object 300 b is examined first
- 806 a. Object 300 b is an object outside the
tree 500 - 807 a. Object 300 b and its score of −15 is put on the output list
- 805. Object 300 c is examined first
- 806 a. Object 300 c is an object outside the
tree 500 - 807 a. Object 300 c and its score of −10 is put on the output list
- 805. Object 300 d is examined first
- 806 a. Object 300 d is an object outside the
tree 500 - 807 a. Object 300 d and its score of −5 is put on the output list
- 808. The threshold is set according to scores in the priority queue, −30−20=−50
- 809. The threshold is set according to scores in the output list, −5−23=−28
- 810.
Node 503C is removed from the priority queue since −30<−28 (seeFIG. 9D ) - 802. The priority queue is empty
- 811. The output list is returned with objects 300 a, 300 b, 300 c and 300 d and their associated scores −20, −15, −10, −5.
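A compact sketch of the procedure traced above (steps 800 to 811), with hypothetical names; the scoring function, the node objects and the single margin are placeholders, and the recursive case in which a popped node is itself a tree (step 804 a) is flattened out for brevity.

```python
import heapq
from itertools import count

def best_first_search(root, score_children, in_tree, margin=20):
    """Sketch of steps 800-811: best-first search over the access structure.

    score_children(node) yields (child, score) pairs; scores are negative and
    closer to zero is better. Children inside the tree go back on the priority
    queue, children outside it (database records) go on the output list, and
    anything scoring below (best score seen so far - margin) is pruned.
    """
    tie = count()                                     # tie-breaker for equal scores
    queue = [(0, next(tie), root)]                    # step 801: root on the priority queue
    output = []
    while queue:                                      # step 802: queue not empty
        _, _, node = heapq.heappop(queue)             # step 803: pop the best node
        for child, score in score_children(node):     # steps 804-805: score the children
            if in_tree(child):
                heapq.heappush(queue, (-score, next(tie), child))   # steps 806b-807b
            else:
                output.append((child, score))         # steps 806a-807a
        scores = [-neg for neg, _, _ in queue] + [s for _, s in output]
        if scores:
            threshold = max(scores) - margin          # steps 808-809: set the threshold
            queue = [entry for entry in queue if -entry[0] >= threshold]   # step 810: prune
            heapq.heapify(queue)
    return output                                     # step 811: return the scored records
```

With a margin of 20 this reproduces the -70, -40 and -50 thresholds computed at the 808 steps above.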
-
- 1) As a starting point for computing the equivalent sequence when the partial hypothesis is extended in some particular way (by traversing one of the arcs in the access structure), and
- 2) To compute the score for the partial hypothesis, by adding together an exact partial score from the above sequence and a fill-in score for the missing part.
-
- 1. The estimated score depends only on the number of unexplained input tokens. Each token adds a fixed amount. The value of the score for each token may be set manually by the user or may be calculated from training data as the average score per token for correct theories;
- 2. The estimated score again depends on the number of unexplained input tokens but the value for each is the average score in the part of the input sequence explained by the theory;
- 3. The estimated score depends on not only the number of unexplained input tokens but also on which tokens they are. The value is the average score for the particular input token for correct theories in some training data.
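As a sketch (hypothetical names and values; in practice the per-token amounts would be set manually or learned from training data as described above), the three estimates differ only in where the per-token value comes from:

```python
def remaining_estimate(unexplained_tokens, method, explained_scores=None, per_token=None, fixed=-8.0):
    """Estimate the score for the input tokens not yet explained by a theory.

    method 1: a fixed amount per unexplained token;
    method 2: the average score of the part of the input already explained;
    method 3: a token-specific average taken from training data.
    """
    n = len(unexplained_tokens)
    if method == 1:
        return n * fixed
    if method == 2:
        return n * (sum(explained_scores) / len(explained_scores))
    if method == 3:
        return sum(per_token[tok] for tok in unexplained_tokens)
    raise ValueError("method must be 1, 2 or 3")

print(remaining_estimate(["ax", "n"], 1))                                     # -16.0
print(remaining_estimate(["ax", "n"], 2, explained_scores=[-5.0, -7.0]))      # -12.0
print(remaining_estimate(["ax", "n"], 3, per_token={"ax": -6.0, "n": -3.5}))  # -9.5
```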
- . . .
- Anne Smith 1 Acacia Avenue Boston, Mass. 02112
- Brian Davies 2 Acacia Avenue Boston, Mass. 02112
- . . .
- Irene Baker 1 Charlbury Drive Boston, Mass. 02112
- . . .
- James Brown 73 Main Street Cambridge, Mass. 02140
- . . .
- Kurt Kimble 27 Kerry Drive Worcester, Mass. 01655
- . . .
- Gary Baldwin 24 Kissimmee Avenue Lakeland, Fla. 33809
- . . .
- Harry Dupont 4427 1st Avenue Orlando, Fla. 32811
- . . .
- A tree with three levels may be created—a root node containing a list of States and ZIP codes, a set of non-terminal nodes (for each State) containing a list of City, State and ZIP codes, and a set of terminal nodes (for each unique combination of City, State and ZIP code) containing a list of complete database entries. Thus, the root node might proceed thus:
Entry
- Alabama 35950
- Alabama 35951
- . . .
- Connecticut 06120
- Connecticut 06447
- . . .
- Florida 33809
- Florida 32169
- . . .
- Massachusetts 01002
- Massachusetts 01101
- . . .
- Rhode Island 02908
- Rhode Island 02861
- . . .
- Wyoming 82901
- Wyoming 82844
- . . .
- Acushnet, Mass. 02743
- Allston, Mass. 02134
- . . .
- Boston, Mass. 01002
- . . .
- Cambridge, Mass. 02140
- Centerville, Mass. 02632
- . . .
- Newton, Mass. 02458
- . . .
- Woburn, Mass. 01801
- Worcester, Mass. 01655
- Anne Smith 1 Acacia Avenue Boston, Mass. 01002
- Brian Davies 2 Acacia Avenue Boston, Mass. 01002
- Charlotte Brown 3 Acacia Avenue Boston, Mass. 01002
- David Jones 4 Acacia Avenue Boston, Mass. 01002
- Enid Green 1 Bond Street Boston, Mass. 01002
- Fred Martin 2 Bond Street Boston, Mass. 01002
- Georgette Smith 3 Bond Street Boston, Mass. 01002
- Harry Dupont 4 Bond Street Boston, Mass. 01002
- Irene Baker 1 Charlbury Drive Boston, Mass. 01002
- John Flynn 2 Charlbury Drive Boston, Mass. 01002
- Karen Cooper 3 Charlbury Drive Boston, Mass. 01002
Remaining entry | First-level affix
---|---
Anne Smith 1 Acacia Avenue | Boston Massachusetts 01002
Brian Davies 2 Acacia Avenue |
Charlotte Brown 3 Acacia Avenue |
David Jones 4 Acacia Avenue |
Enid Green 1 Bond Street |
Fred Martin 2 Bond Street |
Georgette Smith 3 Bond Street |
Harry Dupont 4 Bond Street |
Irene Baker 1 Charlbury Drive |
John Flynn 2 Charlbury Drive |
Karen Cooper 3 Charlbury Drive |
. . . |
Remaining entry | Second-level affix | First-level affix
---|---|---
Anne Smith 1 | Acacia Avenue | Boston Massachusetts 01002
Brian Davies 2 | |
Charlotte Brown 3 | |
David Jones 4 | |
Enid Green 1 | Bond Street |
Fred Martin 2 | |
Georgette Smith 3 | |
Harry Dupont 4 | |
Irene Baker 1 | Charlbury Drive |
John Flynn 2 | |
Karen Cooper 3 | |
. . . | |
Depending on the structure of the database such affix cacheing may be continued to more levels (for example last names of families living at the same address). However, cacheing of smaller and smaller blocks will be subject to the principle of diminishing returns.
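To make the grouping concrete, the following sketch (with a hypothetical fixed field layout in which each entry ends with a two-word street name followed by city, state and ZIP) reproduces the two-level affix grouping shown in the tables above by reading each entry from the end back to the beginning:

```python
from collections import defaultdict

# Hypothetical sketch of the affix grouping: the first-level affix is the trailing
# "City State ZIP", the second-level affix is the street name just before it, and
# the remaining words identify the resident.
entries = [
    "Anne Smith 1 Acacia Avenue Boston Massachusetts 01002",
    "Brian Davies 2 Acacia Avenue Boston Massachusetts 01002",
    "Enid Green 1 Bond Street Boston Massachusetts 01002",
    "Irene Baker 1 Charlbury Drive Boston Massachusetts 01002",
]

tree = defaultdict(lambda: defaultdict(list))
for entry in entries:
    words = entry.split()
    first_level = " ".join(words[-3:])     # e.g. "Boston Massachusetts 01002"
    second_level = " ".join(words[-5:-3])  # e.g. "Acacia Avenue"
    remaining = " ".join(words[:-5])       # e.g. "Anne Smith 1"
    tree[first_level][second_level].append(remaining)

for affix1, streets in tree.items():
    print(affix1)
    for affix2, residents in streets.items():
        print("  " + affix2 + ":", ", ".join(residents))
```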
Database Search
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/115,070 US7403941B2 (en) | 2004-04-23 | 2005-04-25 | System, method and technique for searching structured databases |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US56514604P | 2004-04-23 | 2004-04-23 | |
US11/115,070 US7403941B2 (en) | 2004-04-23 | 2005-04-25 | System, method and technique for searching structured databases |
Publications (2)
Publication Number | Publication Date |
---|---|
US20060004721A1 US20060004721A1 (en) | 2006-01-05 |
US7403941B2 true US7403941B2 (en) | 2008-07-22 |
Family
ID=34968888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/115,070 Active 2026-04-21 US7403941B2 (en) | 2004-04-23 | 2005-04-25 | System, method and technique for searching structured databases |
Country Status (3)
Country | Link |
---|---|
US (1) | US7403941B2 (en) |
EP (1) | EP1738291A1 (en) |
WO (1) | WO2005103951A1 (en) |
Cited By (147)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060277030A1 (en) * | 2005-06-06 | 2006-12-07 | Mark Bedworth | System, Method, and Technique for Identifying a Spoken Utterance as a Member of a List of Known Items Allowing for Variations in the Form of the Utterance |
US20070143112A1 (en) * | 2005-12-20 | 2007-06-21 | Microsoft Corporation | Time asynchronous decoding for long-span trajectory model |
US20080276241A1 (en) * | 2007-05-04 | 2008-11-06 | Ratan Bajpai | Distributed priority queue that maintains item locality |
WO2010067118A1 (en) | 2008-12-11 | 2010-06-17 | Novauris Technologies Limited | Speech recognition involving a mobile device |
US20110265066A1 (en) * | 2010-04-21 | 2011-10-27 | Salesforce.Com | Methods and systems for evaluating bytecode in an on-demand service environment including translation of apex to bytecode |
US20120016891A1 (en) * | 2010-07-16 | 2012-01-19 | Jiri Pechanec | Hierarchical registry federation |
US20120130983A1 (en) * | 2010-11-24 | 2012-05-24 | Microsoft Corporation | Efficient string pattern matching for large pattern sets |
US8583659B1 (en) * | 2012-07-09 | 2013-11-12 | Facebook, Inc. | Labeling samples in a similarity graph |
US20140149464A1 (en) * | 2012-11-29 | 2014-05-29 | International Business Machines Corporation | Tree traversal in a memory device |
US20140163981A1 (en) * | 2012-12-12 | 2014-06-12 | Nuance Communications, Inc. | Combining Re-Speaking, Partial Agent Transcription and ASR for Improved Accuracy / Human Guided ASR |
US8768768B1 (en) * | 2007-09-05 | 2014-07-01 | Google Inc. | Visitor profile modeling |
US8839088B1 (en) | 2007-11-02 | 2014-09-16 | Google Inc. | Determining an aspect value, such as for estimating a characteristic of online entity |
US20140298243A1 (en) * | 2013-03-29 | 2014-10-02 | Alcatel-Lucent Usa Inc. | Adjustable gui for displaying information from a database |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10607140B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100845428B1 (en) * | 2006-08-25 | 2008-07-10 | 한국전자통신연구원 | Speech recognition system of mobile terminal |
US20080154608A1 (en) * | 2006-12-26 | 2008-06-26 | Voice Signal Technologies, Inc. | On a mobile device tracking use of search results delivered to the mobile device |
US20080154870A1 (en) * | 2006-12-26 | 2008-06-26 | Voice Signal Technologies, Inc. | Collection and use of side information in voice-mediated mobile search |
US20080154612A1 (en) * | 2006-12-26 | 2008-06-26 | Voice Signal Technologies, Inc. | Local storage and use of search results for voice-enabled mobile communications devices |
US20080153465A1 (en) * | 2006-12-26 | 2008-06-26 | Voice Signal Technologies, Inc. | Voice search-enabled mobile device |
GB2452760A (en) | 2007-09-14 | 2009-03-18 | Data Connection Ltd | Storing and searching data in a database tree structure for use in data packet routing applications. |
US8799315B2 (en) * | 2009-01-30 | 2014-08-05 | International Business Machines Corporation | Selective construction of data search result per search request specifying path information |
US8683498B2 (en) * | 2009-12-16 | 2014-03-25 | Ebay Inc. | Systems and methods for facilitating call request aggregation over a network |
US20110314028A1 (en) * | 2010-06-18 | 2011-12-22 | Microsoft Corporation | Presenting display characteristics of hierarchical data structures |
US8533225B2 (en) * | 2010-09-27 | 2013-09-10 | Google Inc. | Representing and processing inter-slot constraints on component selection for dynamic ads |
US8713056B1 (en) * | 2011-03-30 | 2014-04-29 | Open Text S.A. | System, method and computer program product for efficient caching of hierarchical items |
US9881063B2 (en) * | 2011-06-14 | 2018-01-30 | International Business Machines Corporation | Systems and methods for using graphical representations to manage query results |
EP2856344A1 (en) * | 2012-05-24 | 2015-04-08 | IQser IP AG | Generation of queries to a data processing system |
US9436681B1 (en) * | 2013-07-16 | 2016-09-06 | Amazon Technologies, Inc. | Natural language translation techniques |
CN103617199B (en) * | 2013-11-13 | 2016-08-17 | 北京京东尚科信息技术有限公司 | A kind of method and system operating data |
US9703830B2 (en) * | 2014-10-09 | 2017-07-11 | International Business Machines Corporation | Translation of a SPARQL query to a SQL query |
US10019514B2 (en) * | 2015-03-19 | 2018-07-10 | Nice Ltd. | System and method for phonetic search over speech recordings |
CN106021374A (en) * | 2016-05-11 | 2016-10-12 | 百度在线网络技术(北京)有限公司 | Underlay recall method and device for query result |
CN106503265A (en) * | 2016-11-30 | 2017-03-15 | 北京赛迈特锐医疗科技有限公司 | Structured search system and its searching method based on weights |
CN112313691B (en) * | 2018-06-25 | 2024-06-28 | 株式会社工程师论坛 | Matching score calculating device |
CN110895784A (en) * | 2018-08-23 | 2020-03-20 | 京东数字科技控股有限公司 | Data processing method and device |
CN110471916B (en) * | 2019-07-03 | 2023-05-26 | 平安科技(深圳)有限公司 | Database query method, device, server and medium |
US11797770B2 (en) * | 2020-09-24 | 2023-10-24 | UiPath, Inc. | Self-improving document classification and splitting for document processing in robotic process automation |
CN112632065A (en) * | 2020-12-18 | 2021-04-09 | 北京锐安科技有限公司 | Data storage method and device, storage medium and server |
CN112836030B (en) * | 2021-01-29 | 2023-04-25 | 成都视海芯图微电子有限公司 | Intelligent dialogue system and method |
US11977531B1 (en) * | 2023-01-25 | 2024-05-07 | Dell Products L.P. | Systems and methods of delta-snapshots for storage cluster with log-structured metadata |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0175503A1 (en) | 1984-09-06 | 1986-03-26 | BRITISH TELECOMMUNICATIONS public limited company | Method and apparatus for use in interactive dialogue |
US5261088A (en) * | 1990-04-26 | 1993-11-09 | International Business Machines Corporation | Managing locality in space reuse in a shadow written B-tree via interior node free space list |
US5528701A (en) * | 1994-09-02 | 1996-06-18 | Panasonic Technologies, Inc. | Trie based method for indexing handwritten databases |
US5621859A (en) * | 1994-01-19 | 1997-04-15 | Bbn Corporation | Single tree method for grammar directed, very large vocabulary speech recognizer |
US5768423A (en) * | 1994-09-02 | 1998-06-16 | Panasonic Technologies Inc. | Trie structure based method and apparatus for indexing and searching handwritten databases with dynamic search sequencing |
US5799276A (en) | 1995-11-07 | 1998-08-25 | Accent Incorporated | Knowledge-based speech recognition system and methods having frame length computed based upon estimated pitch period of vocalic intervals |
US5950159A (en) * | 1996-04-01 | 1999-09-07 | Hewlett-Packard Company | Word spotting using both filler and phone recognition |
US6009439A (en) * | 1996-07-18 | 1999-12-28 | Matsushita Electric Industrial Co., Ltd. | Data retrieval support apparatus, data retrieval support method and medium storing data retrieval support program |
US6104344A (en) * | 1999-03-24 | 2000-08-15 | Us Wireless Corporation | Efficient storage and fast matching of wireless spatial signatures |
EP1083545A2 (en) | 1999-09-09 | 2001-03-14 | Xanavi Informatics Corporation | Voice recognition of proper names in a navigation apparatus |
US6343270B1 (en) | 1998-12-09 | 2002-01-29 | International Business Machines Corporation | Method for increasing dialect precision and usability in speech recognition and text-to-speech systems |
US20020196911A1 (en) | 2001-05-04 | 2002-12-26 | International Business Machines Corporation | Methods and apparatus for conversational name dialing systems |
US6501833B2 (en) | 1995-05-26 | 2002-12-31 | Speechworks International, Inc. | Method and apparatus for dynamic adaptation of a large vocabulary speech recognition system and for use of constraints from a database in a large vocabulary speech recognition system |
US20030069730A1 (en) | 2001-10-09 | 2003-04-10 | Michael Vanhilst | Meaning token dictionary for automatic speech recognition |
US20030115289A1 (en) * | 2001-12-14 | 2003-06-19 | Garry Chinn | Navigation in a voice recognition system |
US6584459B1 (en) * | 1998-10-08 | 2003-06-24 | International Business Machines Corporation | Database extender for storing, querying, and retrieving structured documents |
US6678692B1 (en) * | 2000-07-10 | 2004-01-13 | Northrop Grumman Corporation | Hierarchy statistical analysis system and method |
US6678675B1 (en) * | 2000-03-30 | 2004-01-13 | Portal Software, Inc. | Techniques for searching for best matches in tables of information |
US6684185B1 (en) | 1998-09-04 | 2004-01-27 | Matsushita Electric Industrial Co., Ltd. | Small footprint language and vocabulary independent word recognizer using registration by word spelling |
US20040044638A1 (en) * | 2002-08-30 | 2004-03-04 | Wing Andrew William | Localization of generic electronic registration system |
US6744861B1 (en) | 2000-02-07 | 2004-06-01 | Verizon Services Corp. | Voice dialing methods and apparatus implemented using AIN techniques |
WO2004086357A2 (en) | 2003-03-24 | 2004-10-07 | Sony Electronics Inc. | System and method for speech recognition utilizing a merged dictionary |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5987408A (en) * | 1996-12-16 | 1999-11-16 | Nortel Networks Corporation | Automated directory assistance system utilizing a heuristics model for predicting the most likely requested number |
-
2005
- 2005-04-25 WO PCT/GB2005/001589 patent/WO2005103951A1/en active Application Filing
- 2005-04-25 US US11/115,070 patent/US7403941B2/en active Active
- 2005-04-25 EP EP20050746378 patent/EP1738291A1/en not_active Ceased
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0175503A1 (en) | 1984-09-06 | 1986-03-26 | BRITISH TELECOMMUNICATIONS public limited company | Method and apparatus for use in interactive dialogue |
US5261088A (en) * | 1990-04-26 | 1993-11-09 | International Business Machines Corporation | Managing locality in space reuse in a shadow written B-tree via interior node free space list |
US5621859A (en) * | 1994-01-19 | 1997-04-15 | Bbn Corporation | Single tree method for grammar directed, very large vocabulary speech recognizer |
US5528701A (en) * | 1994-09-02 | 1996-06-18 | Panasonic Technologies, Inc. | Trie based method for indexing handwritten databases |
US5768423A (en) * | 1994-09-02 | 1998-06-16 | Panasonic Technologies Inc. | Trie structure based method and apparatus for indexing and searching handwritten databases with dynamic search sequencing |
US6501833B2 (en) | 1995-05-26 | 2002-12-31 | Speechworks International, Inc. | Method and apparatus for dynamic adaptation of a large vocabulary speech recognition system and for use of constraints from a database in a large vocabulary speech recognition system |
US5799276A (en) | 1995-11-07 | 1998-08-25 | Accent Incorporated | Knowledge-based speech recognition system and methods having frame length computed based upon estimated pitch period of vocalic intervals |
US5950159A (en) * | 1996-04-01 | 1999-09-07 | Hewlett-Packard Company | Word spotting using both filler and phone recognition |
US6009439A (en) * | 1996-07-18 | 1999-12-28 | Matsushita Electric Industrial Co., Ltd. | Data retrieval support apparatus, data retrieval support method and medium storing data retrieval support program |
US6684185B1 (en) | 1998-09-04 | 2004-01-27 | Matsushita Electric Industrial Co., Ltd. | Small footprint language and vocabulary independent word recognizer using registration by word spelling |
US6584459B1 (en) * | 1998-10-08 | 2003-06-24 | International Business Machines Corporation | Database extender for storing, querying, and retrieving structured documents |
US6343270B1 (en) | 1998-12-09 | 2002-01-29 | International Business Machines Corporation | Method for increasing dialect precision and usability in speech recognition and text-to-speech systems |
US6104344A (en) * | 1999-03-24 | 2000-08-15 | Us Wireless Corporation | Efficient storage and fast matching of wireless spatial signatures |
EP1083545A2 (en) | 1999-09-09 | 2001-03-14 | Xanavi Informatics Corporation | Voice recognition of proper names in a navigation apparatus |
US6744861B1 (en) | 2000-02-07 | 2004-06-01 | Verizon Services Corp. | Voice dialing methods and apparatus implemented using AIN techniques |
US6678675B1 (en) * | 2000-03-30 | 2004-01-13 | Portal Software, Inc. | Techniques for searching for best matches in tables of information |
US6678692B1 (en) * | 2000-07-10 | 2004-01-13 | Northrop Grumman Corporation | Hierarchy statistical analysis system and method |
US20020196911A1 (en) | 2001-05-04 | 2002-12-26 | International Business Machines Corporation | Methods and apparatus for conversational name dialing systems |
US6925154B2 (en) * | 2001-05-04 | 2005-08-02 | International Business Machines Corproation | Methods and apparatus for conversational name dialing systems |
US20030069730A1 (en) | 2001-10-09 | 2003-04-10 | Michael Vanhilst | Meaning token dictionary for automatic speech recognition |
US20030115289A1 (en) * | 2001-12-14 | 2003-06-19 | Garry Chinn | Navigation in a voice recognition system |
US20040044638A1 (en) * | 2002-08-30 | 2004-03-04 | Wing Andrew William | Localization of generic electronic registration system |
WO2004086357A2 (en) | 2003-03-24 | 2004-10-07 | Sony Electronics Inc. | System and method for speech recognition utilizing a merged dictionary |
Non-Patent Citations (2)
Title |
---|
Internet page by Paul Black http://web.archive.org/web/20010620044514/http://www.nist.gov/dads/HTML/dynamicprog.html, 2001. * |
Riley, Michael D. and Ljolje, Andrej, Automatic Generation of Detailed Pronunciation Lexicons, Automatic Speech and Speaker Recognition: Advanced Topics, 1995, XP- 002409248, 17 pages. |
Cited By (216)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US7725309B2 (en) * | 2005-06-06 | 2010-05-25 | Novauris Technologies Ltd. | System, method, and technique for identifying a spoken utterance as a member of a list of known items allowing for variations in the form of the utterance |
US20060277030A1 (en) * | 2005-06-06 | 2006-12-07 | Mark Bedworth | System, Method, and Technique for Identifying a Spoken Utterance as a Member of a List of Known Items Allowing for Variations in the Form of the Utterance |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US20070143112A1 (en) * | 2005-12-20 | 2007-06-21 | Microsoft Corporation | Time asynchronous decoding for long-span trajectory model |
US7734460B2 (en) * | 2005-12-20 | 2010-06-08 | Microsoft Corporation | Time asynchronous decoding for long-span trajectory model |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US8484651B2 (en) * | 2007-05-04 | 2013-07-09 | Avaya Inc. | Distributed priority queue that maintains item locality |
US20080276241A1 (en) * | 2007-05-04 | 2008-11-06 | Ratan Bajpai | Distributed priority queue that maintains item locality |
US8768768B1 (en) * | 2007-09-05 | 2014-07-01 | Google Inc. | Visitor profile modeling |
US8839088B1 (en) | 2007-11-02 | 2014-09-16 | Google Inc. | Determining an aspect value, such as for estimating a characteristic of online entity |
US11023513B2 (en) | 2007-12-20 | 2021-06-01 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US10643611B2 (en) | 2008-10-02 | 2020-05-05 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11348582B2 (en) | 2008-10-02 | 2022-05-31 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US20180218735A1 (en) * | 2008-12-11 | 2018-08-02 | Apple Inc. | Speech recognition involving a mobile device |
WO2010067118A1 (en) | 2008-12-11 | 2010-06-17 | Novauris Technologies Limited | Speech recognition involving a mobile device |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US10607141B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10984326B2 (en) | 2010-01-25 | 2021-04-20 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10984327B2 (en) | 2010-01-25 | 2021-04-20 | New Valuexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US11410053B2 (en) | 2010-01-25 | 2022-08-09 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10607140B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US10692504B2 (en) | 2010-02-25 | 2020-06-23 | Apple Inc. | User profiling for voice input processing |
US9104484B2 (en) * | 2010-04-21 | 2015-08-11 | Salesforce.Com, Inc. | Methods and systems for evaluating bytecode in an on-demand service environment including translation of apex to bytecode |
US20110265066A1 (en) * | 2010-04-21 | 2011-10-27 | Salesforce.Com | Methods and systems for evaluating bytecode in an on-demand service environment including translation of apex to bytecode |
US9996323B2 (en) | 2010-04-21 | 2018-06-12 | Salesforce.Com, Inc. | Methods and systems for utilizing bytecode in an on-demand service environment including providing multi-tenant runtime environments and systems |
US10452363B2 (en) | 2010-04-21 | 2019-10-22 | Salesforce.Com, Inc. | Methods and systems for evaluating bytecode in an on-demand service environment including translation of apex to bytecode |
US20120016891A1 (en) * | 2010-07-16 | 2012-01-19 | Jiri Pechanec | Hierarchical registry federation |
US8725765B2 (en) * | 2010-07-16 | 2014-05-13 | Red Hat, Inc. | Hierarchical registry federation |
US8407245B2 (en) * | 2010-11-24 | 2013-03-26 | Microsoft Corporation | Efficient string pattern matching for large pattern sets |
US20120130983A1 (en) * | 2010-11-24 | 2012-05-24 | Microsoft Corporation | Efficient string pattern matching for large pattern sets |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US10417405B2 (en) | 2011-03-21 | 2019-09-17 | Apple Inc. | Device access using voice authentication |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11350253B2 (en) | 2011-06-03 | 2022-05-31 | Apple Inc. | Active transport based notifications |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US11069336B2 (en) | 2012-03-02 | 2021-07-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US8583659B1 (en) * | 2012-07-09 | 2013-11-12 | Facebook, Inc. | Labeling samples in a similarity graph |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US20140149464A1 (en) * | 2012-11-29 | 2014-05-29 | International Business Machines Corporation | Tree traversal in a memory device |
US9064030B2 (en) * | 2012-11-29 | 2015-06-23 | International Business Machines Corporation | Tree traversal in a memory device |
US20140163981A1 (en) * | 2012-12-12 | 2014-06-12 | Nuance Communications, Inc. | Combining Re-Speaking, Partial Agent Transcription and ASR for Improved Accuracy / Human Guided ASR |
US9117450B2 (en) * | 2012-12-12 | 2015-08-25 | Nuance Communications, Inc. | Combining re-speaking, partial agent transcription and ASR for improved accuracy / human guided ASR |
US20140298243A1 (en) * | 2013-03-29 | 2014-10-02 | Alcatel-Lucent Usa Inc. | Adjustable gui for displaying information from a database |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10769385B2 (en) | 2013-06-09 | 2020-09-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US11048473B2 (en) | 2013-06-09 | 2021-06-29 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US11314370B2 (en) | 2013-12-06 | 2022-04-26 | Apple Inc. | Method for extracting salient dialog usage from live data |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US10657966B2 (en) | 2014-05-30 | 2020-05-19 | Apple Inc. | Better resolution when referencing to concepts |
US10714095B2 (en) | 2014-05-30 | 2020-07-14 | Apple Inc. | Intelligent assistant for home automation |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10417344B2 (en) | 2014-05-30 | 2019-09-17 | Apple Inc. | Exemplar-based natural language processing |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US10699717B2 (en) | 2014-05-30 | 2020-06-30 | Apple Inc. | Intelligent assistant for home automation |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10438595B2 (en) | 2014-09-30 | 2019-10-08 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10390213B2 (en) | 2014-09-30 | 2019-08-20 | Apple Inc. | Social reminders |
US10453443B2 (en) | 2014-09-30 | 2019-10-22 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US11231904B2 (en) | 2015-03-06 | 2022-01-25 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10529332B2 (en) | 2015-03-08 | 2020-01-07 | Apple Inc. | Virtual assistant activation |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US11127397B2 (en) | 2015-05-27 | 2021-09-21 | Apple Inc. | Device voice control |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10356243B2 (en) | 2015-06-05 | 2019-07-16 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10354652B2 (en) | 2015-12-02 | 2019-07-16 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10942702B2 (en) | 2016-06-11 | 2021-03-09 | Apple Inc. | Intelligent device arbitration and control |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10580409B2 (en) | 2016-06-11 | 2020-03-03 | Apple Inc. | Application integration with a digital assistant |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10553215B2 (en) | 2016-09-23 | 2020-02-04 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10332518B2 (en) | 2017-05-09 | 2019-06-25 | Apple Inc. | User interface for correcting recognition errors |
US10755703B2 (en) | 2017-05-11 | 2020-08-25 | Apple Inc. | Offline personal assistant |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10847142B2 (en) | 2017-05-11 | 2020-11-24 | Apple Inc. | Maintaining privacy of personal information |
US10789945B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Low-latency intelligent automated assistant |
US10410637B2 (en) | 2017-05-12 | 2019-09-10 | Apple Inc. | User-specific acoustic models |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10482874B2 (en) | 2017-05-15 | 2019-11-19 | Apple Inc. | Hierarchical belief states for digital assistants |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US11217255B2 (en) | 2017-05-16 | 2022-01-04 | Apple Inc. | Far-field extension for digital assistant services |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US10684703B2 (en) | 2018-06-01 | 2020-06-16 | Apple Inc. | Attention aware virtual assistant dismissal |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11495218B2 (en) | 2018-06-01 | 2022-11-08 | Apple Inc. | Virtual assistant operation in multi-device environments |
US10403283B1 (en) | 2018-06-01 | 2019-09-03 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10984798B2 (en) | 2018-06-01 | 2021-04-20 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US11009970B2 (en) | 2018-06-01 | 2021-05-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
US10944859B2 (en) | 2018-06-03 | 2021-03-09 | Apple Inc. | Accelerated task performance |
Also Published As
Publication number | Publication date |
---|---|
EP1738291A1 (en) | 2007-01-03 |
WO2005103951A1 (en) | 2005-11-03 |
US20060004721A1 (en) | 2006-01-05 |
Similar Documents
Publication | Title |
---|---|
US7403941B2 (en) | System, method and technique for searching structured databases |
KR101543992B1 (en) | Intra-language statistical machine translation |
JP4888996B2 (en) | Conversation control device |
US7831428B2 (en) | Speech index pruning |
US8090738B2 (en) | Multi-modal search wildcards |
US7636657B2 (en) | Method and apparatus for automatic grammar generation from data entries |
US10019514B2 (en) | System and method for phonetic search over speech recordings |
US7415406B2 (en) | Speech recognition apparatus, speech recognition method, conversation control apparatus, conversation control method, and programs for therefor |
Bacchiani et al. | Fast vocabulary-independent audio search using path-based graph indexing |
US20160275196A1 (en) | Semantic search apparatus and method using mobile terminal |
US8805686B2 (en) | Melodis crystal decoder method and device for searching an utterance by accessing a dictionary divided among multiple parallel processors |
US20060265222A1 (en) | Method and apparatus for indexing speech |
US20080281806A1 (en) | Searching a database of listings |
US7401019B2 (en) | Phonetic fragment search in speech data |
WO2003010754A1 (en) | Speech input search system |
JP2007115145A (en) | Conversation controller |
JP4289715B2 (en) | Speech recognition apparatus, speech recognition method, and tree structure dictionary creation method used in the method |
JP2015125499A (en) | Voice interpretation device, voice interpretation method, and voice interpretation program |
US9704482B2 (en) | Method and system for order-free spoken term detection |
Pan et al. | Analytical comparison between position specific posterior lattices and confusion networks based on words and subword units for spoken document indexing |
TWI731921B (en) | Speech recognition method and device |
JP2011014130A (en) | Method for converting set of words to corresponding set of particles |
JP4521631B2 (en) | Storage medium recording tree structure dictionary and language score table creation program for tree structure dictionary |
Švec et al. | Semantic entity detection from multiple ASR hypotheses within the WFST framework |
Liu et al. | The effect of pruning and compression on graphical representations of the output of a speech recognizer |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: NOVAURIS LABORATORIES UK, LTD., UNITED KINGDOM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BEDWORTH, MARK D.;COOK, GARY D.;REEL/FRAME:016807/0588 Effective date: 20050822 |
AS | Assignment | Owner name: NOVAURIS TECHNOLOGIES LTD, UNITED KINGDOM Free format text: CHANGE OF NAME;ASSIGNOR:NOVAURIS LABORATORIES UK LTD;REEL/FRAME:016843/0802 Effective date: 20041115 |
AS | Assignment | Owner name: NOVAURIS TECHNOLOGIES LIMITED, UNITED KINGDOM Free format text: CHANGE OF NAME;ASSIGNOR:NOVAURIS LABORATORIES UK LIMITED;REEL/FRAME:018041/0287 Effective date: 20041115 |
AS | Assignment | Owner name: NOVAURIS TECHNOLOGIES LTD, UNITED KINGDOM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:COOK, GARY D;BEDWORTH, MARK D;REEL/FRAME:018307/0294;SIGNING DATES FROM 20060920 TO 20060921 |
STCF | Information on status: patent grant | Free format text: PATENTED CASE |
FPAY | Fee payment | Year of fee payment: 4 |
AS | Assignment | Owner name: APPLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVAURIS TECHNOLOGIES LIMITED;REEL/FRAME:034093/0772 Effective date: 20131030 |
FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
FPAY | Fee payment | Year of fee payment: 8 |
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |