US20170286837A1 - Method of automated discovery of new topics - Google Patents

Method of automated discovery of new topics

Info

Publication number
US20170286837A1
Authority
US
United States
Prior art keywords
topic
computer
topics
new
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/489,560
Inventor
Scott Lightner
Franz Weckesser
Sanjay BODDHU
Robert FLAGG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qbase LLC
Original Assignee
Qbase LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qbase LLC filed Critical Qbase LLC
Priority to US15/489,560 priority Critical patent/US20170286837A1/en
Assigned to Qbase, LLC reassignment Qbase, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BODDHU, SANJAY, FLAGG, ROBERT, LIGHTNER, SCOTT, WECKESSER, FRANZ
Publication of US20170286837A1 publication Critical patent/US20170286837A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • G06F16/285Clustering or classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/93Document management systems
    • G06F17/20
    • G06F17/2705
    • G06F17/30011
    • G06F17/30598
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N99/005

Abstract

The present disclosure relates to a method for performing automated discovery of new topics from unlimited documents related to any subject domain, employing a multi-component extension of Latent Dirichlet Allocation (MC-LDA) topic models to discover related topics in a corpus. The resulting data may contain millions of term vectors from any subject domain identifying the most distinguished co-occurring topics that users may be interested in, for periodically building new topic ID models using new content. These models may be compared, one by one, with the existing model to measure the significance of changes, using term vector differences having no correlation with a Periodic New Model, for periodic updates of automated discovery of new topics. The results may be used to build a new topic ID model in-memory database that allows query-time linking on massive data sets for automated discovery of new topics.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 14/919,631, entitled “Method of Automated Discovery of New Topics,” filed Oct. 21, 2015, which is a continuation of U.S. patent application Ser. No. 14/873,635, entitled “Method of Automated Discovery of New Topics,” filed Oct. 2, 2015, which is a continuation of U.S. patent application Ser. No. 14/558,076, entitled “Method for Automated Discovery of New Topics,” filed on Dec. 2, 2014, now U.S. Pat. No. 9,177,262, issued on Nov. 3, 2015, which is a non-provisional patent application that claims the benefit of U.S. Provisional Application No. 61/910,763, entitled “Method for Automated Discovery of New Topics,” filed Dec. 2, 2013, each of which is hereby incorporated by reference herein in its entirety.
  • This application is related to U.S. application Ser. No. 14/557,794, entitled “Method for Disambiguating Features in Unstructured Text,” filed Dec. 2, 2014; U.S. application Ser. No. 14/558,300, entitled “Event Detection Through Text Analysis Using Trained Event Template Models,” filed Dec. 2, 2014; and U.S. application Ser. No. 14/557,906, entitled “Method of Automated Discovery of Topic Relatedness,” filed Dec. 2, 2014; each of which is hereby incorporated by reference in its entirety.
  • FIELD OF THE DISCLOSURE
  • The present disclosure relates in general to data storage and more specifically to a method for performing automated discovery of new topics in a corpus.
  • BACKGROUND
  • As storage and availability of data grows, a large amount of time is spent identifying data relationships for discovery of new topics. Conventionally, the discovery of new topics is oftentimes performed manually through repetitive work, wasting users' valuable time.
  • Information can have great value. Assembling and maintaining a database to store information involves real costs, such as the costs to acquire information, the costs associated with physical assets used to house, secure, and make the information available, and labor costs to manage the information.
  • As computer processors are becoming more powerful, it would be particularly useful to save the time that an individual conventionally spends discovering new topics and identifying relationship criteria with existing models, or between the source and the target.
  • Oftentimes there are simple transformations, or complex topic identifications across a large corpus of documents from any subject domain, that require a significant amount of a user's time for discovery of relationships associated with existing data.
  • Thus, there is a need for a simple and flexible method which assists users in connection with performing automated discovery of new topics, employing a new topic database for comparison with the existing topics for new application environments.
  • SUMMARY
  • Embodiments of the present disclosure provide a method for performing automated discovery of new topics from unlimited documents related to any subject domain, employing a multi-component extension of Latent Dirichlet Allocation (MC-LDA) topic models, to discover related topics in a corpus. The resulting data may contain millions of term vectors from any subject domain identifying the most distinguished co-occurring topics that users may be interested in, which may be employed to create a Master Topic Model.
  • In accordance with one aspect of the present disclosure, the method for automated discovery of new topics may include multiple topic identification models with different numbers of term vectors and other parameters. For example, a topic identification model with 64 term vectors may provide a broader topic scope, while models with 256, 1024, or 16K term vectors may provide more specific, fine-grained topics.
  • According to another embodiment, new data may contain a large number of entities/topics in a database, which may be used periodically to parse and extract data from topics that users may be interested in. This method may identify term vectors for change detection, using term vector differences having no correlation with the Master Topic Model to compare and measure the significance of these changes based on established thresholds, identifying the similarity of the topics found by comparing them one by one with topics from the Periodic New Model.
  • The present disclosure may provide a method for automated discovery of new topics in a corpus, using new content and comparing it to the existing model for periodically building a new topic ID model database compressed into the smallest memory footprint possible, providing fuzzy indexing to allow query-time linking on massive data sets, and providing reliability and fault-tolerance through data, software, and hardware redundancy.
  • In one embodiment, a method comprises automatically extracting, by a database source computer, from a document corpus, data associated with a plurality of co-occurring topics; in response to automatically extracting the plurality of co-occurring topics, extracting, by a synchronizing framework computer, a plurality of topic identifiers from the plurality of co-occurring topics; creating, by the synchronizing framework computer, a master topic computer model for the document corpus from a first plurality of term vectors; creating, by the synchronizing framework computer, a periodic new topic computer model by comparing topic significance among the plurality of topic identifiers, the periodic new topic computer model including a second plurality of term vectors; and selecting, by the synchronizing framework computer, one or more new topics by identifying one or more term vectors from the second plurality of term vectors in the periodic new topic computer model that have no correlation with the first plurality of term vectors in the master topic computer model.
  • In another embodiment, a system comprises a database source computer module configured to extract data associated with a plurality of co-occurring topics in a document corpus; and a synchronizing framework computer module configured to: (a) extract a plurality of topic identifiers from the plurality of co-occurring topics; (b) create a master topic computer model for the document corpus from a first plurality of term vectors; (c) create a periodic new topic computer model by comparing topic significance among the plurality of topic identifiers, the periodic new topic computer model including a second plurality of term vectors; and (d) select one or more new topics by identifying one or more term vectors from the second plurality of term vectors in the periodic new topic computer model that have no correlation with the first plurality of term vectors in the master topic computer model.
  • In another embodiment, a non-transitory computer readable medium having stored thereon computer executable instructions executed by a processor comprises automatically extracting, by a processor executing a database source computer module, from a document corpus, data associated with a plurality of co-occurring topics; in response to automatically extracting the plurality of co-occurring topics, extracting, by the processor executing a synchronizing framework computer module, a plurality of topic identifiers from the plurality of co-occurring topics; creating, by the processor executing the synchronizing framework computer, a master topic computer model for the document corpus from a first plurality of term vectors; creating, by the processor executing the synchronizing framework computer, a periodic new topic computer model by comparing topic significance among the plurality of topic identifiers, the periodic new topic computer model including a second plurality of term vectors; and selecting, by the processor executing the synchronizing framework computer, one or more new topics by identifying one or more term vectors from the second plurality of term vectors in the periodic new topic computer model that have no correlation with the first plurality of term vectors in the master topic computer model.
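  • As an illustrative aid only (not part of the original disclosure), the following sketch approximates the summarized method in Python, using scikit-learn's Latent Dirichlet Allocation as a stand-in topic model: a master model is built from the existing corpus, a periodic model from new content, and periodic topic-term vectors with no strong cosine correlation to any master topic are selected as new topics. The threshold, topic counts, and library choice are assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def topic_term_vectors(docs, vectorizer, n_topics):
    """Fit an LDA model and return one topic-term vector (row) per topic."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(vectorizer.transform(docs))
    return lda.components_


def discover_new_topics(corpus_docs, new_docs, n_topics=16, threshold=0.3):
    # Shared vocabulary so master and periodic term vectors are comparable.
    vectorizer = CountVectorizer(stop_words="english").fit(corpus_docs + new_docs)
    master = topic_term_vectors(corpus_docs, vectorizer, n_topics)    # master topic model
    periodic = topic_term_vectors(new_docs, vectorizer, n_topics)     # periodic new model
    # Best correlation of each periodic topic with any master topic.
    best = cosine_similarity(periodic, master).max(axis=1)
    new_ids = np.flatnonzero(best < threshold)      # "no correlation" -> candidate new topics
    updated_master = np.vstack([master, periodic[new_ids]])
    return new_ids, updated_master
```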
  • Numerous other aspects, features, and benefits of the present disclosure may be made apparent from the following detailed description taken together with the drawing features.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure can be better understood by referring to the following figures. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure. In the figures, reference numerals designate corresponding parts throughout the different views.
  • FIG. 1 is a diagram illustrating a system for automated discovery of new topics, according to an exemplary embodiment.
  • FIG. 2 is an exemplary flowchart of a computer executed method for automated discovery of new topics, according to an exemplary embodiment.
  • FIG. 3 is a diagram illustrating an embodiment of a directed graphical representation of a multi-component, conditionally-independent Latent Dirichlet Allocation (MC-LDA) topic model executed by one or more special purpose computer modules of FIG. 1, according to an exemplary embodiment.
  • DETAILED DESCRIPTION
  • The present disclosure is here described in detail with reference to embodiments illustrated in the drawings, which form a part hereof. Other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the present disclosure. The illustrative embodiments described in the detailed description are not meant to be limiting of the subject matter presented herein.
  • Glossary of Terms
  • As used herein, the following terms have the following definitions:
  • “Parse” refers to analyzing the source code of a computer program to make sure that it is structurally correct before it is compiled and turned into machine code.
  • “Term vector” refers to an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers, such as, for example, index terms. It is used in information filtering, information retrieval, indexing, and relevancy rankings.
  • “Database” refers to any system including any combination of clusters and modules suitable for storing one or more collections and suitable to process one or more queries.
  • “Document” refers to a discrete electronic representation of information having a start and end.
  • “Multi-Document” refers to a document with its tokens, different types of named entities, and key phrases organized into separate “bag-of-surface-forms” components.
  • “Corpus” refers to a collection of one or more documents.
  • “Feature” refers to any information which is at least partially derived from a document.
  • “Cluster” refers to a collection of features.
  • “Memory” refers to any hardware component suitable for storing information and retrieving said information at a sufficiently high speed.
  • “Module” refers to a computer software and/or hardware component suitable for carrying out one or more defined tasks.
  • “Topic” refers to a set of thematic information which is at least partially derived from a corpus.
  • “Query” refers to a request to retrieve information from one or more suitable databases.
  • Description of Exemplary Embodiments
  • Various aspects of the present disclosure describe a system and method for automated discovery of new topics in a corpus based on a concept of co-occurring topics from different pre-built topic models. These different topic models are built with different levels of granularity of topics, vocabulary, and converging parameters, thus providing a vertical hierarchy/scalability over a specific domain of interest. Embodiments of the present disclosure extend conventional LDA topic modeling to support multi-component LDA, where each component is treated as conditionally-independent, given document topic proportions. These components can include features like terms, key phrases, entities, and facts, among others. Thus, this approach provides a concept of horizontal scalability of the topic models over a specific domain. The combination of the vertical vocabulary and horizontal feature selection in the pre-built topic models provides varied dimensions of co-occurring topics, which, on appropriate clustering and differential training via an in-memory database (MEMDB), can produce new topics. These new topics would not exist in the pre-built topic models to begin with, but could be discovered by running the documents in parallel across all the pre-built topic models.
  • Embodiments of the present disclosure describe a computer executed method for automated discovery of new topics that may facilitate the automated determination of relationships of corresponding term vectors from any subject domain identifying the most distinguished co-occurring topics that users may be interested in, which may be employed to create a Master Topic Model.
  • According to an embodiment, a term vector component may be a search component configured to return information about documents. In the term vector space model of information retrieval, the documents are modeled as vectors in a high-dimensional space of millions of terms. The terms are derived from words and phrases in the document, which are weighted by their importance within the document and within the corpus of documents. Each document's vector seeks to represent the document in a “vector space,” allowing comparison with vectors derived from other sources, for example, queries, or other documents. Term vectors may be used as the basis of successful algorithms for document ranking, document filtering, document clustering, and relevance feedback.
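  • For illustration, a minimal sketch of this term vector representation might use TF-IDF weighting and cosine similarity for ranking; the tiny corpus, variable names, and library choice below are assumptions for demonstration, not taken from the disclosure.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "central banks raise interest rates to curb inflation",
    "the team won the championship after a dramatic final",
    "inflation data pushed bond yields higher this quarter",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)        # one weighted term vector per document

query_vector = vectorizer.transform(["rising inflation and interest rates"])
scores = cosine_similarity(query_vector, doc_vectors)[0]

# Rank documents by similarity to the query, as in document ranking/filtering.
for idx in scores.argsort()[::-1]:
    print(f"doc {idx}: score={scores[idx]:.3f}")
```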
  • The embodiments recite a procedure for automated discovery of new topics in a corpus based on a concept of co-occurring topics from different pre-built topic models. These different topic models are built with different levels of granularity of topics, vocabulary, and converging parameters, thereby providing a vertical hierarchy/scalability over a specific domain of interest. The embodiments can extend LDA topic modeling to support multi-component LDA, where each component is treated as conditionally-independent, given document topic proportions. These components can include features, such as terms, key phrases, entities, facts, etc. Thus, this approach can provide a concept of horizontal scalability of the topic models over a specific domain. The combination of the vertical vocabulary and horizontal feature selection in the pre-built topic models provides varied dimensions of co-occurring topics, which on appropriate clustering and differential training via an in-memory database can produce new topics. These new topics would not exist in the pre-built topic models, to begin with, but could be discovered by running the documents in parallel across all the pre-built topic models.
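  • The multi-component, conditionally-independent structure can be sketched with a small collapsed Gibbs sampler in which each component (e.g., terms, entities, key phrases) keeps its own topic-term counts while all components share a single set of document-topic counts, so topics are conditioned on document topic proportions common to all components. This is a simplified illustration under assumed hyperparameters and a toy data layout, not the disclosure's implementation.

```python
import numpy as np


def mc_lda_gibbs(docs, vocab_sizes, K, alpha=0.1, eta=0.01, iters=200, seed=0):
    """docs: list of documents; each document is a list (one entry per component j)
    of token-id lists drawn from the jth vocabulary."""
    rng = np.random.default_rng(seed)
    J, D = len(vocab_sizes), len(docs)
    ndk = np.zeros((D, K))                               # shared doc-topic counts
    nkw = [np.zeros((K, V)) for V in vocab_sizes]        # per-component topic-term counts
    nk = [np.zeros(K) for _ in vocab_sizes]              # per-component topic totals
    z = [[rng.integers(K, size=len(comp)) for comp in doc] for doc in docs]

    for d, doc in enumerate(docs):                       # initialize counts
        for j, comp in enumerate(doc):
            for n, w in enumerate(comp):
                k = z[d][j][n]
                ndk[d, k] += 1; nkw[j][k, w] += 1; nk[j][k] += 1

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for j, comp in enumerate(doc):
                V = vocab_sizes[j]
                for n, w in enumerate(comp):
                    k = z[d][j][n]                       # remove current assignment
                    ndk[d, k] -= 1; nkw[j][k, w] -= 1; nk[j][k] -= 1
                    p = (ndk[d] + alpha) * (nkw[j][:, w] + eta) / (nk[j] + V * eta)
                    k = rng.choice(K, p=p / p.sum())     # resample topic for this token
                    z[d][j][n] = k
                    ndk[d, k] += 1; nkw[j][k, w] += 1; nk[j][k] += 1

    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    phis = [(nkw[j] + eta) / (nkw[j] + eta).sum(axis=1, keepdims=True) for j in range(J)]
    return theta, phis


if __name__ == "__main__":
    # Two toy documents, each with two components (e.g., terms and entities).
    docs = [[[0, 1, 1, 2], [0, 0]], [[2, 3, 3], [1, 1, 0]]]
    theta, phis = mc_lda_gibbs(docs, vocab_sizes=[4, 2], K=2, iters=50)
    print(theta)
```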
  • A System for Automated Discovery of New Topics
  • FIG. 1 illustrates a simplified block diagram of a system architecture 100 configured for automated discovery of new topics from millions of documents related to any subject domain, utilizing a Multi-Component Latent Dirichlet Allocation (MC-LDA) topic computer model, or a similar suitable process, to discover related topics in a corpus for periodically building new topic ID models, using new content and comparing it to the existing model.
  • In accordance with one aspect of the present disclosure, the system for automated discovery of new topics may include one or more central servers having a plurality of special purpose software and hardware computer modules, including a database source module 102 which may contain a large number of entities/topics that users may be interested in. The resulting data may contain a large number of term vectors from any subject domain identifying the most distinguished co-occurring topics that users may be interested in, which may be employed to implement a Master Topic Model computer module 104.
  • Although the system architecture 100 includes a single database source module 102 and a single destination in-memory database module 112, it is to be understood and appreciated that the novel functionality of a system and method for automatic discovery of new topics may be employed with any number of sources and/or destination components, which may be remotely located and accessed.
  • Embodiments of the present disclosure may be directed to a system and method for automated discovery of new topics, which may include multiple topic identification models with different numbers of term vectors and other parameters. For example, a topic identification model with 64 term vectors may provide a broader topic scope, while models with 256, 1024, or 16K term vectors may provide more specific, fine-grained topics. Each topic or document may be analyzed for co-occurring topics across models to discover related topics characterized by a particular set of term vectors, making each individual word exchangeable and providing good probabilities of generating new term vectors, thereby facilitating the automated discovery of new topics.
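  • A hedged sketch of this multi-granularity idea follows: pre-built topic identification models at several granularities (the counts mirror the 64/256/1024 example above and would be smaller in a toy setting) are applied to the same documents, and each document's dominant topic under each model is recorded as a tuple of co-occurring topic identifiers. The use of scikit-learn's LDA and the specific counts are assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer


def cooccurring_topics(raw_docs, topic_counts=(64, 256, 1024)):
    """Return pre-built models at several granularities plus, for each document,
    the tuple of dominant topic IDs (one per granularity)."""
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(raw_docs)
    models = [LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
              for k in topic_counts]
    per_doc = []
    for row in X:                                   # each row is a 1 x n_terms matrix
        ids = tuple(int(m.transform(row).argmax()) for m in models)
        per_doc.append(ids)                         # e.g. (broad_id, mid_id, fine_id)
    return models, per_doc
```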
  • According to principles of the present disclosure, the system and method for automated discovery of new topics may periodically use new data in the database source module 102 to select data of interest or item features.
  • This information may be used periodically to parse and extract data from topics that users may be interested in, and to compare the term vectors of Master Topic Model module 104 with the term vectors of Periodic New Model module 106, identifying those having no correlation, employing a Detector of Term Vector Differences module 108. The system measures the significance of the changes by comparing each term vector one by one, selecting the more specific term vectors that do not correlate with, or have similarities to, Master Topic Model 104, employing any suitable method for this type of comparison.
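  • A minimal sketch of the Detector of Term Vector Differences module 108, assuming cosine similarity as the comparison method (the disclosure leaves the method open): each term vector of the Periodic New Model is compared one by one against every Master Topic Model vector, and vectors whose best similarity stays below an assumed threshold are treated as having no correlation, i.e., as candidate new topics.

```python
import numpy as np


def detect_new_topic_vectors(master_vectors, periodic_vectors, threshold=0.3):
    """Both arguments are 2-D arrays with one topic-term vector per row."""
    def normalize(m):
        norms = np.linalg.norm(m, axis=1, keepdims=True)
        return m / np.clip(norms, 1e-12, None)

    similarities = normalize(periodic_vectors) @ normalize(master_vectors).T
    best_match = similarities.max(axis=1)            # best correlation with any master topic
    return np.flatnonzero(best_match < threshold)    # indices of candidate new topics
```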
  • An embodiment of the present disclosure may include a synchronization framework computer module 110, which may be a framework of data collection interfaces that may communicate with database source computer module 102 and pull data items that may contain information relevant to a project. Employing this process may generate a new set of topics, producing from zero to an unlimited number of topics, which may be added to Master Topic Model 104 for periodic updates of automated discovery of new topics in a corpus, using the new content and comparing it to the existing model for periodic building of the new topic ID model in-memory database 112. The in-memory database 112 may be compressed into the smallest memory footprint possible for providing fuzzy indexing to allow query-time linking on massive data sets, providing reliability and fault tolerance through data for automated discovery of new topics in a corpus.
  • The actual software code or specialized control hardware used to implement these systems, modules, and methods is not limiting of the invention. Thus, the operation and behavior of the systems, modules, and methods were described without reference to the specific software code, it being understood that software and control hardware may be designed to implement the systems, modules, and methods based on the description herein.
  • A Method for Automated Discovery of New Topics
  • FIG. 2 illustrates a flowchart 200 of an embodiment of the methodology for automated discovery of new topics in accordance with one aspect of the present disclosure. For purposes of simplicity of explanation, one or more methodologies shown in the form of a flowchart may be described as a series of steps. It is to be understood and appreciated that the subject disclosure is not limited by the order of the steps, as some steps may, in accordance with the present disclosure, occur in a different order and/or concurrently with other steps shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology may alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the present disclosure.
  • As may be seen in FIG. 2, the method for automatic discovery of new topics may initiate data extraction in step 202, which may be configured to allow for custom entity extraction workflows for automated discovery of new topics. In an embodiment, a database source module 102 may be used to parse and extract data, in step 204, of the most distinguished concurring topics that a user may be interested in, employing LDA or a similar suitable method to discover topics in a corpus, which, in step 206, may be employed by the synchronizing framework module 110 (FIG. 1) to create a Master Topic Model. Term vectors may be used as the basis of successful algorithms for document ranking and filtering.
  • In step 208, the method may periodically run a new set of data to select topics of interest from a very large collection of co-occurring entities extracted from a document corpus of the targeted domain. This new data may be analyzed to discover relationships between data elements. In addition, topic identifiers may be extracted to improve precision for creation of a Periodic New Model in step 210, using the Detector of Term Vector Differences module 108 of the synchronizing framework module 110 to compare and measure the significance of topics based on established thresholds, for periodically building new topic ID models using new content to identify the similarity of topics found. In step 212, term vectors from the Periodic New Model having no correlation with term vectors of the Master Topic Model are identified, where all term vectors are compared one by one with topics from the Master Topic Model. In step 214, these differences may be used for change detection of term vector differences.
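  • One possible reading of the topic significance comparison of step 210, offered only as an assumption: a topic's significance is taken to be its share of the document-topic mass in the new data, and only topics above an established threshold proceed to the term vector comparison of step 212.

```python
import numpy as np


def significant_topics(doc_topic, min_share=0.02):
    """doc_topic: (n_docs, n_topics) matrix of per-document topic proportions,
    e.g. the output of an LDA transform over the new data."""
    share = doc_topic.sum(axis=0) / doc_topic.sum()
    return np.flatnonzero(share >= min_share), share
```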
  • The next step 216 involves the addition of the selected topics to the Master Topic Model, which, in step 218, may be used to periodically build a new topic ID model compressed into the smallest memory footprint possible, configured to fit into the in-memory database 112. In embodiments, the in-memory database 112 may have advanced searching and embedded record linking capabilities to provide fuzzy indexing, matching and match scores, and non-exclusionary searching, to provide in-database analytics and to allow query-time linking on massive data sets for automated discovery of new topics.
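  • The fuzzy indexing and non-exclusionary searching of the in-memory database 112 could, for example, be approximated with character trigram "fuzzy keys" and overlap-based match scores; the keying and scoring scheme below is purely illustrative and is not specified in the disclosure.

```python
from collections import defaultdict


def fuzzy_keys(text, n=3):
    """Reduce a topic label to a set of character n-gram keys."""
    s = "".join(ch for ch in text.lower() if ch.isalnum())
    return {s[i:i + n] for i in range(max(1, len(s) - n + 1))}


def build_fuzzy_index(topic_labels):
    """topic_labels: dict mapping topic_id -> human-readable label."""
    index = defaultdict(set)
    for topic_id, label in topic_labels.items():
        for key in fuzzy_keys(label):
            index[key].add(topic_id)
    return index


def fuzzy_lookup(index, query, topic_labels):
    """Non-exclusionary lookup: return every candidate sharing a key, scored
    by Jaccard overlap of fuzzy keys (a simple stand-in for match scores)."""
    q = fuzzy_keys(query)
    candidates = set().union(*(index.get(k, set()) for k in q)) if q else set()
    scored = []
    for topic_id in candidates:
        keys = fuzzy_keys(topic_labels[topic_id])
        scored.append((topic_id, len(q & keys) / len(q | keys)))
    return sorted(scored, key=lambda pair: -pair[1])
```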
  • FIG. 3 illustrates an embodiment of a multi-component, conditionally-independent Latent Dirichlet Allocation (MC-LDA) topic model executed by a special purpose computer module, such as the Topic Model modules 104, 106 discussed above in connection with FIG. 1, and initialized in accordance with the parameters set forth below. In the illustrated embodiment, the MC-LDA model computer module provides a computer executed framework for horizontal scalability to add different components based on varied features, including entities, facts, key phrases, and terms.
  • In FIG. 3,
    • J=number of multi-document components.
    • V(j)=number of terms in the vocabulary of the jth component.
    • D=number of documents.
    • N=number of tokens in (a component of) a document (actually depends on both j and d).
    • K=number of topics.
    • α=hyperparameter on the mixing proportions (K-vector or scalar if symmetric).
    • η(j)=hyperparameter on the mixture proportions (V(j)-vector or scalar if symmetric).
    • θd=the topic mixture proportion for document d.
    • φk(j)=mixture component for the jth component of the kth topic.
    • zd,n(j)=mixture indicator that chooses the topic for the nth word in the jth component of document d.
    • ωd,n(j)=term indicator for the nth word in the jth component of document d.
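  • Using the notation above, the MC-LDA generative process can be written compactly. The following formulation is a standard multi-component LDA statement consistent with the listed parameters, provided as an interpretive aid rather than as text from the original disclosure:

```latex
% Generative process of multi-component, conditionally-independent LDA (MC-LDA)
\begin{aligned}
&\theta_d \sim \operatorname{Dirichlet}(\alpha) && d = 1,\dots,D \\
&\phi_k^{(j)} \sim \operatorname{Dirichlet}\!\bigl(\eta^{(j)}\bigr) && j = 1,\dots,J,\quad k = 1,\dots,K \\
&z_{d,n}^{(j)} \mid \theta_d \sim \operatorname{Categorical}(\theta_d) && \text{for each token } n \text{ of component } j \text{ in document } d \\
&\omega_{d,n}^{(j)} \mid z_{d,n}^{(j)} \sim \operatorname{Categorical}\!\bigl(\phi_{z_{d,n}^{(j)}}^{(j)}\bigr) &&
\end{aligned}
```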
  • The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art the steps in the foregoing embodiments may be performed in any order. Words such as “then,” “next,” etc. may not be intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function.
  • The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
  • Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
  • The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein.
  • When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a computer-readable or processor-readable storage medium. A non-transitory computer-readable or processor-readable media includes both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. A non-transitory processor-readable storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
  • The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed.

Claims (20)

What is claimed is:
1. A computer-implemented method comprising:
extracting, by a computer, from a document corpus, data associated with a plurality of co-occurring topics;
in response to extracting the data associated with the plurality of co-occurring topics, extracting, by the computer, a plurality of topic identifiers from the plurality of co-occurring topics;
generating, by the computer, a periodic topic model comprising a set of one or more term vectors by comparing topic significance among the plurality of topic identifiers;
periodically creating, by the computer, new topic ID models using data content in the periodic topic model by identifying a similarity of topics;
generating, by the computer, fuzzy keys for the new topic ID models; and
indexing, by the computer, data in the new topic ID models based on the fuzzy keys for non-exclusionary searching and automated discovery of new topics.
2. The computer-implemented method of claim 1, further comprising identifying, by the computer, in one or more document corpora of a data source, a topic of interest based upon one or more concurring topics identified in the one or more document corpora.
3. The computer-implemented method of claim 2, further comprising automatically extracting, by the computer, from the document corpus, data associated with the plurality of co-occurring topics based on the topic of interest.
4. The computer-implemented method of claim 1, further comprising determining, by the computer, a relationship of corresponding term vectors from the plurality of co-occurring topics, each co-occurring topic of the plurality of co-occurring topics containing one or more term vectors.
5. The computer-implemented method of claim 4, further comprising generating, by the computer, a master topic computer model comprising a first set of one or more term vectors identified in text of the document corpus upon determining the relationship of the corresponding term vectors from the plurality of co-occurring topics.
6. The computer-implemented method of claim 5, further comprising selecting, by the computer, one or more new topics by identifying one or more term vectors from the set of the one or more term vectors in the periodic new topic computer model that have no correlation with the first set of one or more term vectors in the master topic computer model.
7. The computer-implemented method of claim 5, further comprising adding, via the computer, the one or more new topics to the master topic computer model.
8. The computer-implemented method of claim 1, wherein comparing the topic significance among the plurality of topic identifiers is based on a predetermined significance threshold.
9. The computer-implemented method of claim 5, wherein the master topic computer model is a multi-component extension of a Latent Dirichlet Allocation (MC-LDA) topic model.
10. The computer-implemented method of claim 1, wherein the periodic new topic computer model is a multi-component extension of a Latent Dirichlet Allocation (MC-LDA) topic model.
11. The computer-implemented method of claim 1, wherein the set of the one or more term vectors in the periodic new topic computer model corresponds to a second set of the one or more term vectors.
12. A system comprising:
a database source computer module configured to store a document corpus; and
one or more computers comprising one or more processors configured to:
extract from the document corpus, data associated with a plurality of co-occurring topics;
extract a plurality of topic identifiers from the plurality of co-occurring topics in response to extracting the data associated with the plurality of co-occurring topics;
generate a periodic topic model comprising a set of one or more term vectors by comparing topic significance among the plurality of topic identifiers;
periodically create new topic ID models using data content in the periodic topic model by identifying a similarity of topics;
generate fuzzy keys for the new topic ID models; and
index data in the new topic ID models based on the fuzzy keys for non-exclusionary searching and automated discovery of new topics.
13. The system of claim 12, wherein the one or more computers are further configured to identify in one or more document corpora of a data source, a topic of interest based upon one or more concurring topics identified in the one or more document corpora.
14. The system of claim 13, wherein the one or more computers are further configured to automatically extract from the document corpus, data associated with the plurality of co-occurring topics based on the topic of interest.
15. The system of claim 12, wherein the one or more computers are further configured to determine a relationship of corresponding term vectors from the plurality of co-occurring topics, each co-occurring topic of the plurality of co-occurring topics containing one or more term vectors.
16. The system of claim 15, wherein the one or more computers are further configured to generate a master topic computer model comprising a first set of one or more term vectors identified in text of the document corpus upon determining the relationship of the corresponding term vectors from the plurality of co-occurring topics.
17. The system of claim 16, wherein the one or more computers are further configured to select one or more new topics by identifying one or more term vectors from the set of the one or more term vectors in the periodic new topic computer model that have no correlation with the first set of one or more term vectors in the master topic computer model.
18. The system of claim 16, wherein the one or more computers are further configured to add the one or more new topics to the master topic computer model.
19. The system of claim 12, wherein comparing the topic significance among the plurality of topic identifiers is based on a predetermined significance threshold.
20. The system of claim 16, wherein the master topic computer model is a multi-component extension of a Latent Dirichlet Allocation (MC-LDA) topic model, and wherein the periodic new topic computer model is a multi-component extension of a Latent Dirichlet Allocation (MC-LDA) topic model.
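Illustrative example (not part of the claims): the periodic topic model recited in claims 1 and 12 can be approximated with an off-the-shelf topic model. The sketch below uses gensim's plain LdaModel as a stand-in for the multi-component LDA (MC-LDA) models named in claims 9, 10, and 20, whose internals are not published here; the function name and parameter values are hypothetical.

# Minimal sketch: build one period's topic model and expose its term vectors.
# Assumes gensim is installed; plain LDA stands in for the patent's MC-LDA.
from gensim import corpora
from gensim.models import LdaModel

def build_periodic_topic_model(tokenized_docs, num_topics=20):
    """Fit a topic model over one period's documents and return the model,
    its dictionary, and a (num_topics x vocab_size) matrix of term vectors."""
    dictionary = corpora.Dictionary(tokenized_docs)
    bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]
    lda = LdaModel(corpus=bow_corpus, id2word=dictionary,
                   num_topics=num_topics, passes=5, random_state=0)
    return lda, dictionary, lda.get_topics()

Each row returned by get_topics() is a probability distribution over the vocabulary and can serve as one topic's term vector in the comparison sketched next.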
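Illustrative example (not part of the claims): claims 6 and 17 select new topics as those whose term vectors have no correlation with any term vector in the master topic model, and claims 7 and 18 add the selected topics to that master model. The sketch below uses cosine similarity against a fixed threshold as one plausible correlation test; the threshold value, the function names, and the assumption that both models share a vocabulary are illustrative choices, not details taken from the patent.

# Minimal sketch: flag periodic topics uncorrelated with the master model.
import numpy as np

def find_new_topics(periodic_vectors, master_vectors, threshold=0.2):
    """Return indices of periodic topics whose best cosine similarity to any
    master-model topic falls below the threshold (no meaningful correlation).
    Both inputs are (n_topics, vocab_size) arrays over a shared vocabulary."""
    def unit_rows(m):
        norms = np.linalg.norm(m, axis=1, keepdims=True)
        return m / np.clip(norms, 1e-12, None)
    sims = unit_rows(periodic_vectors) @ unit_rows(master_vectors).T
    best_match = sims.max(axis=1)   # closest master topic for each periodic topic
    return np.where(best_match < threshold)[0]

def add_to_master(master_vectors, periodic_vectors, new_topic_idx):
    """Append the newly discovered topics to the master topic model."""
    return np.vstack([master_vectors, periodic_vectors[new_topic_idx]])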
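Illustrative example (not part of the claims): claims 1 and 12 generate fuzzy keys for the new topic ID models and index them for non-exclusionary searching. The patent does not disclose the concrete fuzzy-key function, so the sketch below substitutes a simple MinHash-style banding over each topic's most significant terms; any topic sharing at least one key with the query is returned as a candidate rather than being excluded by an exact-match test.

# Minimal sketch: fuzzy keys and a non-exclusionary topic index.
import hashlib
from collections import defaultdict

def fuzzy_keys(terms, num_bands=16):
    """Derive band keys from a topic's top terms (MinHash-style stand-in)."""
    keys = []
    for band in range(num_bands):
        digests = [hashlib.md5(f"{band}:{t}".encode()).hexdigest() for t in terms]
        keys.append((band, min(digests)[:8]))   # smallest digest in each band
    return keys

class FuzzyTopicIndex:
    """Buckets topic IDs under their fuzzy keys; a lookup unions every bucket
    the query touches, so near matches are never filtered out."""
    def __init__(self):
        self.buckets = defaultdict(set)

    def add(self, topic_id, top_terms):
        for key in fuzzy_keys(top_terms):
            self.buckets[key].add(topic_id)

    def search(self, query_terms):
        candidates = set()
        for key in fuzzy_keys(query_terms):
            candidates |= self.buckets[key]
        return candidates

A caller might, for example, index each new topic ID under the top terms of its term vector and later search with the terms of an incoming document; because lookups union buckets rather than intersect them, partial term overlap is enough to surface a candidate topic.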

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/489,560 US20170286837A1 (en) 2013-12-02 2017-04-17 Method of automated discovery of new topics

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201361910763P 2013-12-02 2013-12-02
US14/558,076 US9177262B2 (en) 2013-12-02 2014-12-02 Method of automated discovery of new topics
US201514873635A 2015-10-02 2015-10-02
US14/919,631 US9626623B2 (en) 2013-12-02 2015-10-21 Method of automated discovery of new topics
US15/489,560 US20170286837A1 (en) 2013-12-02 2017-04-17 Method of automated discovery of new topics

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/919,631 Continuation US9626623B2 (en) 2013-12-02 2015-10-21 Method of automated discovery of new topics

Publications (1)

Publication Number Publication Date
US20170286837A1 true US20170286837A1 (en) 2017-10-05

Family

ID=53265458

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/558,076 Active US9177262B2 (en) 2013-12-02 2014-12-02 Method of automated discovery of new topics
US14/919,631 Active US9626623B2 (en) 2013-12-02 2015-10-21 Method of automated discovery of new topics
US15/489,560 Abandoned US20170286837A1 (en) 2013-12-02 2017-04-17 Method of automated discovery of new topics

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/558,076 Active US9177262B2 (en) 2013-12-02 2014-12-02 Method of automated discovery of new topics
US14/919,631 Active US9626623B2 (en) 2013-12-02 2015-10-21 Method of automated discovery of new topics

Country Status (1)

Country Link
US (3) US9177262B2 (en)


Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9424524B2 (en) 2013-12-02 2016-08-23 Qbase, LLC Extracting facts from unstructured text
US10354188B2 (en) 2016-08-02 2019-07-16 Microsoft Technology Licensing, Llc Extracting facts from unstructured information
US10318564B2 (en) 2015-09-28 2019-06-11 Microsoft Technology Licensing, Llc Domain-specific unstructured text retrieval
CN107133226B (en) * 2016-02-26 2021-12-07 阿里巴巴集团控股有限公司 Method and device for distinguishing themes
US10275444B2 (en) * 2016-07-15 2019-04-30 At&T Intellectual Property I, L.P. Data analytics system and methods for text data
US10614043B2 (en) * 2016-09-30 2020-04-07 Adobe Inc. Document replication based on distributional semantics
US10885065B2 (en) * 2017-10-05 2021-01-05 International Business Machines Corporation Data convergence
US11640420B2 (en) * 2017-12-31 2023-05-02 Zignal Labs, Inc. System and method for automatic summarization of content with event based analysis
US11755915B2 (en) 2018-06-13 2023-09-12 Zignal Labs, Inc. System and method for quality assurance of media analysis
US10970595B2 (en) 2018-06-20 2021-04-06 Netapp, Inc. Methods and systems for document classification using machine learning
CN109597875B * 2018-11-02 2022-08-23 广东工业大学 Word embedding-based Gaussian LDA optimization solution method
CN110134958B (en) * 2019-05-14 2021-05-18 南京大学 Short text topic mining method based on semantic word network
US11366845B2 (en) * 2019-05-20 2022-06-21 Accenture Global Solutions Limited Facilitating merging of concept hierarchies
JP7353940B2 (en) * 2019-11-26 2023-10-02 株式会社日立製作所 Transferability determination device, transferability determination method, and transferability determination program
US11817086B2 (en) * 2020-03-13 2023-11-14 Xerox Corporation Machine learning used to detect alignment and misalignment in conversation
US11386164B2 (en) * 2020-05-13 2022-07-12 City University Of Hong Kong Searching electronic documents based on example-based search query

Family Cites Families (129)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2343097A (en) 1996-03-21 1997-10-10 Mpath Interactive, Inc. Network match maker for selecting clients based on attributes of servers and communication links
US6178529B1 (en) 1997-11-03 2001-01-23 Microsoft Corporation Method and system for resource monitoring of disparate resources in a server cluster
US6353926B1 (en) 1998-07-15 2002-03-05 Microsoft Corporation Software update notification
US6266781B1 (en) 1998-07-20 2001-07-24 Academia Sinica Method and apparatus for providing failure detection and recovery with predetermined replication style for distributed applications in a network
US6338092B1 (en) 1998-09-24 2002-01-08 International Business Machines Corporation Method, system and computer program for replicating data in a distributed computed environment
US6959300B1 (en) 1998-12-10 2005-10-25 At&T Corp. Data compression method and apparatus
US7099898B1 (en) 1999-08-12 2006-08-29 International Business Machines Corporation Data access system
US6691108B2 (en) 1999-12-14 2004-02-10 Nec Corporation Focused search engine and method
JP3524846B2 (en) 2000-06-29 2004-05-10 株式会社Ssr Document feature extraction method and apparatus for text mining
US6738759B1 (en) 2000-07-07 2004-05-18 Infoglide Corporation, Inc. System and method for performing similarity searching using pointer optimization
US7813915B2 (en) 2000-09-25 2010-10-12 Fujitsu Limited Apparatus for reading a plurality of documents and a method thereof
US8692695B2 (en) 2000-10-03 2014-04-08 Realtime Data, Llc Methods for encoding and decoding data
US6832373B2 (en) 2000-11-17 2004-12-14 Bitfone Corporation System and method for updating and distributing information
US6691109B2 (en) 2001-03-22 2004-02-10 Turbo Worx, Inc. Method and apparatus for high-performance sequence comparison
GB2374687A (en) 2001-04-19 2002-10-23 Ibm Managing configuration changes in a data processing system
US7082478B2 (en) 2001-05-02 2006-07-25 Microsoft Corporation Logical semantic compression
US6961723B2 (en) 2001-05-04 2005-11-01 Sun Microsystems, Inc. System and method for determining relevancy of query responses in a distributed network search mechanism
US20030028869A1 (en) 2001-08-02 2003-02-06 Drake Daniel R. Method and computer program product for integrating non-redistributable software applications in a customer driven installable package
US6954456B2 (en) 2001-12-14 2005-10-11 At & T Corp. Method for content-aware redirection and content renaming
US6829606B2 (en) 2002-02-14 2004-12-07 Infoglide Software Corporation Similarity search engine for use with relational databases
US7421478B1 (en) 2002-03-07 2008-09-02 Cisco Technology, Inc. Method and apparatus for exchanging heartbeat messages and configuration information between nodes operating in a master-slave configuration
US6817558B1 (en) 2002-04-23 2004-11-16 Uop Llc Parallel sizing, dosing and transfer assembly and method of use
US8015143B2 (en) 2002-05-22 2011-09-06 Estes Timothy W Knowledge discovery agent system and method
US20040010502A1 (en) 2002-07-12 2004-01-15 Bomfim Joanes Depaula In-memory database for high performance, parallel transaction processing
US7570262B2 (en) 2002-08-08 2009-08-04 Reuters Limited Method and system for displaying time-series data and correlated events derived from text mining
US7249312B2 (en) 2002-09-11 2007-07-24 Intelligent Results Attribute scoring for unstructured content
US8090717B1 (en) 2002-09-20 2012-01-03 Google Inc. Methods and apparatus for ranking documents
US20100100437A1 (en) 2002-09-24 2010-04-22 Google, Inc. Suggesting and/or providing ad serving constraint information
US7058846B1 (en) 2002-10-17 2006-06-06 Veritas Operating Corporation Cluster failover for storage management services
US20040205064A1 (en) 2003-04-11 2004-10-14 Nianjun Zhou Adaptive search employing entropy based quantitative information measurement
US7139752B2 (en) 2003-05-30 2006-11-21 International Business Machines Corporation System, method and computer program product for performing unstructured information management and automatic text analysis, and providing multiple document views derived from different document tokenizations
US7543174B1 (en) 2003-09-24 2009-06-02 Symantec Operating Corporation Providing high availability for an application by rapidly provisioning a node and failing over to the node
US9009153B2 (en) 2004-03-31 2015-04-14 Google Inc. Systems and methods for identifying a named entity
US7818615B2 (en) 2004-09-16 2010-10-19 Invensys Systems, Inc. Runtime failure management of redundantly deployed hosts of a supervisory process control data acquisition facility
US20080077570A1 (en) 2004-10-25 2008-03-27 Infovell, Inc. Full Text Query and Search Systems and Method of Use
US7403945B2 (en) 2004-11-01 2008-07-22 Sybase, Inc. Distributed database system providing data and space management methodology
US7739270B2 (en) 2004-12-07 2010-06-15 Microsoft Corporation Entity-specific tuned searching
US20060179026A1 (en) 2005-02-04 2006-08-10 Bechtel Michael E Knowledge discovery tool extraction and integration
US20070174167A1 (en) 2005-05-20 2007-07-26 Stefano Natella Derivative relationship news event reporting
US20070005654A1 (en) 2005-05-20 2007-01-04 Avichai Schachar Systems and methods for analyzing relationships between entities
US20060294071A1 (en) 2005-06-28 2006-12-28 Microsoft Corporation Facet extraction and user feedback for ranking improvement and personalization
US7630977B2 (en) 2005-06-29 2009-12-08 Xerox Corporation Categorization including dependencies between different category systems
US7849048B2 (en) 2005-07-05 2010-12-07 Clarabridge, Inc. System and method of making unstructured data available to structured data analysis tools
US8386463B2 (en) 2005-07-14 2013-02-26 International Business Machines Corporation Method and apparatus for dynamically associating different query execution strategies with selective portions of a database table
US7681075B2 (en) 2006-05-02 2010-03-16 Open Invention Network Llc Method and system for providing high availability to distributed computer applications
US20070250501A1 (en) 2005-09-27 2007-10-25 Grubb Michael L Search result delivery engine
US20070073708A1 (en) 2005-09-28 2007-03-29 Smith Adam D Generation of topical subjects from alert search terms
US7447940B2 (en) 2005-11-15 2008-11-04 Bea Systems, Inc. System and method for providing singleton services in a cluster
US8341622B1 (en) 2005-12-15 2012-12-25 Crimson Corporation Systems and methods for efficiently using network bandwidth to deploy dependencies of a software package
JP2009521029A (en) 2005-12-22 2009-05-28 インターナショナル・ビジネス・マシーンズ・コーポレーション Method and system for automatically generating multilingual electronic content from unstructured data
US7899871B1 (en) 2006-01-23 2011-03-01 Clearwell Systems, Inc. Methods and systems for e-mail topic classification
US7519613B2 (en) 2006-02-28 2009-04-14 International Business Machines Corporation Method and system for generating threads of documents
US8726267B2 (en) 2006-03-24 2014-05-13 Red Hat, Inc. Sharing software certification and process metadata
US8190742B2 (en) 2006-04-25 2012-05-29 Hewlett-Packard Development Company, L.P. Distributed differential store with non-distributed objects and compression-enhancing data-object routing
US20070282959A1 (en) 2006-06-02 2007-12-06 Stern Donald S Message push with pull of information to a communications computing device
US8615800B2 (en) 2006-07-10 2013-12-24 Websense, Inc. System and method for analyzing web content
US7624118B2 (en) 2006-07-26 2009-11-24 Microsoft Corporation Data processing over very large databases
US8122026B1 (en) 2006-10-20 2012-02-21 Google Inc. Finding and disambiguating references to entities on web pages
US7783640B2 (en) 2006-11-03 2010-08-24 Oracle International Corp. Document summarization
US7853611B2 (en) 2007-02-26 2010-12-14 International Business Machines Corporation System and method for deriving a hierarchical event based database having action triggers based on inferred probabilities
US7734641B2 (en) * 2007-05-25 2010-06-08 Peerset, Inc. Recommendation systems and methods using interest correlation
US9535911B2 (en) 2007-06-29 2017-01-03 Pulsepoint, Inc. Processing a content item with regard to an event
US20090043792A1 (en) 2007-08-07 2009-02-12 Eric Lawrence Barsness Partial Compression of a Database Table Based on Historical Information
US10762080B2 (en) 2007-08-14 2020-09-01 John Nicholas and Kristin Gross Trust Temporal document sorter and method
GB2453174B (en) 2007-09-28 2011-12-07 Advanced Risc Mach Ltd Techniques for generating a trace stream for a data processing apparatus
KR100898339B1 (en) 2007-10-05 2009-05-20 한국전자통신연구원 Autonomous fault processing system in home network environments and operation method thereof
US8396838B2 (en) 2007-10-17 2013-03-12 Commvault Systems, Inc. Legal compliance, electronic discovery and electronic document handling of online and offline copies of data
US8594996B2 (en) 2007-10-17 2013-11-26 Evri Inc. NLP-based entity recognition and disambiguation
US8375073B1 (en) 2007-11-12 2013-02-12 Google Inc. Identification and ranking of news stories of interest
US8294763B2 (en) 2007-12-14 2012-10-23 Sri International Method for building and extracting entity networks from video
CA2710421A1 (en) 2007-12-21 2009-07-09 Marc Light Entity, event, and relationship extraction
US20090216734A1 (en) 2008-02-21 2009-08-27 Microsoft Corporation Search based on document associations
US8326847B2 (en) 2008-03-22 2012-12-04 International Business Machines Corporation Graph search system and method for querying loosely integrated data
WO2009117835A1 (en) 2008-03-27 2009-10-01 Hotgrinds Canada Search system and method for serendipitous discoveries with faceted full-text classification
US8712926B2 (en) 2008-05-23 2014-04-29 International Business Machines Corporation Using rule induction to identify emerging trends in unstructured text streams
US8358308B2 (en) 2008-06-27 2013-01-22 Microsoft Corporation Using visual techniques to manipulate data
CA2686796C (en) 2008-12-03 2017-05-16 Trend Micro Incorporated Method and system for real time classification of events in computer integrity system
US8150813B2 (en) 2008-12-18 2012-04-03 International Business Machines Corporation Using relationships in candidate discovery
US8874576B2 (en) 2009-02-27 2014-10-28 Microsoft Corporation Reporting including filling data gaps and handling uncategorized data
US20100235311A1 (en) 2009-03-13 2010-09-16 Microsoft Corporation Question and answer search
US8972396B1 (en) 2009-03-16 2015-03-03 Guangsheng Zhang System and methods for determining relevance between text contents
US8213725B2 (en) 2009-03-20 2012-07-03 Eastman Kodak Company Semantic event detection using cross-domain knowledge
US8161048B2 (en) 2009-04-24 2012-04-17 At&T Intellectual Property I, L.P. Database analysis using clusters
US8055933B2 (en) 2009-07-21 2011-11-08 International Business Machines Corporation Dynamic updating of failover policies for increased application availability
US9727842B2 (en) 2009-08-21 2017-08-08 International Business Machines Corporation Determining entity relevance by relationships to other relevant entities
US9165034B2 (en) 2009-10-15 2015-10-20 Hewlett-Packard Development Company, L.P. Heterogeneous data source management
US8645372B2 (en) 2009-10-30 2014-02-04 Evri, Inc. Keyword-based search engine results using enhanced query strategies
US20110125764A1 (en) 2009-11-26 2011-05-26 International Business Machines Corporation Method and system for improved query expansion in faceted search
EP2530605A4 (en) 2010-01-29 2013-12-25 Panasonic Corp Data processing device
US9710556B2 (en) 2010-03-01 2017-07-18 Vcvc Iii Llc Content recommendation based on collections of entities
CN103038764A (en) * 2010-04-14 2013-04-10 惠普发展公司,有限责任合伙企业 Method for keyword extraction
US8595234B2 (en) 2010-05-17 2013-11-26 Wal-Mart Stores, Inc. Processing data feeds
US9189357B2 (en) 2010-05-25 2015-11-17 Red Hat, Inc. Generating machine state verification using number of installed package objects
US8429256B2 (en) 2010-05-28 2013-04-23 Red Hat, Inc. Systems and methods for generating cached representations of host package inventories in remote package repositories
US8548969B2 (en) * 2010-06-02 2013-10-01 Cbs Interactive Inc. System and method for clustering content according to similarity
US9443008B2 (en) 2010-07-14 2016-09-13 Yahoo! Inc. Clustering of search results
US8538959B2 (en) 2010-07-16 2013-09-17 International Business Machines Corporation Personalized data search utilizing social activities
US8345998B2 (en) 2010-08-10 2013-01-01 Xerox Corporation Compression scheme selection based on image data type and user selections
US8321443B2 (en) 2010-09-07 2012-11-27 International Business Machines Corporation Proxying open database connectivity (ODBC) calls
US20120102121A1 (en) 2010-10-25 2012-04-26 Yahoo! Inc. System and method for providing topic cluster based updates
US8645298B2 (en) * 2010-10-26 2014-02-04 Microsoft Corporation Topic models
US9275001B1 (en) * 2010-12-01 2016-03-01 Google Inc. Updating personal content streams based on feedback
US9245022B2 (en) 2010-12-30 2016-01-26 Google Inc. Context-based person search
US8423522B2 (en) 2011-01-04 2013-04-16 International Business Machines Corporation Query-aware compression of join results
US20120246154A1 (en) 2011-03-23 2012-09-27 International Business Machines Corporation Aggregating search results based on associating data instances with knowledge base entities
US20120310934A1 (en) 2011-06-03 2012-12-06 Thomas Peh Historic View on Column Tables Using a History Table
KR20120134916A (en) 2011-06-03 2012-12-12 삼성전자주식회사 Storage device and data processing device for storage device
US9104979B2 (en) 2011-06-16 2015-08-11 Microsoft Technology Licensing, Llc Entity recognition using probabilities for out-of-collection data
EP2727247B1 (en) 2011-06-30 2017-04-05 Openwave Mobility, Inc. Database compression system and method
US9032387B1 (en) 2011-10-04 2015-05-12 Amazon Technologies, Inc. Software distribution framework
US9026480B2 (en) 2011-12-21 2015-05-05 Telenav, Inc. Navigation system with point of interest classification mechanism and method of operation thereof
US9037579B2 (en) 2011-12-27 2015-05-19 Business Objects Software Ltd. Generating dynamic hierarchical facets from business intelligence artifacts
US10908792B2 (en) 2012-04-04 2021-02-02 Recorded Future, Inc. Interactive event-based information system
US20130290232A1 (en) 2012-04-30 2013-10-31 Mikalai Tsytsarau Identifying news events that cause a shift in sentiment
US8948789B2 (en) 2012-05-08 2015-02-03 Qualcomm Incorporated Inferring a context from crowd-sourced activity data
US9275135B2 (en) 2012-05-29 2016-03-01 International Business Machines Corporation Annotating entities using cross-document signals
US20130325660A1 (en) 2012-05-30 2013-12-05 Auto 100 Media, Inc. Systems and methods for ranking entities based on aggregated web-based content
US9053420B2 (en) 2012-09-25 2015-06-09 Reunify Llc Methods and systems for scalable group detection from multiple data streams
US9703833B2 (en) 2012-11-30 2017-07-11 Sap Se Unification of search and analytics
US20140229476A1 (en) 2013-02-14 2014-08-14 SailMinders, Inc. System for Information Discovery & Organization
US9542652B2 (en) 2013-02-28 2017-01-10 Microsoft Technology Licensing, Llc Posterior probability pursuit for entity disambiguation
US20140255003A1 (en) 2013-03-05 2014-09-11 Google Inc. Surfacing information about items mentioned or presented in a film in association with viewing the film
US9104710B2 (en) 2013-03-15 2015-08-11 Src, Inc. Method for cross-domain feature correlation
US8977600B2 (en) 2013-05-24 2015-03-10 Software AG USA Inc. System and method for continuous analytics run against a combination of static and real-time data
US9734221B2 (en) 2013-09-12 2017-08-15 Sap Se In memory database warehouse
US9201744B2 (en) 2013-12-02 2015-12-01 Qbase, LLC Fault tolerant architecture for distributed computing systems
US9424294B2 (en) 2013-12-02 2016-08-23 Qbase, LLC Method for facet searching and search suggestions
US9025892B1 (en) 2013-12-02 2015-05-05 Qbase, LLC Data record compression with progressive and/or selective decomposition
US9223875B2 (en) 2013-12-02 2015-12-29 Qbase, LLC Real-time distributed in memory search architecture

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520008A (en) * 2018-03-15 2018-09-11 链家网(北京)科技有限公司 Construction method and construction device for a data warehouse model
CN109509110A (en) * 2018-07-27 2019-03-22 福州大学 Hot microblog topic discovery method based on an improved BBTM model
US11410644B2 (en) 2019-10-18 2022-08-09 Invoca, Inc. Generating training datasets for a supervised learning topic model from outputs of a discovery topic model
US11804216B2 (en) 2019-10-18 2023-10-31 Invoca, Inc. Generating training datasets for a supervised learning topic model from outputs of a discovery topic model

Also Published As

Publication number Publication date
US20160042276A1 (en) 2016-02-11
US9626623B2 (en) 2017-04-18
US9177262B2 (en) 2015-11-03
US20150154148A1 (en) 2015-06-04

Similar Documents

Publication Publication Date Title
US9626623B2 (en) Method of automated discovery of new topics
US11645317B2 (en) Recommending topic clusters for unstructured text documents
US9201931B2 (en) Method for obtaining search suggestions from fuzzy score matching and population frequencies
US9542477B2 (en) Method of automated discovery of topics relatedness
US11573996B2 (en) System and method for hierarchically organizing documents based on document portions
US9239875B2 (en) Method for disambiguated features in unstructured text
US9720944B2 (en) Method for facet searching and search suggestions
US9424524B2 (en) Extracting facts from unstructured text
US9619571B2 (en) Method for searching related entities through entity co-occurrence
Vysotska et al. Method of similar textual content selection based on thematic information retrieval
WO2015084757A1 (en) Systems and methods for processing data stored in a database
Abbas et al. Automated File Labeling for Heterogeneous Files Organization Using Machine Learning.
Ise Integration and analysis of unstructured data for decision making: Text analytics approach
US20170124090A1 (en) Method of discovering and exploring feature knowledge
Radhakrishnan et al. Modeling the evolution of product entities
Choega et al. Building Knowledge Graphs with Python
Afolabi et al. Topic Modelling for Research Perception: Techniques, Processes and a Case Study
Patankar et al. Seminal Paper on Genealogy by using Ontology
Ordina Classification Problem in Real Estate Corpora: Furniture Detection in Real Estate Listings
Sakthisree et al. Analysing the Social Data Opinion through Public User Raw Information
Motwani et al. Hadoop based Information Extract from Text Document
Lavrač et al. Exploratory analysis of the social network of researchers in inductive logic programming
Tran et al. Context-Aware Timeline for Entity Exploration
Sultan et al. Automated File Labeling for Heterogeneous Files Organization Using Machine Learning
Koppel Using SQL-based Scripting Languages in Hadoop Ecosystem for Data Analytics

Legal Events

Date Code Title Description
AS Assignment

Owner name: QBASE, LLC, VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIGHTNER, SCOTT;WECKESSER, FRANZ;BODDHU, SANJAY;AND OTHERS;SIGNING DATES FROM 20141201 TO 20141202;REEL/FRAME:042033/0949

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION