US20190384800A1 - Machine learning and inference system - Google Patents

Machine learning and inference system

Info

Publication number
US20190384800A1
Authority
US
United States
Prior art keywords
content
user
patterns
relationships
augmented content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/558,263
Inventor
Shauki Elassaad
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/573,564 external-priority patent/US9275148B1/en
Priority claimed from US14/217,462 external-priority patent/US9632654B1/en
Application filed by Individual filed Critical Individual
Priority to US16/558,263 priority Critical patent/US20190384800A1/en
Publication of US20190384800A1 publication Critical patent/US20190384800A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9538Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/904Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9536Search customisation based on social or collaborative filtering
    • G06F17/241
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • G06F40/169Annotation, e.g. comment data or footnotes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Definitions

  • the subject of this application generally relates to information processing and to systems and methods for improving data exploration, learning, and browsing, and more specifically to context sensitive data augmentation for a richer user experience, data exploration, and knowledge discovery.
  • A context sensitive user interface that can automatically choose from a multiplicity of options based on the current or previous state(s) of a program operation can be found in current graphical user interfaces. For example, clicking on a text document automatically opens the document in a word processing environment; the user does not have to specify what type of program to use to open the file.
  • Program files and their shortcuts (i.e. executable files) can be associated with certain types of files, e.g. text documents, and are automatically run by the operating system when the user selects or double-clicks the file.
  • the user-interface may also provide context sensitive feedback, such as changing the appearance and/or color of the mouse pointer or cursor.
  • context sensitive feedback may also be used in video games, where a button's function changes when a player is in a certain position or place and needs to interact with an object.
  • Relational databases are currently the predominant choice in storing data like financial records, medical records, personal information, manufacturing and logistical data.
  • large-scale data or information processing can involve various types of collection, extraction, warehousing, analysis and statistics. For example, organizing and matching data by using some common characteristics found within the data set would result in new groups of data that can be organized and are easier for many people to understand, search, index and manipulate.
  • a webpage may include metadata specifying what language was used in writing its code, what tools were used to create it, and where to go for more on the subject; in other words, higher-level concepts that describe the data.
  • the results of any large-scale data processing can be an extensive set of meta-data, data, and relationships that may be used in a search engine, for example, to provide a possible set of related information to a term that is used in a search query.
  • search engines have used and generated enormous amounts of data and metadata that are used to provide links to content that may be of possible interest to a user based on what the user is searching for.
  • Knowledge discovery platform systems as described in related patent applications can be used to generate augmented knowledge using such large scale data that meet the needs of a user.
  • the augmented knowledge provided to the user can be highly relevant to another user or another knowledge discovery system.
  • the other user or system may have certain distinct criterions, characteristics, preferences or interests that are different from the first user.
  • an increase in accuracy and efficiency can be achieved by benefiting from the augmented knowledge already obtained for a first user and by regenerating or modifying that augmented content and knowledge to be tailored to a second user's interest, profile, or preferences. Therefore, there exists a need for a knowledge discovery system that can leverage the knowledge discovered for a first user to provide augmented knowledge and/or newly discovered or augmented knowledge based on a second user's preferences or interests.
  • this disclosure presents new and useful methods and systems to provide multilevel context sensitive augmented experience, browsing, data exploration, knowledge discovery, and e-learning.
  • this multilevel context sensitive augmented content is presented using overlaid layers on top of the digital information (reference content or original content) being viewed by a user.
  • the overlaid layers can be transparent or translucent for a non-obtrusive user experience.
  • the updated augmented content is generated based at least on the user interaction with the reference content.
  • the user can manipulate the original content and its associated or related categories and other relevant augmentation data to generate more relevant and meaningful augmentation while viewing the augmented content on top of the reference content.
  • in a system for generating and presenting augmented content on a translucent display layer overlaid on top of a reference content display layer on the same display screen, the augmented content is generated using relevant features of the reference content or the displayed portion of the reference content.
  • the generation of the augmented content is further customized using user-relevant characteristics, attributes, history, and relevant features in relation to the reference content such as generic categories and relationships.
  • the user controls the position and size of both the reference content display layer and the augmented content display layers on the same display screen, as well as the ability of the user to control the visibility and hiding of all display layers.
  • the user controls the sharing of the same display screen by the reference content and augmented content display layers.
  • the system for generating and presenting augmented content provides a set of augmentation filters (topics and categories based on the reference content) to aid the user in further customizing the augmentation filters to suit his/her interests.
  • the generated augmentation content is one or more of online documents, web pages, and web links.
  • the generated augmentation content can be customized in a variety of ways, such as presenting a summary of the augmented content or deleting unnecessary links and ads.
  • the generated augmentation content is based at least on one of (i) a set of criteria associated with the reference content, (ii) user customization of augmentation filters, (iii) user interaction with the reference content, and (iv) user interaction with the generated augmented content.
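As a hedged illustration only (the names below are not from the disclosure), the four inputs enumerated in the preceding paragraph could be combined roughly as follows:

```python
from dataclasses import dataclass, field

@dataclass
class AugmentationRequest:
    """Illustrative container for the four inputs listed above."""
    reference_criteria: set                                    # (i) criteria from the reference content
    user_filters: set = field(default_factory=set)             # (ii) user-customized augmentation filters
    reference_interactions: set = field(default_factory=set)   # (iii) e.g. terms selected in the reference
    augmented_interactions: set = field(default_factory=set)   # (iv) e.g. terms of promoted augmented items

def generate_augmented_content(request, content_index):
    """Rank candidate items by how many of the request's combined signals they match."""
    signals = (request.reference_criteria | request.user_filters
               | request.reference_interactions | request.augmented_interactions)
    scored = [(len(signals & terms), item) for item, terms in content_index.items()]
    return [item for score, item in sorted(scored, reverse=True) if score > 0]

# Toy content index mapping candidate items to their terms.
index = {"doc_a": {"aids", "pharmaceuticals"}, "doc_b": {"discrimination", "policy"}}
req = AugmentationRequest(reference_criteria={"aids"}, user_filters={"pharmaceuticals"})
print(generate_augmented_content(req, index))  # ['doc_a']
```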
  • the system can employ the same methods and algorithms to enable the user to custom build a knowledge graph of concepts and relationships based on information retrieved from structured and unstructured data residing in a private or public data store or other public repositories.
  • the Augmentation System relies on these data sources along with the user's feedback and interests to generate on the fly relevant augmentation data for the task at hand.
  • a physician can utilize this system to custom build a knowledge graph for a patient based on the physician's experience and knowledge, the patient's history, the patient's known diseases, symptoms, and ailments, and known public data related to the patient's case.
  • Such a system will enable the physician to make educated and informed decisions instead of being mired in a plethora of sources where it would be extremely hard for the physician to manually extract reliable and relevant data in an efficient and useful way.
  • the system for generating and presenting augmented content dynamically updates the augmented content by utilizing additional filters, metrics, and customization provided by the user as a result of the generated augmented content. Furthermore, the user can save any or all the data associated with a particular session of data augmentation. This will enable the user to build on the augmentation of previous sessions.
  • the system for generating and presenting augmented content generates global and local augmentation content associated with the reference content and any selected or highlighted part of it. For example, the system generates a plurality of global augmentation content based on the augmentation filters associated with the overall reference content, and the system generates a plurality of local augmentation content based on a specific part of the reference content that is selected or flagged by the user, or currently being viewed by the user.
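A minimal sketch, under assumed helper names, of the global/local split described above: global augmentation keys off features of the whole reference content, while local augmentation keys off only the selected or currently viewed portion.

```python
def extract_features(text):
    # Toy feature extractor: distinct lower-cased words of four or more letters.
    return {word.strip(".,").lower() for word in text.split() if len(word.strip(".,")) >= 4}

def augment(features, content_index):
    # Return candidate items from a toy index that share at least one feature.
    return [item for item, terms in content_index.items() if features & terms]

reference = "Clinical trial results for a new AIDS treatment were published today."
selection = "AIDS treatment"   # the part of the reference content the user highlighted
index = {"who_report": {"aids", "treatment"}, "ethics_essay": {"discrimination"}}

global_items = augment(extract_features(reference), index)  # global: whole reference content
local_items = augment(extract_features(selection), index)   # local: only the selected portion
print(global_items, local_items)
```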
  • the system for generating and presenting augmented content enables collaborative augmentation, e.g. a user can share the generated augmented content with other users. Furthermore, the user can share the content augmentation filters, or the settings used to generate the augmented content with other users.
  • the system for generating and presenting augmented content enables a user to make use of nested hierarchical content augmentation capabilities.
  • a user can request content augmentation using at least a portion of a previously generated augmented content.
  • the previously generated augmented content serves as new reference content for the system to generate and present to the user a new augmented content.
  • the user can traverse the content augmentation graph to further customize the content augmentation at any level.
  • the display screen may be physically attached to an electronic device, e.g. a mobile device, a handheld device, a tablet, etc., or the display screen may be physically separate from the electronic device.
  • one example is a touch display, where a user interacts with the display screen and controls both the position and size of the various display layers on the display screen.
  • the display screen can communicate with a remote electronic device such as a remote server, or a mobile device. Alternatively, the user can control the position and size of all display layers on the physically detached display screen using the electronic device.
  • the user interaction with the reference content includes at least one of a manipulation of a region of the first display layer, a manipulation of a region of the second display layer, hiding of the first display layer, hiding of the second display layer, saving the first set of augmented content, saving a portion of the first set of augmented content, modifying the translucency of the second display layer, a selection of a region of the first display screen, a manipulation of a region of the first display screen, one or more user gestures made on the first display screen, an activation of a button of the first display screen, an activation of a button of the electronic device, and using a human interface device to communicate the user interaction to the electronic device.
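The layer manipulations enumerated above (moving, resizing, hiding, changing translucency) could be tracked by a small controller object. The sketch below is illustrative only; the class and method names are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class DisplayLayer:
    """State of one display layer (reference content or augmented content)."""
    name: str
    x: int = 0
    y: int = 0
    width: int = 800
    height: int = 600
    visible: bool = True
    opacity: float = 1.0  # 1.0 = opaque, lower values = translucent

class LayerController:
    def __init__(self, *layers):
        self.layers = {layer.name: layer for layer in layers}

    def move_resize(self, name, x, y, width, height):
        layer = self.layers[name]
        layer.x, layer.y, layer.width, layer.height = x, y, width, height

    def set_visibility(self, name, visible):
        self.layers[name].visible = visible

    def set_translucency(self, name, opacity):
        self.layers[name].opacity = max(0.0, min(1.0, opacity))

controller = LayerController(DisplayLayer("reference"), DisplayLayer("augmented", opacity=0.4))
controller.set_translucency("augmented", 0.25)  # make the overlay more see-through
controller.set_visibility("augmented", False)   # hide the overlay entirely
```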
  • the reference content, local content, and augmentation content are displayed using multiple display layers by means of one or more display screens.
  • the display screen comprises an electronic system to receive and/or transmit information to an electronic device.
  • the user interaction with the reference content includes the manipulation of one or more regions of at least one display layer, a manipulation of one or more regions of at least one display layer of the augmented content, hiding of any one or more of the display layers, saving the first set of augmented content, saving a portion of the first set of augmented content, modifying the translucency of any one of the display layers, a selection or a manipulation of one or more regions of any one of the display screens, one or more user gestures made on the display screen, an activation of a button of the display screen, an activation of a button of the electronic device, and using a human interface device to communicate the user interaction to the electronic device or to the display screen.
  • this disclosure refers to augmenting a given content based on a number of manually defined and automatically extracted parameters to generate a set of local and global data elements.
  • the set of local and global data elements can be used in a variety of application specific augmentation systems to enhance a user's experience while interacting with the given content.
  • this disclosure facilitates the construction and presentation of a user-customized network of concepts, objects and relationships that serve to augment the content at hand for the purpose of knowledge discovery, learning, and a richer user experience in browsing and/or interacting with data information. Furthermore, the constructed network can be saved and further augmented over time for richer and more efficient user experience. This is in contrast to having a pre-built network of concepts and relationships that a user can access. This system generates a network that can be customized and tailored based on the user's interests.
  • this disclosure facilitates a system that provides the user the ability to fully control the generated augmented content by virtue of changing the scope of certain topics, e.g. expanding or specifying a narrower sub-topic, based at least on one of a defined theme, predefined themes, and categories. Therefore, the augmented content can serve to further explain, define, and to elaborate and expound on reference content or a selected portion of reference content being viewed, observed, or interacted with by a user.
  • this disclosure can be used to aggregate information related to a reference or selected content by customizing the augmentation filters to achieve the desired or intended results.
  • the information, reference content, or the generated augmented content can include rich media like video, audio, images as well as text.
  • Various filters can be customized by the user to enable a user to increase the relevance of the generated augmented content to the intended user objective.
  • a hierarchical system of content augmentation may be defined and customized by a selected theme or a category.
  • the generated augmented content and its display layers can be monetized for ads and other monetization purposes.
  • this disclosure enables real-time manipulation of reference and augmented content for enhanced and richer User Experience (UX).
  • collaboration and sharing of augmented content provides an increase in value and productivity to a user.
  • collaboration and sharing of augmentations filters and settings provide additional richness and ease of viewing, browsing, sharing, and manipulation of reference and augmented content.
  • the user is able to control the presentation style of the generated augmented content, e.g. as raw links, concise summary of augmented content, or other methods that capture the essence of the augmented content.
  • the presentation style of the generated augmented content may be for data analysis, research, information, monetization, commercial, or educational purposes.
  • in a system for generating and presenting augmented content on a translucent display layer overlaid on top of a reference content display layer on the same display screen, the augmented content is generated using relevant features and filters extracted from the reference content or the displayed portion of the reference content.
  • a feature is a pattern that can be extracted or inferred from the content at hand.
  • Feature extraction is the process of reducing the dimensionality of a document by capturing a set of features which reflect the most relevant and salient properties of that document.
  • a feature can be a keyword in the content, title of the content, or other metadata that can be extracted or inferred from the content, its link, or any embedded content or link to other content.
  • a feature could also correspond to a concept such as a name, a topic, or an event that can be extracted or inferred from the content.
  • a group of features can be combined using an association rule to form a pattern, a complex pattern, or a filter.
  • a filter may comprise or describe a relationship between features, a collection of features, or a group of features.
  • a filter may also reflect a correlation between a set of features.
  • a category is a grouping of features or a grouping of multiple sets of features.
  • a category may correspond to a classification of entities or concepts that share some property or relationships.
  • a category may be formed using a filter, a group of filters, or any combination of filters and features.
  • An association rule to combine a feature, a set of features, a filter, or a set of filters can also be used to generate a category, a set of categories, a new feature, a new filter, a new set of features, or a new set of filters.
  • the generation of the augmented content can use (i) any one of a feature, a filter, a category, or (ii) any association in between, or a combination, of a filter, a feature, and a category.
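The feature/filter/category vocabulary defined in the preceding paragraphs might be modeled as in the sketch below. The data model and the example association rule are assumptions for illustration, not the patent's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Set

Feature = str  # e.g. a keyword, title term, or inferred concept

@dataclass
class Filter:
    """A filter groups features via an association rule (here: a predicate over the matched features)."""
    name: str
    features: Set[Feature]
    rule: Callable[[Set[Feature]], bool] = lambda found: True

    def matches(self, document_features: Set[Feature]) -> bool:
        found = self.features & document_features
        return bool(found) and self.rule(found)

@dataclass
class Category:
    """A category groups filters and/or raw features."""
    name: str
    filters: list = field(default_factory=list)
    features: Set[Feature] = field(default_factory=set)

    def matches(self, document_features: Set[Feature]) -> bool:
        return bool(self.features & document_features) or any(
            f.matches(document_features) for f in self.filters)

# Example association rule: require at least two co-occurring features.
virus_filter = Filter("virus_research", {"aids", "virus", "vaccine"}, rule=lambda found: len(found) >= 2)
health = Category("public_health", filters=[virus_filter], features={"epidemic"})
print(health.matches({"aids", "vaccine", "trial"}))  # True
```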
  • the generation of augmented content is further customized using user-relevant characteristics, attributes, history, and other relevant features in relation to the reference content such as generic categories and relationships.
  • the user controls the position and size of both the reference content display layer and the augmented content display layers on the same display screen, as well as the ability of the user to control the visibility and hiding of all display layers. Furthermore, the user controls the sharing of the same display screen by the reference content and augmented content display layers.
  • the user interaction with any one of the reference content, a portion of the reference content, an augmented content, and a portion of an augmented content includes at least one of a manipulation of a region of the first display layer, a manipulation of a region of the second display layer, hiding of the first display layer, hiding of the second display layer, hiding a display layer, saving at least a portion of the reference content, saving at least a portion of the augmented content, saving at least a portion of one or more of a set of filters, a set of features, a set of categories, a set of metrics, a set of user preferences, modifying the translucency of a display layer, a selection of displayed content using a region of the first display screen, a manipulation of displayed content using a region of the first display screen, one or more user gestures made on the first display screen, an activation of a button of the first display screen, a user input made using the electronic device, and using a human interface device to communicate the user interaction to the electronic device.
  • KDP: Knowledge Discovery Platform.
  • the one or more users of a KDP system can invoke a knowledge discovery request or a data augmentation request on certain user content.
  • the user content may be displayed using a wireless device or a monitor coupled to the computer system.
  • the KDP system automatically generates augmented content and displays the augmented data based on the user's parameters, preferences, or interests.
  • the one or more users can tailor or annotate the augmented content in accordance with certain parameters, preferences, or interests.
  • Also described herein is a method for performing a data augmentation request in a computer system to provide enriched or augmented sharing of the knowledge discovered (or already augmented user content) between one or more users of a KDP system and one or more end users (clusters of users), another KDP system, or a process using another KDP system.
  • a receiving end user or process would receive the augmented content information or a notification pointing to the augmented content where the augmented content can be accessed, downloaded or manipulated by the end user or process using the KDP system.
  • Also described herein is a method for performing a data augmentation request in a computer system in which the receiving end user (or a process using another instance of a KDP system) may perform automated processing of the received notification of the augmented content using certain preferences, preconfigured preferences, or programmable parameters, such that the KDP system regenerates the shared augmented content using those preferences, preconfigured preferences, or programmable parameters of the process.
  • the regenerated augmented content can be parsed, or an additional invocation of the KDP system may be used, to further refine the augmented content or to share a customized version of the received augmented content.
  • the augmented content can be stored in the computer system or transmitted to be processed further through additional computer systems, KDP systems, or using other computer systems or processes that are dedicated for processing knowledge discovery request or data augmentation request in response to one or more preferences, specific interests, and/or target market.
  • Real-time and theme based augmentation may also be used to further enhance the user's experience.
  • the present application discloses knowledge discovery platform systems and methods to provide a first user the ability to generate augmented content, and to allocate, regenerate, or modify the augmented content using stored information in a computer system, and the stored information is associated with the first user.
  • the stored information includes preferences or other programmable parameters associated with a second user, a cluster of users, or augmented content of the first user.
  • the stored information may also include any one of a profile of a second user, a parameter of an executable code or process, preconfigured preferences for a registered KDP user, and preconfigured preferences for an unregistered KDP user.
  • the computer system is programmable to collect the stored information using a temporary (volatile memory) or permanent storage (non-volatile memory) of the profile of the second user, the parameter of the executable code or process, the preconfigured preferences for registered KDP users, and preconfigured preferences for an unregistered KDP user.
  • the present application discloses knowledge discovery platform systems and methods to store a KDP's profile of a user, knowledge discovery request, or data augmentation request and provide a first user the ability to generate augmented content using new preferences and/or KDP's profile of the first user, and to allocate, regenerate, or modify the augmented content using preferences, parameters, KDP's profile, or information associated with the augmented data or discovered knowledge of a second KDP user.
  • the present application discloses knowledge discovery platform systems and methods to (i) provide a first user the ability to generate augmented content using at least one of a first user's preferences, first user's manual annotation, and first user's KDP profile; and (ii) automatically allocate, regenerate, or modify the augmented content using at least one of a designated process, preconfigured preferences of a designated process, preferences of a second user, a KDP's profile of a second user, preconfigured preferences of a registered KDP user, and preconfigured preferences for an unregistered KDP user.
  • the designated process can be a process or part of a process being executed using a KDP system, a computer system, a compute server or a wireless device.
  • one or more users of KDP are able to allocate and share augmented content (or discovered knowledge) with one or more end users using any one or more of means for communication, means for notification, and through a KDP process.
  • the one or more users of KDP invoke KDP system on certain content, and the KDP system automatically generates augmented content.
  • the one or more users can tailor or annotate the augmented content in accordance with certain parameters, preferences, or interests.
  • the one or more users then share this augmented content with one or more end users or processes.
  • a receiving end user or process would receive the augmented content information or a notification pointing to the augmented content where the augmented content can be accessed, downloaded or manipulated by the end user or processed using the KDP system.
  • a process may perform automated processing of the received notification of the augmented content using certain preconfigured preferences, or programmable parameters, such that the KDP system regenerates the shared augmented content using the preconfigured preferences or the programmable parameters of the process.
  • the regenerated augmented content can be parsed, or an additional invocation of the KDP system may be used, to further refine the received augmented content or to share a customized version of the received augmented content.
  • the received augmented content can be processed through additional systems or using other processes that are dedicated to one or more preferences, specific interests, localized parameters, geographical location, and/or a target market.
  • a user is enabled to share the augmented content (or a particular knowledge graph or other generated content) with one or more designated end users or processes in accordance with one or more predefined service levels, customized market, geographical locations, seasonal or timing events, and/or localized preferences per end user.
  • if an end user is a registered KDP user, then certain privileges or service levels can be invoked or processed to enrich or further customize the augmented content according to the end user's profile.
  • the KDP system further customizes the regeneration of the received augmented content by (i) restricting or limiting certain privileges or service levels, (ii) enabling certain privileges or redirecting to certain service levels, or (iii) using certain localized parameters that are associated with the end user.
  • certain service levels or privileges may be invoked once the end user has become a registered KDP user.
  • a registered KDP user can tailor and automate the knowledge discovery and its broadcasting to the masses. This would give a registered KDP user a rich and unique offering that incentivizes growth in the number of registered KDP users, enabling, for example, people to discover stories or information, annotate them, and share them with their friends or the public at large on any social or public network or forum; the KDP system in turn further customizes the delivered or shared stories or information using any one of the embodiments disclosed above or any combination of one or more of them.
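A hedged sketch of the sharing and per-recipient regeneration flow described in the preceding paragraphs, assuming a registered recipient's stored preferences are applied while an unregistered recipient receives a plain notification; all structures and field names are hypothetical.

```python
def share_augmented_content(augmented, recipients, regenerate):
    """Hypothetical sharing flow: registered recipients get content regenerated with their
    stored preferences; unregistered recipients get the content with a plain notification."""
    deliveries = {}
    for user in recipients:
        if user.get("registered"):
            deliveries[user["id"]] = regenerate(augmented, user.get("preferences", {}))
        else:
            deliveries[user["id"]] = {"notification": "augmented content available",
                                      "items": augmented["items"],
                                      "preferences_applied": {}}
    return deliveries

def regenerate(augmented, preferences):
    # Toy regeneration: keep only items whose theme matches the recipient's preferred themes.
    themes = set(preferences.get("themes", []))
    items = [i for i in augmented["items"] if not themes or i["theme"] in themes]
    return {"items": items, "preferences_applied": preferences}

augmented = {"items": [{"title": "Vaccine trial", "theme": "science"},
                       {"title": "Stigma and policy", "theme": "social"}]}
recipients = [{"id": "u1", "registered": True, "preferences": {"themes": ["science"]}},
              {"id": "u2", "registered": False}]
print(share_augmented_content(augmented, recipients, regenerate))
```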
  • FIG. 1 shows Discovery Patterns 10100.
  • FIG. 2 shows a block diagram of an Augmentation System 11500.
  • FIG. 3 shows a block diagram of a Causality and Augmentation System 11600.
  • FIG. 4 shows a block diagram of a Causality Graph Synthesis 11700.
  • FIG. 5 shows a block diagram of the Augmentation System with the causality graph 11900.
  • FIG. 6 is a block diagram of a data augmentation system 12100 that is used to generate augmented content for a given reference content.
  • FIG. 7 is a block diagram of a hierarchical augmentation system 12200 that is used to support the generation of multilevel augmented content using multiple reference content.
  • FIG. 8 is a block diagram of a hierarchical augmentation system 12300 that is used to support the generation of multilevel augmented content using multiple reference content along with a controller that manages the nested augmentation functions and the user's interaction with the generated augmented content.
  • FIG. 9 is a block diagram of a data augmentation system 12400 having the capability of manipulating, controlling and displaying both the reference content and the augmented content simultaneously and dynamically.
  • FIG. 10 is a block diagram of a relevant augmented content extraction 12500 that is used as a subsystem of a data augmentation system.
  • FIG. 11 is a block diagram of a relevant augmented content extraction 12600 that is used as a subsystem of a data augmentation system.
  • FIG. 12 is a display example of the generated augmented content using multiple display layers.
  • FIG. 13 is a display example of a simulated use case of a data augmentation system invoked while viewing a news article.
  • FIG. 14 is a display example of a simulated use case of a user interaction with a data augmentation system invoked while viewing a news article.
  • FIG. 15 shows an example block diagram of knowledge discovery system 13100 in accordance with one embodiment.
  • FIG. 16 shows an example block diagram of knowledge discovery system 14100 in accordance with one embodiment.
  • FIG. 17 shows an example block diagram of Knowledge Discovery Tailoring and Annotation system 14120 in accordance with one embodiment.
  • FIG. 18 shows an example block diagram of Knowledge Sharing and Broadcasting 14130 in accordance with one embodiment.
  • the present disclosure presents techniques, systems and methods to provide a user with global and local context sensitive augmented content to enhance the user experience while interacting with digital information be it while reading, writing, drawing, browsing, searching, viewing, or using digital data information such as financial, medical, business or corporate data, social media data, or any data that is accessible locally or on the web and/or remotely through web based services.
  • These techniques, systems and methods are applicable to various computing platforms such as hand-held devices, desktop computers, notebook computers, mobile devices, as well as compute servers.
  • "Coupled" is defined as connected, although not necessarily directly, and not necessarily mechanically.
  • the terms “a” and “an” are defined as one or more unless this disclosure explicitly requires otherwise.
  • the terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”) and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs.
  • a method or device that “comprises,” “has,” “includes” or “contains” one or more steps or elements possesses those one or more steps or elements but is not limited to possessing only those one or more elements.
  • a step of a method or an element of a device that “comprises,” “has,” “includes” or “contains” one or more features possesses those one or more features but is not limited to possessing only those one or more features.
  • a device or structure that is configured in a certain way is configured in at least that way but may also be configured in ways that are not listed.
  • Example embodiments are described herein in the context of a system of one or more mobile devices, electronic devices, handheld devices, computers, servers, firmware, and software. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Other embodiments will readily suggest themselves to such skilled persons having the benefit of this disclosure. Reference will now be made in detail to implementations of the example embodiments as illustrated in the accompanying drawings.
  • Multilevel, e.g. global and local, context sensitive augmented content would increase productivity and enhance a user's experience while viewing or interacting with data for the purpose of learning, reading, writing, drawing, browsing, searching, discovering, viewing images or any type of user interaction with digital data information whether structured or unstructured (e.g. financial, health, manufacturing, and corporate data).
  • the digital data information may be stored locally or remotely via a corporate server or in the cloud. Additionally, private as well as public sources of data may be used or selected by the user for the ultimate personalized range of choices that may be used to further narrow down or expand the augmented content being presented.
  • multilevel context sensitive augmented content would increase productivity and enhance business intelligence for the enterprise by providing context sensitive augmented content that is generated by dynamically mining and analyzing structured and unstructured enterprise data and/or possibly leveraging structured and unstructured publicly available data for further improving user experience.
  • multilevel context sensitive content augmentation filters provide the ability to dynamically mine data on the fly based on modification of a new input from a user.
  • a new input from a user can be the selection of new text or a portion of the reference content, or it can be feedback such as elevating the priority or weight (e.g. like) or decreasing the priority or weight (e.g. dislike, delete, dismiss) of a single augmented content item, a category of augmented content, or a theme of augmented content.
  • the multilevel context sensitive augmented content can be further in tune with what the user would like to see or expects to see in the augmented content being generated and presented.
  • a feature of the multilevel context sensitive augmented content is that the augmented content is generated either in the cloud or locally, using sophisticated information retrieval algorithms or a set of heuristics, so as to enable large-scale data processing, information retrieval, and web mining. Extracting a feature set from a web page is a well-studied problem with various existing algorithms, methods, and solutions, and this system can use existing research or methodologies to extract a feature set. Furthermore, this system employs a set of heuristics and metrics that efficiently extract a set of features that characterize the reference content at hand.
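One widely used heuristic of the kind alluded to above is term-frequency/inverse-document-frequency scoring; the sketch below is a stand-in under that assumption, not the specific extraction method used by the system.

```python
import math
import re
from collections import Counter

def extract_feature_set(document, corpus, top_k=5):
    """Score terms of `document` by a simple TF-IDF heuristic against a small corpus."""
    def tokens(text):
        return re.findall(r"[a-z]{4,}", text.lower())  # crude tokenizer: words of 4+ letters

    tf = Counter(tokens(document))
    n_docs = len(corpus)
    scores = {}
    for term, count in tf.items():
        df = sum(1 for doc in corpus if term in tokens(doc))   # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1            # smoothed inverse document frequency
        scores[term] = count * idf
    return [term for term, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

corpus = ["general news about weather",
          "a study of the aids virus and a candidate vaccine",
          "stock market news"]
print(extract_feature_set(corpus[1], corpus))
```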
  • a feature of the multilevel context sensitive augmented content is that the information retrieved and knowledge constructed can be saved and called upon in future augmentation tasks and sessions.
  • a feature of the multilevel context sensitive augmented content is that the augmented content is presented through a translucent layer on top of the original content being viewed by the user.
  • this provides a non-obtrusive content augmentation that is hidden or made available whenever a user disables or enables the global and local context sensitive augmented content application.
  • Relevant augmented content is displayed on a translucent layer on top of the original content being viewed by the user.
  • the augmentation system provides a less obtrusive and more efficient interaction, browsing and exploration experience.
  • a multilevel corresponds to at least two levels, a global level and a local level.
  • global and local relevant features of reference content may be defined as follows: a global relevant feature corresponds to a feature or a theme common throughout the reference content, and a local relevant feature corresponds to a feature strongly related to a locality within the reference content.
  • One method of dynamically updating augmented content can be achieved by leveraging real-time user feedback, such as elevating the priority of, or dismissing, augmented content as it is presented to the user. If an augmented content item's priority is elevated, its weight increases and the metadata that describes this augmented content is promoted, which in turn updates existing augmentation filters and leads to generating and presenting new augmentation content based on the new metrics.
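A minimal sketch of such a feedback-driven update, assuming a simple additive weight rule over the metadata terms of the promoted or dismissed item; the rule and names are illustrative, not the patent's algorithm.

```python
def apply_feedback(weights, item_metadata, feedback, step=0.25):
    """Promote ('like') raises the weight of an item's metadata terms, dismiss ('dislike')
    lowers them; the updated weights then drive the next augmentation pass."""
    delta = step if feedback == "like" else -step
    for term in item_metadata:
        weights[term] = max(0.0, weights.get(term, 1.0) + delta)
    return weights

weights = {}
weights = apply_feedback(weights, {"pharmaceuticals", "vaccine"}, "like")
weights = apply_feedback(weights, {"discrimination"}, "dislike")
print(weights)  # promoted terms end up above 1.0, dismissed terms below 1.0
```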
  • the augmented content can be dynamically updated based on user interaction, e.g. selection and/or clicking, within the reference or augmented content in real time.
  • augmented content presentation layers can take various forms, such as dials for global and local augmented content, or a scroll area of small windows for various augmented content. Describing all these various means to implement the augmented content presentation layer is not necessary to understand this disclosure. Furthermore, a person skilled in the art would understand and would be able to employ many different means to implement augmented content presentation layers without departing from the spirit of this disclosure.
  • generating augmented content may result in more data than can be shown on the display; this data can be stored in a deep queue.
  • a deep queue means that there is more augmented content (data) in the queue than what is displayed on the screen. For example, not all mined augmented content can be displayed simultaneously due to physical screen size limitations or the display layer size.
  • a user can hover over the queue or press an arrow to scroll through the augmented content in the queue.
  • the augmented content being presented to the user may comprise actual data, snap shot of the actual data, a processed portion of the actual data, or a link to the location where the actual data can be retrieved.
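The deep queue described above could be as simple as a list plus a scrollable viewport window; the following sketch uses assumed names and a fixed window size for illustration.

```python
from collections import deque

class DeepQueue:
    """Holds more augmented items than fit on screen; exposes a scrollable window."""
    def __init__(self, items, window_size=3):
        self.items = deque(items)
        self.window_size = window_size
        self.offset = 0

    def visible(self):
        # Only this slice is rendered; the rest stays queued off-screen.
        return list(self.items)[self.offset:self.offset + self.window_size]

    def scroll(self, step=1):
        max_offset = max(0, len(self.items) - self.window_size)
        self.offset = min(max(0, self.offset + step), max_offset)
        return self.visible()

queue = DeepQueue([f"augmented item {i}" for i in range(10)])
print(queue.visible())   # first page of augmented content
print(queue.scroll(3))   # user presses an arrow to scroll deeper into the queue
```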
  • Theme-based augmented content can further enhance a user's experience by presenting a set of themes.
  • a new or updated augmented content is presented to the user.
  • An option to expedite augmentation and improve the quality is to rely on the user's preferences and feedback.
  • a set of categories/themes can be presented to the user. These constitute meta-data.
  • augmentation can be enhanced and filtered. For example, a research paper that deals with the AIDS virus would trigger a set of themes such as Pharmaceuticals, Discrimination, etc.
  • The user who is interested in science and pharmacology but not in the social aspects related to AIDS would deselect 'Discrimination'.
  • all augmented content presented will be tailored to refer to categories that are related to science and other related aspects of the research.
  • the theme can further be defined by a category or a set of related categories. This will serve to prune the augmented data and only present the relevant data that is of interest to the user and the task he is carrying out at that moment.
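Continuing the AIDS-research example, theme-based pruning might look like the following sketch; the item structure and field names are assumptions for illustration.

```python
def prune_by_themes(augmented_items, selected_themes):
    """Keep only augmented items whose theme the user left selected
    (e.g. 'Pharmaceuticals' kept, 'Discrimination' deselected)."""
    return [item for item in augmented_items if item["theme"] in selected_themes]

items = [{"title": "New antiretroviral trial", "theme": "Pharmaceuticals"},
         {"title": "Workplace stigma report", "theme": "Discrimination"}]
print(prune_by_themes(items, selected_themes={"Pharmaceuticals"}))
```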
  • Multilevel context sensitive augmented content application can be implemented as a stand-alone application, on top of another application, or as an extension for applications, e.g. a browser extension.
  • further refinement or fine tuning of various options for customization of augmentation system such as aggregating, mining, filtering, and presenting various aspect of data or metadata can be performed dynamically in real-time.
  • the customization of augmentation system may be performed based on at least one or more of a user's feedback, behavior, attributes, characteristics, theme, topics, and interests.
  • the user can provide feedback in the form of liking/disliking the tag. This is similar to promoting or dismissing an augmented content.
  • the augmented content can be updated live. Furthermore, this user's feedback would also result in updating various subsystems such as the underlying data-mining, statistical computing algorithms, or machine-learning algorithms or other information retrieval algorithms or heuristics. These updated subsystems are used to generate or create new signatures, metrics, or features which are based on user's feedback, e.g. liked/disliked tags, where the new signatures are used to generate new augmented content or update the currently presented augmented content.
  • a feature of a system for generating and presenting multilevel context sensitive augmented content is the ability to utilize online and offline mining and analytics for augmentation. For example, mining and processing in real-time or in batch mode and store data in a data store (local or remote) or presenting real-time augmented content to the user. The stored data can be used for future augmentation. Metadata and other relevant data elements can also be annotated in real-time to capture user's preferences and experiences. In addition, metadata and other relevant data elements can be stored in a central repository to be leveraged for future augmentation of same or similar content.
  • a brief description of metadata is that it is data that describes other data. For example: ‘public health’ is a category that encompasses diseases. This higher level category ‘public health’ is a metadata for diseases.
  • a multilevel context sensitive augmented content system uses at least two levels, a global level and a local level. The following explains the difference between global and local augmented content.
  • Global augmented content refers to augmented data that pertain to the overall document that the user is currently browsing, exploring, or interacting with.
  • a local augmented content can refer to augmented content based on a particular piece, paragraph, sentence, word, image, icon, symbol, etc. . . . of that document that the user is currently browsing, exploring, or interacting with.
  • Global & local augmented content are presented using a dynamic deep queue, and the user can control the displaying of at least a portion of the augmented content.
  • Content sources for augmentation can be provided from many sources.
  • An example of such content sources includes but is not limited to a user's own documents and data on desktop, web-content, social media sites, enterprise data-marts, and local and remote data stores, ontologies, other categorization, and/or semantic or relationship graphs.
  • the multilevel context sensitive augmented content can be successfully implemented to augment a user's browsing experience as discussed above.
  • a system for generating and presenting multilevel context sensitive augmented content can be successfully implemented as an application for augmented user experience (UX).
  • the system can increase productivity and provide an augmented data-mining and data-exploration platform; an augmented e-learning and e-research system; augmented desktop-based and mobile-based browsing, exploration, research, discovery, and learning platforms; data augmentation for better healthcare products and services; data augmentation for better educational products and services; an augmentation system for better content management and relationship platforms for both enterprise and consumer applications; enhanced online-shopping research and UX; enhanced marketing campaigns; and an enhanced news access UX, to name a few of the applications benefiting from a system for generating and presenting multilevel context sensitive augmented content.
  • Semantic processing is the process of reasoning about the underlying concepts and expressing their relationships.
  • the following semantic based techniques can also be used in a system for generating and presenting augmented content.
  • utilizing existing tags in public sources, utilizing batch-processed tags as a cloud application, semantic processing of selected content to generate a match to an existing tag, semantic processing to generate augmented content on the fly, and utilizing the user's feedback for promoting and dismissing augmented content are but examples of methods to provide better user-relevant augmented content.
  • Generating augmented content on the fly can also be accomplished by using a feedback mechanism provided by the user to enable mining and generating of new augmented data to be presented to the user.
  • a system for generating and presenting multilevel context sensitive augmented content is used to improve the analytics of large data sets by leveraging pre-processed data and already generated relationships.
  • when a user presents some keywords to a search engine, the user gets a set of related links in addition to some ads that could very well be related to the keywords the user has entered or to some personal data known about or extracted from the user.
  • the content presented to the search engine can be parsed from the HTML or another format or interface produced by a data provider, or it can be scanned through OCR if the data format is encrypted.
  • This ability to take a snapshot of a screen and to analyze and leverage its data and relationships empowers and simplifies the augmentation and analytics processes and improves throughput, since the signatures/correlation metrics extracted are the result of processing a significantly smaller set of data. Therefore, the performance gain of a system for generating and presenting multilevel context sensitive augmented content is orders of magnitude compared to mining massive data sets in the cloud.
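To illustrate the claim that processing only the on-screen content is cheaper than mining a full corpus, here is a toy sketch under assumed names; the viewport bounds and the notion of a 'signature' are placeholders.

```python
def visible_text(full_document, viewport_start, viewport_end):
    """Return only the portion of the document currently shown on screen."""
    return full_document[viewport_start:viewport_end]

def signature(text):
    # Toy 'signature': the set of distinct words of four or more letters.
    return {word.strip(".,").lower() for word in text.split() if len(word.strip(".,")) >= 4}

document = ("A long article about the AIDS virus, candidate vaccines, "
            "clinical trials, and research funding. ") * 50
on_screen = visible_text(document, 0, 200)  # only ~200 characters are actually displayed
print(len(signature(on_screen)), "terms extracted from the viewport rather than the full document")
```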
  • a system for generating and presenting multilevel context sensitive augmented content presents the augmented content along with the reference content using two or more different presentation layers displayed using the same display screen.
  • the system provides the ability to customize the generation of augmented data in situ (in place) while working on original or reference content, where the augmented data can be displayed on see-through presentation layers so as not to obscure the original or reference content and to maximize use of the display screen and/or the displaying area.
  • a system for generating and presenting multilevel context sensitive augmented content utilizes dynamic updates of displayed augmented content using presentation layers while a user views and manipulates reference content displayed using another presentation layer. It is preferable to use a translucent presentation layer for the augmented content presentation layer that is located on top of the displayed reference content so that the user can easily manipulate or interact with the reference content while simultaneously viewing the dynamically updated augmented content.
  • displaying relevant augmented data in a separate tab or page would result in loss of context relationship and provides a less efficient and less friendly user experience.
  • displaying the augmented content on the sidebars is possible as well. However, it consumes screen space and hinders displaying of the reference content. Therefore, the ability to keep the reference content accessible to the user while displaying the augmented data on top of the original content provides a much smoother and efficient user experience.
  • a system for generating and presenting multilevel context sensitive augmented content gives a user the ability to associate any of the augmented content with the reference content, or with an attribute of the reference content source, using one or more types of metadata.
  • the system enables the user to save the associated metadata for future use or sessions.
  • the association of metadata can be accomplished by embedding a link in the text, by associating a link with a text, or by associating any data or metadata with the reference content or any part of the reference content.
  • the user has the ability to specify a category or more as a source or criterion of augmentation.
  • the user can also define association rules that join a group of attributes, categories, and other metrics together to provide a richer input to aid the augmentation system to generate more relevant augmentation content.
  • an enterprise sales projection document can always be augmented with any data source or data documents that generated the projection.
  • in this example, the criterion is a category that specifies source sales data and not necessarily the exact data documents.
  • the sales data can be extracted automatically by the augmentation system.
  • the augmentation system can carry out an updating procedure for any associated data or metadata for any other reference content.
  • the augmented content is displayed using see-through layers so that the user always sees and has access to the original or reference content.
  • the user is able to access, browse, move, select, hide, tap, scroll, or interact with the reference or augmented content while the system dynamically generates and displays an updated augmented content using the augmentation presentation layer. It is noted that the user interaction with the reference or augmented content can result in having a new reference content that the user wishes to interact with, hence, a new augmented content is generated and displayed.
  • the system keeps track of and saves certain information regarding this nested augmentation level.
  • the system provides the user the ability to switch back and forth between various nested augmentation levels as well as saving or sharing the augmentation filters or settings used for a particular session.
  • further enhancement of the user experience is achieved by enabling the user to change the skin (or look) of the user interface (UI) of the augmentation system.
  • the same components of a UI (buttons, options, data) can be displayed on the screen in a variety of ways.
  • a library of templates and color options can be provided to allow the user to customize the augmented content presented by the application.
  • the global augmented content and local augmented content can be displayed using one or more different regions of the screen or displaying the global and local links to the augmented content in two concentric circles around the reference content.
  • the enhancement of the user experience is achieved by enabling the user to choose the most efficient way for that user to utilize the augmented content.
  • user selectable skins can also be used to cover or hide pushed content that may exist or embedded in the reference content being viewed.
  • User selectable areas of a skin can be used to enable the display of user selected content such as images or augmented content or pushed content such as advertisement. For example, an ad for tickets to a local concert when the user is browsing a specific artist, or an ad for a book that relates to a global or local augmented content of the user reference or currently viewed augmented content, or any other monetization mechanism based on the augmentation process.
  • the enhancement of the user experience includes a nested multilevel context sensitive augmented content where the augmented content presented to the user can be further enhanced as a function of the various nested levels.
  • the augmented content is presented while keeping track of the current content being viewed in relationship to the original content that the user started with and all levels in between. This provides a hierarchical augmentation system that enables the user to access and build nested levels of augmentation.
  • the user interface, or UI, for a system for generating and presenting multilevel context sensitive augmented content can be launched or started automatically and stays hidden from view until the user invokes a predefined programming function to enable the UI functionality. For example, a single tap, hot-key, function-key, a gesture, or a combination of actions performed on a content would cause the transparent augmentation layer to be shown with the augmented content and in accordance with user preferences, such as tags, skins, themes, etc. Selecting content presents or updates the augmented content already presented. Visiting an augmentation link results in completely or partially (split screen) covering the reference content or original layer comprising the original content.
  • the UI provides the user the ability to navigate nested augmented content or jump back to reference or original content.
  • additional UI features can further be used to increase the overall efficiency and provide a better user experience. For example, saving the augmented content metrics in user history, and using history to enhance and/or tailor analytics and augmentation so as to be more relevant to each individual user or group of users, such as in a corporate environment.
  • Metrics here refer to the generated signatures as mentioned above. Also, it refers to any annotations that are provided by the user such as priority, liking/promoting an augmented content or dismissing it. This can be stored for future sessions as well as using the augmented content promotion and dismissal to enhance augmentation in real time.
  • skins that cover an undesirable part of the screen e.g. side columns where ads are pushed. The skin may be used for further customization of the viewed screen and potentially could be monetized and leveraged to present relevant augmented content that is paid for by the user, such as ads for objects, e.g. books, related to the content of a reference article.
  • a system for generating and presenting multilevel context sensitive augmented content provides dynamic user-guided and customized context-sensitive data augmentation to facilitate learning, exploration and knowledge discovery.
  • the system provides simultaneous interaction with the augmentation layer and the content layer.
  • the system generates augmentation data based on user-defined metrics and filters such as themes, categories of interest, document content and/or part of it.
  • the generated data is not a rigid augmented content.
  • the generated augmented content is any data, concept, and relationships that are presented as a result of the data mining and processing of the original content and the user-defined metrics and filters.
  • the system utilizes dynamic and interactive methods to successively refine and tailor the augmented content based on a user's guidelines, filters, and metrics.
  • the system relies on a variety of sources for content augmentation by accessing any online or offline databases, crowd-sourced databases, or open databases.
  • a custom built graph of concepts and relationships can be built between different pieces of data as they are processed and augmented based on the user's filters and metrics to improve the performance of the system and the User Experience.
  • the system provides a context-sensitive hierarchical augmentation framework for deeper and expansive exploration and knowledge discovery.
  • the system enables construction of a customized graph of data, concepts, and relationships based on the filters and metrics provided even in the absence of content. Content can be generated on the fly for further exploration.
  • the system enables sharing of augmented data and the associated metrics that generated them. This enables richer knowledge discovery by further refining a user's augmented data based on other users' augmented content. This is useful for collaborative research and knowledge discovery.
  • the system can be launched from offline and online documents or reference content to generate the augmentation content, data and graph of relationships amongst the concepts represented by the augmented content.
  • the system provides a UI to display and manipulate reference content and augmented content concurrently, dynamically, and interactively.
  • the system provides one or more translucent layers on top of the reference content to show the augmented content.
  • Translucent layers facilitate displaying the reference content as well as the augmented content.
  • Translucent layers can fully or partially cover the original content.
  • Augmentation layers can be hidden, minimized (shown as an icon), or moved around on the display screen to facilitate easier display and interaction with the reference content.
  • the system enables the user to manipulate and control a set of display layers (reference content layer, and/or augmentation display layers) in a very flexible fashion such that the user can resize, move, show, or hide any of those display layers.
  • the system provides an intuitive, rich, and friendly UX for data exploration and knowledge discovery on small and large display screens.
  • displaying of the augmented content concurrently and interactively on the original content empowers the user to use this system on smart phones, tablets, and any other display.
  • the system provides means to insert additional content on the augmentation layers based on analytics on the augmented content and the original content.
  • Knowledge discovery system serves to augment, clarify, enrich, and expand on a relevant topic or topics in a document.
  • a number of information processing techniques are carried out to disambiguate information and extract names, concepts, events, and other relevant metadata using Named-Entity Recognition (NER), and topic modeling is used to discover topics related to the reference document.
  • Such topics can be either explicitly mentioned or discovered by relying on techniques based on information processing, data mining, and machine learning applied to discovery patterns, causality graphs, and other web and data repositories.
  • NER processes the document, disambiguates names and concepts, and extracts names, concepts, dates, name phrases, and any other data that can be parsed, processed, or inferred.
  • The latest information extraction, data mining, and natural language processing techniques and algorithms can be used in this step.
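As an illustrative sketch only (assuming the open-source spaCy library and its en_core_web_sm model are installed; any comparable NER toolkit could be substituted), the NER step described above might be approximated as follows:

```python
# A minimal sketch of the NER step, assuming spaCy and the small English
# model "en_core_web_sm" are installed; any comparable NER toolkit works.
import spacy

def extract_entities(reference_text: str) -> dict:
    """Return named entities from the reference content, grouped by type."""
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(reference_text)
    entities: dict[str, list[str]] = {}
    for ent in doc.ents:
        # ent.label_ is the entity type (PERSON, ORG, DATE, GPE, ...)
        entities.setdefault(ent.label_, []).append(ent.text)
    return entities

if __name__ == "__main__":
    sample = "Company X acquired Company Y in March 2012 for $1.2 billion."
    print(extract_entities(sample))
```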
  • Topic Modeling and Topic Graph: mine data from stored knowledge graphs, causality graphs, or other repositories to extract and cluster topics and categories from the mined names, concepts, and other processed data.
  • Latest research in data clustering, topic extraction, inference, modeling, and latent topic discovery can be used to build a topic graph.
  • the terms topic and cluster are interchangeable in this graph.
  • a cluster is a set of related data that share a set of common features and relationships. One of those features or relationships can be a theme.
  • a topic is a cluster of documents that share a common theme. Not all relevant topics can be discovered by the topic extraction step; more topics (relevant and possibly hidden) can be extracted with the aid of the discovery patterns and causality graphs below.
  • Hierarchical Graph: discovery of intermediate topics and themes to discover/expose relationships between related topics and clusters. This graph can be a pre-defined taxonomy, or a hierarchically constructed graph based on different levels of coarse and fine clusters built from the available content.
  • Data Clustering is the process of constructing a set of clusters of related documents. The relatedness is defined based on a set of desired features and/or relationships.
  • Topic Modeling above is a form of data clustering where each topic is a cluster that shares a common theme.
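A minimal sketch of the clustering/topic-modeling step, assuming scikit-learn is available; the document texts, the number of clusters, and the TF-IDF plus k-means choice are illustrative assumptions rather than the required algorithm:

```python
# A minimal sketch of clustering documents into themed clusters (topics),
# assuming scikit-learn is installed; cluster count is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_topics(documents: list[str], n_topics: int = 3, top_terms: int = 5):
    """Group documents into clusters and report the terms defining each theme."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(documents)
    km = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit(X)
    terms = vectorizer.get_feature_names_out()
    topics = []
    for center in km.cluster_centers_:
        ranked = center.argsort()[::-1][:top_terms]  # highest-weight terms first
        topics.append([terms[i] for i in ranked])
    # km.labels_ assigns each document to a cluster; topics lists its theme terms.
    return km.labels_, topics
```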
  • Discovery Patterns (DP) 10100 are templates that aid the discovery system to extract the relevant knowledge for a topic or a concept, as shown in FIG. 1 .
  • for a topic, the system will query for the relevant properties or relationships that are annotated on the topic or its meta-topic.
  • a DP can help in defining a set of competency questions that will be very important to data augmentation and knowledge discovery.
  • DPs define a set of Competency Questions (CQs) that can be applied to the data/content to extract and discover salient content and relationships.
  • Type 10111 , Industry 10113 , Equivalents 10115 , and Treats 10117 are examples of discovery patterns for a topic Product 10110 .
  • Type 10121 , News Event 10130 and Actors 10123 are examples of discovery patterns for a topic Disease 10120 .
  • Type 10131 , Place 10135 , Date 10137 and Actors 10133 are examples of a topic News Event 10130 .
  • These discovery patterns can be pre-defined, manually constructed, extracted from other data sites or repositories, or crafted on the fly. They can also be further enhanced and massaged as the system gathers more data. Also, they can be tailored based on user's specific features and interests.
  • Competency Questions define a set of queries that are very specific to the content at hand. These queries enable very focused knowledge discovery. These CQs are domain dependent. For example, the set of CQs for knowledge discovery of legal corpus is different from the set of CQs for knowledge discovery of medical corpus.
  • In addition to leveraging pre-defined CQs modeled in pre-defined DPs, our system enables the user to provide custom-defined DPs and their associated CQs, and it can automatically infer a set of CQs and dynamically construct a set of DPs based on the available content and features.
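A minimal sketch of how a Discovery Pattern and its Competency Questions could be represented; the topic, slots, and question templates below are hypothetical examples, not a fixed schema:

```python
# A minimal sketch of a Discovery Pattern (DP) carrying Competency
# Questions (CQs); names, slots, and templates are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DiscoveryPattern:
    topic: str                         # e.g. "Product", "Disease", "News Event"
    slots: list[str]                   # properties/relationships to fill
    cq_templates: list[str] = field(default_factory=list)

    def competency_questions(self, entity: str) -> list[str]:
        """Instantiate the CQ templates for a concrete entity."""
        return [t.format(entity=entity) for t in self.cq_templates]

# Example: a pre-defined DP for the topic "Product" (hypothetical content).
product_dp = DiscoveryPattern(
    topic="Product",
    slots=["Type", "Industry", "Equivalents"],
    cq_templates=[
        "What type of product is {entity}?",
        "Which industry does {entity} belong to?",
        "What are the equivalents or competitors of {entity}?",
    ],
)

print(product_dp.competency_questions("Widget-X"))
```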
  • Ontologies and public repositories provide pre-defined sets of concepts and relationships that can be leveraged in the knowledge discovery process.
  • Wikipedia, Wordnet, Freebase, Verbnet are examples of such repositories that are rich, and constantly updated. Although these ontologies and repositories are bulky, they are rich with relevant content. Our system leverages these repositories amongst other sources to discover rich augmentation content.
  • An Abstracted Causality Graph is constructed from the causality graph (CG); the CG is abstracted so that similar topics, relationships, causes and effects, and their meta-subjects are captured. This aids in leveraging all this knowledge to augment and enrich new information and knowledge, which is essential for knowledge discovery.
  • An example of an abstracted concept: if company X acquires company Y, it is not important who X and Y are; what is important is the notion that a company can acquire another company. This way, when we see the name of a company in a new document, we can automatically ask about any prior or expected acquisitions for the company at hand.
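A minimal sketch of recording a concrete causal fact and its abstracted, type-level counterpart, assuming the networkx library; the entity-to-type mapping is illustrative only:

```python
# A minimal sketch of a causality graph plus its abstraction, using networkx
# (an assumption); the entity-to-type mapping is illustrative only.
import networkx as nx

def add_causal_fact(cg: nx.DiGraph, cause: str, effect: str, relation: str,
                    types: dict[str, str]) -> None:
    """Record a concrete cause/effect edge and its abstracted, type-level edge."""
    cg.add_edge(cause, effect, relation=relation, abstract=False)
    # Abstract the instance ("Company X acquires Company Y") to the type-level
    # notion ("a Company can acquire a Company").
    cause_type = types.get(cause, "Entity")
    effect_type = types.get(effect, "Entity")
    cg.add_edge(cause_type, effect_type, relation=relation, abstract=True)

cg = nx.DiGraph()
entity_types = {"Company X": "Company", "Company Y": "Company"}
add_causal_fact(cg, "Company X", "Company Y", "acquires", entity_types)
# The abstract edge lets the system ask acquisition questions about any newly
# seen company, even if X and Y never appear again.
print(list(cg.edges(data=True)))
```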
  • Data Augmentation as was discussed in a previous disclosure refers to local and global data augmentation and knowledge that are extracted and presented to the user to further expand on the document at hand.
  • This data is based on all the knowledge modeled in the discovery graph (topics, clusters, and relationships), causality graph, discovery patterns library, and other on the fly information extraction.
  • This augmentation will facilitate Timeline Events related to both local and global augmentation data and will provide a rich knowledge discovery experience. The user can browse in time to discover relevant knowledge about the topic or topics at hand.
  • A block diagram for an Augmentation System 11500 is shown in FIG. 2 .
  • the goal of this system is to read, synthesize, and/or extract a set of competency questions that will enable smarter content discovery and augmentation.
  • Competency Questions are a set of very specific, well-defined, feature-rich queries that guide the knowledge discovery process.
  • Box 11510 reads and processes a set of 'competency questions' from the user.
  • Box 11520 synthesizes those CQs based on automated template generation that defines and fills the relevant competency questions for the content at hand. Synthesis is based on extracted relationships of topics in the CG, topical graphs, or other synthesized relationships.
  • Box 11530 extracts those CQs based on content discovered by processing the relevant topics and entities.
  • Box 11540 compiles and outputs the constructed set of CQs as produced by any or all boxes 11510 , 11520 , and 11530 .
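A minimal sketch of the CQ compilation flow of FIG. 2 (Boxes 11510-11540), in which user-provided, synthesized, and extracted CQs are merged into one deduplicated set; the inputs are placeholders:

```python
# A minimal sketch of combining CQs from the three sources described above;
# the example question strings are hypothetical.
def compile_competency_questions(user_cqs, synthesized_cqs, extracted_cqs):
    """Merge CQs from all three sources, dropping duplicates but keeping order."""
    compiled, seen = [], set()
    for source in (user_cqs, synthesized_cqs, extracted_cqs):
        for cq in source:
            key = cq.strip().lower()
            if key not in seen:
                seen.add(key)
                compiled.append(cq)
    return compiled

cqs = compile_competency_questions(
    user_cqs=["Who manufactures the product?"],
    synthesized_cqs=["Which industry does the product belong to?"],
    extracted_cqs=["Who manufactures the product?", "What does the product treat?"],
)
print(cqs)
```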
  • a block diagram for Causality and Augmentation System 11600 is shown in FIG. 3 .
  • the goal of this system is to construct a causality graph that captures the cause-effect between different mined or discovered topics in the system. This causality relationship extraction adds another dimension to the knowledge discovery system.
  • Box 11610 defines a set of prior topics that are relevant to content.
  • Box 11620 defines a reference topic to be augmented.
  • Box 11630 defines a set of topics that are caused by the prior topics and related to the current reference topic.
  • Box 11640 defines the set of actors that are at play in the causal relationships. These actors can be named entities (persons, locations, organizations, groups).
  • Box 11650 defines a set of topics and their related categories so that the correct causal relationship is used should there be more than one relationship.
  • Box 11660 defines a set of Discovery Patterns (DPs) that will enable the system to extract the right metadata and annotations when discovering the causal relationships.
  • Box 11680 defines a set of user-provided features that will aid in this discovery process.
  • Box 11670 defines the causality graph that is the result of processing all the inputs defined in the previously mentioned boxes.
  • Box 11690 is the set of causal relationships and relevant content to be added to the augmented content.
  • a block diagram of a Causality Graph Synthesis 11700 is shown in FIG. 4 .
  • the goal of this system is to build and abstract the causality graph such that it is applicable to a different set of actors and entities that share the same set of relationships defined in the graph.
  • Box 11710 defines a set of topics and relevant content that can be mined from any source, public or private. These constitute the nodes in the causality graph. This system builds and infers edges between those nodes based on a set of rules, heuristics, discovered relationships, or pre-defined relationships.
  • Box 11720 extracts relationships between the presented entities.
  • Box 11730 extracts relationships between the presented topics and their corresponding categories, and Box 11740 extracts instances of causalities based on the presented content itself.
  • Box 11760 checks the existing causality graph for the discovered or inferred edges. If they are not present already, they are added to the causality graph (CG). Box 11750 processes the updated causality graph (CG) and infers abstracted relationships and adds it to the graph so that the CG becomes more abstract and applicable to future instances of relevant topics and entities. Box 11770 is the output of this system that presents a rich and abstract causality graph.
  • A block diagram of the Augmentation System with the causality graph 11900 is shown in FIG. 5 .
  • This block diagram shows an overview of the whole augmentation system operation.
  • Block 11901 shows the document that needs to be augmented.
  • Box 11905 shows the features that were extracted from this document as signatures to aid in finding relevant augmentation.
  • Box 11910 defines the named-entity-recognition system that extracts the salient entities in the system.
  • Box 11915 presents the set of entities extracted.
  • Box 11920 presents other features or properties such as dates or others that will further aid in augmentation.
  • Topic Model 11925 includes a Feature Set Priority Engine 11930 and a Topic Modeling 11935 .
  • Box 11930 shows a ranking engine for the features presented so that noisy or less salient features are pruned out to further aid in higher quality augmentation content.
  • Clustering and Topic modeling is executed on this relevant content in Box 11935 .
  • Box 11940 presents the set of relevant clusters and topics that are constructed. Further content augmentation is carried out by leveraging a library of pre-defined (Box 11960 ), dynamically synthesized (Box 11950 , Box 11955 ), or user-provided (Box 11965 ) discovery patterns.
  • Box 11955 defines a mapping between a discovery pattern in the library and a synthesized relationship based on the presented features.
  • Box 11970 presents the resultant set of discovery patterns.
  • Box 11975 processes those patterns by examining the causality graph (CG) to see if such relationships exist or are defined. Further augmentation content can be added to the causality graph by processing relevant documents in public or private repositories (Box 11985 ).
  • Box 11990 presents a new set of entities and topics from the freshly mined content.
  • Box 11995 extracts a timeline from the freshly mined content so that the right part of the causality graph is updated; the extracted dates and timeline are further used to link the relevant topics together (Box 11995 ).
  • the data in Boxes 11990 and 11995 is further utilized to infer and extract more knowledge from the Causality Graph in Box 11100 .
  • Box 11100 presents the new augmentation content that will be added to the causality graph. At the end of this process, a rich set of local and global augmentation, along with a knowledge graph with a timeline that connects the different topics (local and global) and the mined relationships and properties, will be available.
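As a small illustration of the feature-ranking step performed by the Feature Set Priority Engine (Box 11930 ) in the walkthrough above, the following sketch scores features, boosts user-preferred ones, and prunes the noisy tail; the frequency-times-boost scoring scheme is an assumption for illustration:

```python
# A minimal sketch of feature ranking and pruning; scoring by frequency
# weighted by user-preference boosts is an illustrative assumption.
from collections import Counter

def rank_and_prune(features: list[str], boosts: dict[str, float],
                   keep_top: int = 10) -> list[str]:
    """Score features, boost user-preferred ones, and drop the noisy tail."""
    counts = Counter(features)                      # raw frequency of each feature
    scored = {f: c * boosts.get(f, 1.0) for f, c in counts.items()}
    ranked = sorted(scored, key=scored.get, reverse=True)
    return ranked[:keep_top]                        # prune less salient features

top_features = rank_and_prune(
    features=["AIDS", "research", "AIDS", "ad", "clinical trial", "ad"],
    boosts={"research": 2.0, "ad": 0.1},
    keep_top=3,
)
```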
  • a block diagram of an Augmentation System 12100 is shown in FIG. 6 .
  • a Reference Content 12105 corresponds to any electronic document or web page that a user wants to invoke the Augmentation System 12100 to get Augmented Content 12190 .
  • the Reference Content 12105 can be stored locally in a memory subsystem of an electronic device, a memory subsystem of a display screen device, or is accessed from a remote location via a wired or wireless communication system.
  • the communication system could use the internet, a cloud, a data store, a computing device, server or a database via a wired or wireless networking link.
  • the augmented content is content generated by the Augmentation System 12100 based on the Reference Content 12105 using a set of features, filters, and categories which are produced by at least one of the Extract Features 12120 , Extract Categories 12125 , and Update Categories 12137 subsystems as shown in FIG. 6 .
  • a Local Content 12110 is a selected portion of the Reference Content 12105 which the user wishes to get more specific augmentation about, or that is a portion of the Reference Content 12105 that the user is interacting with. Furthermore, the Local Content 12110 may also be automatically selected, tagged, managed, or generated by the Augmentation System 12100 , e.g. based on a displayed portion of the Reference Content 12105 or a user interaction with a portion of the Reference Content 12105 . Furthermore, the presentation and/or the displaying of the Augmented Content 12190 is managed using Manage RAC 12145 (RAC refers to Relevant Augmented Content) to control a Display Queue 12165 and Display RAC 12170 .
  • the Augmentation System 12100 generates Augmented Content 12190 by facilitating the construction of a user-customized network of concepts, objects and relationships that serve to augment the Reference Content 12105 at hand for the purpose of knowledge discovery, learning, and a richer user experience in browsing and/or interacting with data information.
  • This Augmentation System 12100 generates any one of a network of concepts, a network of objects, and a network of relationships using one or more of a set of features, a set of filters, and a set of categories. Each of the set of features, the set of filters, and the set of categories can be customized and tailored based on the user's interests and input.
  • the constructed network can be saved and further augmented over time for richer and more efficient user experience.
  • the Extract Features 12120 subsystem extracts a set of features from the Reference Content 12105 .
  • General Features 12117 can provide a set of features that can be updated and tailored over time to at least one of a specific user, specific project, specific objective, and specific subject.
  • Extract Features 12120 generates a set of filters that denotes the desired concepts for augmentation. For example, these concepts could be names of people, history, events, topics, or other meta-data. These data are either computed on the fly or pre-computed and stored locally or remotely for current or subsequent augmentation sessions.
  • This extraction process is based on embedded data in at least one of the Reference Content 12105 , content linked to the Reference Content 12105 , and metadata of the Reference Content 12105 , e.g.
  • any part of the Augmentation System 12100 can be run remotely on a server or in the cloud, or it can be run locally on the host device.
  • the Extract Categories 12125 function uses a set of categories or topics that are extracted based on the data that can be associated or extracted from the Reference Content 12105 . This data can be either meta-data or any other related data to the Reference Content 12105 .
  • the Extract Categories 12125 extracts a set of categories from the Reference Content 12105 and its associated links and data. Also, the system utilizes any embedded categories or meta-data that are either embedded in the link or attached to the Reference Content 12105 .
  • the extracted categories can also describe meta-data about the topic at hand. For example, if the reference content is an article about AIDS, there are many categories that can augment data about AIDS. For example, a set of categories can be: History of AIDS, Science of AIDS, Social Impact of AIDS, Symptoms of AIDS, etc. .
  • a user may only be interested in the science of AIDS, so a user will interact with the presented categories, e.g. by deselecting all categories that are not related to science, and this will impact the set of features that are used in augmenting the Reference Content 12105 .
  • Other data that can be extracted or inferred can be further used for constructing a more meaningful category set by utilizing a variety of information retrieval, extraction, and inference algorithms and methods.
  • a General Categories 12115 is a set of default categories that the Update Categories 12137 processes to reflect the user's interests.
  • the General Categories 12115 can be Business, Politics, Education, Research, Health, Technology, etc. . . .
  • the Update Categories 12137 may use this optional input from the user to bias the augmentation to the categories of interest. This optional input can be stored and updated over time.
  • the interaction of a user with the Augmented Content 12190 may be accomplished in a variety of ways.
  • the user may select one or more of the presented categories for removal, selection, decreasing priority, and increasing priority.
  • the user may also define, modify, or interact with an association rule to aid Extract Features 12120 to generate a more useful set of filters for better augmented content.
  • the association rule can leverage, use, or join one or more categories, features, filters, or concepts to (i) generate a new set of features, filters, categories, or Augmented Content 12190 , and (ii) modify one or more of the set of features, filters, or categories which are being used to generate the Augmented Content 12190 .
  • an Update 12130 function enables the user's input to be considered by Update Categories 12137 , e.g. a user may choose to delete some of the default/general categories that are not of interest or to elevate the priorities of some of those categories.
  • When deleting categories, the Update Categories 12137 will reduce the weight of the features that are related to those categories. When categories are elevated in priority, the Update Categories 12137 increases the weight given to the features that are related to those categories, thus affecting and updating the Augmented Content 12190 presented to the user.
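A minimal sketch of how Update Categories 12137 might adjust feature weights when categories are deleted or elevated; the weight multipliers and the category-to-feature map are illustrative assumptions:

```python
# A minimal sketch of weight adjustment on category deletion/elevation;
# the 0.5x / 1.5x multipliers are placeholders, not prescribed values.
def update_feature_weights(weights: dict[str, float],
                           category_features: dict[str, list[str]],
                           deleted: list[str], elevated: list[str],
                           down: float = 0.5, up: float = 1.5) -> dict[str, float]:
    """Reduce weights of features tied to deleted categories, boost elevated ones."""
    for cat in deleted:
        for feat in category_features.get(cat, []):
            weights[feat] = weights.get(feat, 1.0) * down
    for cat in elevated:
        for feat in category_features.get(cat, []):
            weights[feat] = weights.get(feat, 1.0) * up
    return weights

weights = update_feature_weights(
    weights={"pharmacology": 1.0, "discrimination": 1.0},
    category_features={"Science": ["pharmacology"], "Social": ["discrimination"]},
    deleted=["Social"], elevated=["Science"],
)
```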
  • An Update Filters 12150 is used to indicate a user's preference for a feature or automatic feedback based on user's interaction with the Augmented Content 12190 . For example, when one or more of the Reference Content 12105 , Local Content 12110 , and Augmented Content 12190 get updated or interacted with by a user, then more clues and feedback can be gathered from the updated list or the user's interaction as to revise the features and categories that are of interest to the user in real time. However, the user may choose not to update the features and categories, and the Augmentation System 12100 provides the user the ability to control how and when the Augmented Content 12190 is generated and/or updated.
  • An Update Features & Categories 12135 subsystem receives a first set of features from the Extract Features 12120 subsystem, a first set of categories from the Extract Categories 12125 , and/or an updated set of categories from the Update Categories 12137 , and/or an Update Filters 12150 .
  • Update Features & Categories 12135 manages and controls the updating of the actual features and categories sets including any decision making based on the user input or interaction.
  • the Update Features & Categories 12135 may communicate with any one of Extract Features 12120 , Extract Categories 12125 , and Update Categories 12137 to generate more features and categories based on a variety of parameters including the user's preferences.
  • Update Features & Categories 12135 also handles updating relationships and cleaning up for those features and categories that were updated by the user.
  • a Compile RAC 12140 subsystem receives a set of categories and a set of features from the Update Features & Categories 12135 subsystem.
  • Compile RAC 12140 includes a variety of functions and algorithms, such as machine-learning, data mining and extraction, web crawling, data-mart accessing, extraction and processing functions, and other intelligent algorithms and approaches, which are used to compile a set of relevant augmented content or pages (RACs) based on at least one of the Reference Content 12105 , Local Content 12110 , and the interest of the user.
  • Managed RAC 12145 subsystem is the controller that manages the presentation of the Augmented Content 12190 via a Display Queue 12165 and Display RAC 12170 .
  • the Augmentation System 12100 listens to inputs from the user and manages the generation of the Augmented Content 12190 .
  • the Managed RAC 12145 subsystem generates three outputs taking into consideration a user's feedback or input.
  • the Managed RAC 12145 subsystem generates and controls the communication of the generated Augmented Content 12190 using Display Queue 12165 and Display RAC 12170 .
  • Managed RAC 12145 generates an update request to Update RAC 12155 for any necessary update to the Display Queue 12165 based on a user's interaction or input.
  • the Display Queue 12165 displays in a desired skin at least a portion of the queue of RACs so that the user can browse through them and select some to view.
  • the Display Queue 12165 displays a link, a summary, or a portion of the compiled relevant content or pages.
  • the Display RAC 12170 retrieves the respective relevant page RAC and displays at least a portion of it.
  • the Display RAC 12170 subsystem manages and controls the displaying of the Augmented Content 12190 using the display screen.
  • Display RAC 12170 can use one or more display layers on top of the Reference Content 12105 or Local Content 12110 via translucent display layers as discussed in previous paragraphs.
  • A block diagram of a Hierarchical Augmentation System 12200 is shown in FIG. 7 .
  • This hierarchical augmentation or nested augmentation capability enables a user to augment any content that is the result of data augmentation at any level of browsing or exploration.
  • the Augmentation System 12210 may select any one of the RACs or a group of RACs to invoke the augmentation system on and to generate another level of augmentation.
  • the Augmentation System 12200 allows the user to go back and forth in the hierarchical graph to browse any particular content at any level, be it a reference or augmented content.
  • the Augmentation System 12200 provides augmented content at Process 1 12220 , which is the first invocation of the augmentation system on reference content; a user may elect to augment one or more items of the augmented content of Process 1 12220 .
  • the Augmentation System 12200 uses the elected content to be augmented from Process 1 12220 as an input or reference content to Process 2 12230 for augmentation.
  • Process 2 12230 , which is considered the second invocation of the augmentation system on a reference content, generates in turn augmented content which the user can further refine or interact with, and so on for Process K 12240 , Process (n-1) 12250 , and Process (n) 12260 .
  • Multilevel nesting or hierarchical augmentation is not limited to a specific number of levels. Of course, certain hardware or software limitations or a particular application may dictate the use of a specific number of levels. However, this is an option that can be used to various extents as part of the customization of Augmentation System 12200 for any particular usage.
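A minimal sketch of nested augmentation levels kept as a stack so the user can drill down and jump back; augment() here is a stand-in for one invocation of the augmentation system, not its actual interface:

```python
# A minimal sketch of hierarchical (nested) augmentation levels; the
# augment callable and RAC representation are placeholders.
class HierarchicalAugmentation:
    def __init__(self, augment):
        self.augment = augment   # callable: reference content -> list of RACs
        self.levels = []         # level k holds (reference, augmented) for Process k+1

    def drill_down(self, reference):
        """Invoke the next Process on the elected content and record the level."""
        augmented = self.augment(reference)
        self.levels.append((reference, augmented))
        return augmented

    def jump_back(self, level_index: int):
        """Return to an earlier nesting level, discarding deeper ones."""
        self.levels = self.levels[: level_index + 1]
        return self.levels[-1]

# Usage: drill from the original article into one of its RACs, then jump back.
h = HierarchicalAugmentation(augment=lambda ref: [f"RAC about {ref}"])
level1 = h.drill_down("AIDS article")
level2 = h.drill_down(level1[0])
reference, augmented = h.jump_back(0)
```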
  • A block diagram of a Hierarchical Augmentation System 12300 is shown in FIG. 8 .
  • This hierarchical augmentation or nested augmentation capability comprises the same capabilities as the Hierarchical Augmentation System 12200 shown in FIG. 7 and includes an Augmentation System Control 12390 subsystem that communicates Augmented and Reference contents AR- 12325 , AR- 12335 , AR- 12345 , AR- 12355 , and AR- 12365 with Process 1 12320 , Process 2 12330 , Process K 12340 , Process (n-1) 12350 , and Process (n) 12360 , respectively.
  • Process 1 12320 corresponds to a first-level instance of Augmentation System 12100
  • Process (n) corresponds to an n-th level instance of Augmentation System 12100
  • the Augmentation System Control 12390 may receive one or more of the generated augmented content of each hierarchical level, a copy of the set of filters, a copy of the set of features, and a copy of the set of categories.
  • the Augmentation System Control 12390 can further run sophisticated statistics, analytics and algorithms to extract new features or generate new filters or categories.
  • the Augmentation System Control 12390 may receive user input to control what type of analysis or augmentation the user expects the Hierarchical Augmentation System 12300 to provide or keep track of nested contents that the user is interacting with, viewing, or manipulating at various levels of hierarchy.
  • A block diagram of an Augmentation System 12400 using a Display Control 12420 subsystem is shown in FIG. 9 .
  • the Augmentation System 12410 is essentially the same as any one of the Augmentation System 12100 , Augmentation System 12200 and Augmentation System 12300 as shown in FIG. 6 , FIG. 7 , and FIG. 8 respectively.
  • the Display Control 12420 subsystem controls the displaying of various elements such as Augmented Content 12450 and Reference Content 12440 , which are output of the Augmentation System 12410 .
  • the Display Control 12420 receives input control from Augmentation Display 12430 subsystem and/or from a user interacting with the Augmentation Display 12430 or one or more display layers displayed using the Augmentation Display 12430 .
  • Based on the Augmented Content 12450 generated from Augmentation System 12410 , Display Control 12420 generates and/or controls different display layers, widgets, icons, and other knobs which are utilized to show, control, or manipulate any one of the Augmented Content 12450 and Reference Content 12440 . Furthermore, Display Control 12420 provides means for the user to interact with any one of the Reference Content 12440 or the Augmented Content 12450 .
  • the Augmentation System 12410 , the Display Control 12420 , and Augmentation Display 12430 are elements of the same physical electronic system such as a mobile device.
  • the user can manipulate any one of the Augmented Content 12450 , Reference Content 12440 , and how each is displayed onto the Augmentation Display 12430 .
  • a user interface may be used to further aid the user to manipulate or interact with any one of the Reference Content 12440 and the Augmented Content 12450 and the displaying of such content.
  • the UI can provide an easy mechanism for a user to interact with the categories, widgets, buttons, and any other option that is presented for the user to engage with the Augmentation System 12410 .
  • the Augmentation System 12410 , and the Display Control 12420 are elements of a first electronic device that is separate from a second electronic device comprising the Augmentation Display 12430 , wherein the first and second electronic devices communicate the Reference Content 12440 and the Augmented Content 12450 back and forth based on the Augmentation System 12410 and/or a user interaction with any one of Reference Content 12440 and Augmented Content 12450 .
  • the Augmentation Display 12430 , and the Display Control 12420 are elements of a first electronic device that is separate from a second electronic device comprising the Augmentation System 12410 , wherein the first and second electronic devices communicate the Reference Content 12440 and the Augmented Content 12450 back and forth based on the Augmentation System 12410 and/or a user interaction with any one of Reference Content 12440 and Augmented Content 12450 .
  • a system for extraction and generation of features and categories, Extract Relevant Features 12500 , is shown in FIG. 10 .
  • the Extract Relevant Features 12500 is tasked with building a set of features and categories that any one of the Augmentation System 12100 , Augmentation System 12200 , Augmentation System 12300 , and Augmentation System 12400 can utilize to generate augmented content.
  • Reference Content 12510 is similar to Reference Content 12105
  • Local Content 12520 is similar to Local Content 12110 .
  • Categories 12530 is a subsystem which is responsible for constructing a list of categories that captures or is responsive to the user's inputs and preferences, a set of extracted categories from Reference Content 12510 and Local Content 12520 , and a set of customized categories associated with the user.
  • Features and Metrics 12525 is a subsystem which generates a set of features, a set of signatures, and/or a set of metrics each of which is either dynamically generated or pre-computed and stored.
  • Features and Metrics 12525 delivers these sets of features to an Extract Features 12540 subsystem.
  • the Extract Features 12540 receives input from Reference Content 12510 , Local Content 12520 , Features and Metrics 12525 , and Categories 12530 .
  • Extract Features 12540 delivers a set of features, a set of signatures, and a set of metrics to Compile RACs 12550 subsystem, which in turn utilizes one or more of those sets to compile from the internet, a local data store, or any other data repository (public or private) a set of data elements.
  • a Relevant Augmented Content 12560 subsystem receives the set of data elements and/or the set of features, the set of signatures, and the set of metrics to generate a customized augmented content for the user.
  • A simplified block diagram of a system for extraction and generation of features and categories, Extract Relevant Features 12600 , is shown in FIG. 11 .
  • the Extract Relevant Features 12600 can be used as a part of an augmentation system such as Augmentation System 12100 , Augmentation System 12200 , Augmentation System 12300 and Augmentation System 12400 each of which has been described above.
  • the Extract Relevant Features 12600 is utilized to compile a set of features, using Compile RAC 12650 , to be used by an augmentation system to generate augmented content.
  • Extract Candidate 12618 processes at least a portion of a Reference Content 12608 and receives other user-provided input to extract or generate one or more sets of filters and features.
  • Features 12620 uses the one or more sets of filters and features to organize, build, compile, or store a user-customized network of features, concepts, objects, and their relationships.
  • Features 12620 serves to provide a better or more focused extraction of a user's relevant set of features, which can provide faster convergence on what the user is interested in seeing or would want to see regarding the Reference Content 12608 .
  • this provides better, value-added augmented content for the purpose of knowledge discovery, learning, and a richer user experience in browsing and/or interacting with data information.
  • features 12620 can learn, save and further refine the user-customized network of features, concepts, objects and their relationships over time for richer and more efficient user experience.
  • Categories 12630 uses the one or more set of filters and features generated by Extract Candidate 12618 to organize, build, compile or store a user-customized network of categories and their relationships. Categories 12630 can learn, save and further refine the user-customized network of categories and their relationships over time for richer and more efficient user experience.
  • Metrics 12640 is a system that can provide user influenced metrics information to Compile RAC 12650 .
  • Metrics 12640 uses the one or more set of filters and features generated by Extract Candidate 12618 to organize, build, compile or store a user-customized network of metrics which can be user defined or system's default.
  • Metrics 12640 can use date or time as a metric to further narrow and focus the relevance of the augmented content to the user or to the Reference Content 12608 .
  • Another example is to use a source or a group of sources to aid Compile RAC 12650 to limit or expand its compilation and generation of relevant augmented content.
  • Metrics 12640 can learn, save and further refine the user-customized network of metrics and their relationships over time for richer and more efficient user experience. Metrics 12640 can receive real time information from the user or other part of an augmentation system and provides an update in real time to Compile RAC 12650 .
  • Compile RAC 12650 is used to compile the networks of features, categories and metrics received from Features 12620 , Categories 12630 , and Metrics 12640 to generate and prioritize a focused set of relevant augmented content (RAC) that captures the properties and/or attributes of Reference Content 12608 and reflects the user's rules, interests, preferences, and attributes. This focused set of relevant augmented content (RAC) is to be used by an augmentation system to deliver or present a concise and highly relevant augmented content to the user.
  • Compile RAC 12650 is used to resolve any conflicts that may exist between any of the networks of features, categories and metrics.
  • Compile RAC 12650 also provides and determines the priority of the final list of RACs to be delivered or presented to the user.
  • Compile RAC 12650 can also receive, generate or modify an association rule which can be used to leverage, or join one or more categories, features, filters, concepts, or metrics to (i) generate a new set of features, filters, categories, or relevant augmented content, and (ii) modify one or more of the set of features, filters, or categories which are being used to generate the relevant augmented content.
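A minimal sketch of an association rule that joins categories, features, and metrics into one richer filter set for RAC compilation; the rule fields shown are illustrative assumptions:

```python
# A minimal sketch of an association rule joining categories, features,
# and metrics into a single filter set; field names are hypothetical.
from dataclasses import dataclass

@dataclass
class AssociationRule:
    categories: list    # e.g. ["Health", "Research"]
    features: list      # e.g. ["AIDS", "clinical trial"]
    metrics: dict       # e.g. {"after_year": 2010, "sources": ["journals"]}

    def to_filters(self) -> dict:
        """Join the rule's inputs into one richer filter for RAC compilation."""
        return {
            "must_match_categories": list(self.categories),
            "must_contain_features": list(self.features),
            **self.metrics,
        }

rule = AssociationRule(categories=["Health", "Research"],
                       features=["AIDS"],
                       metrics={"after_year": 2010})
filters = rule.to_filters()
```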
  • an augmentation system can use Display Layers and Controls 12700 as shown in FIG. 12 to display the generated Augmented Content 12760 , Global Augmented Content Queue 12720 , Local Augmented Content Queue 12770 , and the Reference Content 12750 .
  • this can be one instantiation of the data presentation mechanism of an augmentation system as described above, e.g. Augmentation System 12100 .
  • the user can change the look and feel (skin) of the Display Layers and Controls 12700 using any number of skins (look and feel options).
  • the Global Augmented Content Queue 12720 corresponds to a displayed part of a relevant augmented content (RAC) generated by the augmentation system.
  • Display Layers and Controls 12700 can manage the display of the Global Augmented Content Queue 12720 and Local Augmented Content Queue 12770 in various ways, such as the location of the display of the queues as well as the portion of any one of the queues that is being displayed using Display Screen 12710 .
  • the user can choose that only the Global Augmented Content Queue 12720 is displayed; thus Display Layers and Controls 12700 will manage to display the portion of RACs of the Global Augmented Content Queue 12720 that may be accommodated on the Display Screen 12710 .
  • the user may choose to emphasize the Local Augmented Content Queue 12770 , and the Display Layers and Controls 12700 will manage that as well.
  • the Reference Content 12750 refers to the content being browsed and explored for further augmentation.
  • Display Screen 12710 corresponds to a display screen that may be physically collocated within the same device where the augmentation system is being used, or it can be part of a separate electronic device.
  • Augmented Content 12760 is displayed using one or more display layers and is the augmentation content that the user chooses to view.
  • Promote 12740 is used to highlight, select, or promote a specific RAC.
  • Promote 12740 provides a mechanism for the user to interact with any of the RACs of Global Augmented Content Queue 12720 and Local Augmented Content 12770 by elevating the priority of a RAC.
  • a demote icon (not shown) can be used by the user to remove or dismiss a RAC or a group of RACs entirely if the user is not interested in them.
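A minimal sketch of a RAC queue with promote/dismiss controls; the numeric priorities and sort-by-priority policy are illustrative assumptions:

```python
# A minimal sketch of a promotable/dismissable RAC queue; priority values
# and ordering policy are placeholders for illustration.
class RacQueue:
    def __init__(self, racs):
        # Each entry: [priority, rac]; higher priority is shown first.
        self.entries = [[1.0, rac] for rac in racs]

    def promote(self, rac, boost: float = 1.0):
        """Elevate the priority of a RAC the user is interested in."""
        for entry in self.entries:
            if entry[1] == rac:
                entry[0] += boost

    def dismiss(self, rac):
        """Remove a RAC the user is not interested in."""
        self.entries = [e for e in self.entries if e[1] != rac]

    def ordered(self):
        return [rac for _, rac in sorted(self.entries, key=lambda e: -e[0])]

queue = RacQueue(["History of AIDS", "Science of AIDS", "Social Impact of AIDS"])
queue.promote("Science of AIDS")
queue.dismiss("Social Impact of AIDS")
print(queue.ordered())
```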
  • a Simulated Display 12800 is a use case scenario of the Display Layers and Controls 12700 and any one of the augmentation systems described earlier as shown in FIG. 13 .
  • This Simulated Display 12800 presents an example of a user reading an article about AIDS as shown in Reference Content 12810 .
  • the user then invokes the Augmentation System to augment Reference Content 12810 .
  • the user selects categories that are related to AIDS Research and Science. Categories Related to AIDS RACs 12820 shows part of the global augmentation deep queue that the system generated in response to the user's interest in AIDS, Science, and Research.
  • a Simulated Display 12900 is a use case scenario of the Display Layers and Controls 12700 and any one of the augmentation systems described earlier as shown in FIG. 14 .
  • This Simulated Display 12900 presents an example of a user reading an article about AIDS as shown in Reference Content 12910 .
  • Selected Content 12920 shows an example of selecting part of the Reference Content 12910 .
  • the user then invokes the Augmentation System to augment Reference Content 12910 and Selected Content 12920 .
  • Based on the user's choice of categories related to Science and Research of AIDS, and the system's extracted categories and features, RACs 12905 shows part of the global augmentation deep queue that the system generated.
  • Africa/India AIDS RACs 12930 shows part of the local augmentation deep queue that the Augmentation System generates in response to the user's selection of part of Reference Content 12910 .
  • Augmented Display Layer 12940 is an example of displaying of Africa/India AIDS RACs 12930 .
  • Augmented Display Layer 12940 shows a RAC (HIV and AIDS) in the local augmentation queue that the user elected to view.
  • Knowledge Discovery Framework: Knowledge Discovery in Web Content; Relevant Topics Mining; Relevant Latent Topics Discovered; Relevant Queries Mining; Hierarchical Topics Graph (discovery of intermediate topics to discover/expose relationships between related topics); Causality Graph Construction and Mining.
  • New figures are added to utilize Discovery Patterns, Synthesize Causality Graph, and mining of Causality Graph and other available repositories for a richer knowledge discovery experience. Not everything is available in causality graph and further online mining for more augmentation data might be needed.
  • Name Entity Relationship: Named Entity Recognition (NER) is key for accurate content extraction for knowledge discovery. Personal names, places, dates, organizations, groups, parties, and other named entities (NEs) characterize topics in a document; Name Disambiguation; Name phrase parsing, compound names, . . . ; Known concepts, events, . . . .
  • Possible Product & Service Offerings: a Knowledge Discovery platform that aids in any product or service where data augmentation and knowledge discovery are desired or suitable. Examples: News discovery (Political, Business, Historical, Science, . . . ); Browsing & research; Financial Data Discovery (Company profile, competitive assessment, etc. . . . ); eHealth Discovery, by leveraging a health-related DP (Medication Information, Patient Case Analysis, Prognosis, and other related Data can be discovered and displayed).
  • DP Discovery Patterns: Pre-defined DP; Custom-tailored DP; On-the-fly synthesis of DP; Mapping info extracted from a document/page to a DP; Mining the Web for CQs (Competency Questions); Extracting Relevant/latent Topics; Discovering hidden and/or non-obvious topics & relationships; Filling a DP based on the user's preferences/interests.
  • Custom-built DP / Fluid DP Synthesis: Tapping into the user's selected categories and topics, a DP can be synthesized. A DP is fluid and will change over time based on the user's preferences/interests. A DP can be synthesized and tailored based on existing public information repositories (Freebase, dbpedia, Quora) and private knowledge (if accessible).
  • Library of DP: Build a pre-defined set of topic-relevant DPs; synthesize a library of DPs based on selected categories and relevant topics. The library will store all existing and new DPs for future processing, which is useful if on-the-fly synthesis causes performance problems.
  • Causality graph is vital for discovering hidden topics that are important to connecting known topics. Hidden topics discovered by CG are vital to discovering other important relevant topics. In particular, when a feature set of the reference topics (topics, categories, user feedback) and its relevant topics cannot discover important topics for further knowledge discovery, mined hidden topics can be the answer. Hidden topics go beyond topics defined by the words or phrases or known relationships of reference topics.
  • Competency Queries aid in seeding a set of interesting questions to answer about the reference topic or the relevant topics that can be extracted. CQ are also important to seed a discovery template to augment the topic at hand.
  • Competency Queries Extraction/Mining: CQs can be manually crafted by the user/system, or automatically extracted or synthesized. For example, the knowledge discovery and augmentation system can query databases for questions relevant to a topic and select the highest-ranked questions that history shows people care about. Quora is an example of such a database that can be mined to extract a set of CQs for a topic. In an enterprise setting, to mine a set of CQs about a product, customer feedback/queries/marketing data can be mined to synthesize a set of CQs relevant to the product. These CQs will serve as a seed to craft a Discovery Pattern that will serve in augmentation of the relevant topic.
  • Topic Relationship Model is a mapping R such that T1 R T2 holds for discovered topics and documents that connect T1 and T2; R completes the knowledge graph that is relevant to the reference content.
  • the components, process steps, and/or data structures described herein may be implemented using various types of operating systems, software development platforms, computing platforms, computer programs, and/or general purpose machines.
  • those of ordinary skill in the art will recognize that devices of a less general purpose nature or having limited resources may require modification of an implementation of an illustrated embodiment which may be done without departing from the scope and spirit of the inventive concepts disclosed herein.
  • a method comprising a series of process steps is implemented by a computer or a machine, and those process steps can be stored as a series of instructions readable by the machine; they may be stored on a tangible medium such as a computer memory device (e.g., ROM (Read Only Memory), PROM (Programmable Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory), FLASH Memory, Jump Drive, and the like), magnetic storage medium (e.g., tape, magnetic disk drive, and the like), optical storage medium (e.g., CD-ROM, DVD-ROM, paper card, paper tape and the like) and other types of memory.
  • Multilevel, e.g. global and local, context sensitive augmented content would increase productivity and enhance a user's experience while viewing or interacting with data for the purpose of learning, reading, writing, drawing, browsing, searching, discovering, viewing images or any type of user interaction with digital data information whether structured or unstructured (e.g. financial, health, manufacturing, and corporate data).
  • the digital data information may be stored locally or remotely via a corporate server or in the cloud. Additionally, private as well as public sources of data may be used or selected by the user for the ultimate personalized range of choices that may be used to further narrow down or expand the augmented content being presented.
  • multilevel context sensitive augmented content would increase productivity and enhance business intelligence for the enterprise by providing context sensitive augmented content that is generated by dynamically mining and analyzing structured and unstructured enterprise data and/or possibly leveraging structured and unstructured publicly available data for further improving user experience.
  • multilevel context sensitive content augmentation filters provide the ability to dynamically mine data on the fly based on modification of a new input from a user. For example, a new input from a user can be the selection of new text or a portion of the reference content, or it can be feedback provided such as elevating the priority or weight (e.g. like) or decreasing the priority or weight (e.g. dislike) of an augmented content.
  • the multilevel context sensitive augmented content can be further in tune with what the user would like to see or expects to see in the augmented content being generated and presented.
  • a feature of the multilevel context sensitive augmented content is that the augmented content is generated either in the cloud or locally using sophisticated information retrieval algorithms or a set of heuristics so as to enable large-scale data processing, information retrieval, and web mining. Since extracting a feature set from a web page is a well-studied problem with various known algorithms, methods, and solutions, this system can use existing research or methodologies to extract a feature set. Furthermore, this system employs a set of heuristics and metrics that efficiently extract a set of features that characterize the reference content at hand.
  • heuristics rely on embedded hints, metrics, metadata, or other embedded knowledge and information that can be extracted from the structure, URL link, embedded links, title of the document, or other types of data that may be directly or indirectly related to the reference content along with feedback provided by the user.
  • a feature of the multilevel context sensitive augmented content is that the information retrieved, and knowledge constructed can be saved and called upon in future augmentation tasks and sessions.
  • a feature of the multilevel context sensitive augmented content is that the augmented content is presented through a translucent layer on top of the original content being viewed by the user.
  • a non-obtrusive content augmentation that is hidden or made available whenever a user disables or enables the global and local context sensitive augmented content application.
  • Relevant augmented content is displayed on top of a translucent layer on top of the original content being viewed by the user.
  • the augmentation system provides a less obtrusive and more efficient interaction, browsing and exploration experience.
  • a multilevel corresponds to at least two levels, a global level and a local level.
  • global and local relevant features of reference content may be defined as follows: a global relevant feature corresponds to a feature or a theme common throughout the reference content, and a local relevant feature corresponds to a feature strongly related to a locality within the reference content.
  • One method of dynamically updating augmented content can be achieved by leveraging real-time user feedback, such as elevating priority or dismissing augmented content as being presented to the user. If an augmented content's priority is elevated, its weight increases as well as the metadata that describes this augmented content gets promoted which in turn updates existing augmentation filters as well as generating and presenting new augmentation content based on the new metrics.
  • the augmented content can be dynamically updated based on user interaction, e.g. selection and/or clicking, within the reference or augmented content in real time.
  • augmented content presentation layers such as dials for global and local augmented content, or a scroll-area of small windows for various augmented content. Describing all these various means to implement the augmented content presentation layer is not necessary to understand this disclosure. Furthermore, a person skilled in the art would understand and would be able to employ many different means to implement augmented content presentation layers without departing from the spirit of this disclosure.
  • generating augmented content may result in a lot of data that cannot be shown on the display; this data can be stored in a deep queue.
  • a deep queue means that there is more augmented content (data) in the queue than what is displayed on the screen. For example, not all mined augmented content can be displayed simultaneously due to physical screen size limitations or the display layer size.
  • a user can hover over the queue or press an arrow to scroll through the augmented content in the queue.
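A minimal sketch of a deep queue whose display window is smaller than the queue itself, with scrolling that never drops queued content; the window size is an illustrative assumption driven by screen real estate:

```python
# A minimal sketch of a deep queue with a scrollable display window;
# the window size is a placeholder for the available screen space.
class DeepQueue:
    def __init__(self, racs, window: int = 3):
        self.racs = list(racs)   # full set of mined augmented content
        self.window = window     # how many items fit on the display layer
        self.start = 0

    def visible(self):
        """Return only the slice of the queue that fits on the screen."""
        return self.racs[self.start : self.start + self.window]

    def scroll(self, step: int = 1):
        """Move the window forward/backward without dropping queued content."""
        max_start = max(0, len(self.racs) - self.window)
        self.start = min(max(0, self.start + step), max_start)
        return self.visible()

queue = DeepQueue([f"RAC {i}" for i in range(10)], window=3)
print(queue.visible())   # first three items
print(queue.scroll(2))   # window moved; the rest stays queued
```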
  • the augmented content being presented to the user may comprise actual data, a snapshot of the actual data, a processed portion of the actual data, or a link to the location where the actual data can be retrieved.
  • Theme-based augmented content can further enhance a user's experience by presenting a set of themes.
  • a new or updated augmented content is presented to the user.
  • An option to expedite augmentation and improve the quality is to rely on the user's preferences and feedback.
  • a set of categories/themes can be presented to the user. These constitute metadata.
  • augmentation can be enhanced and filtered. For example, a research paper that deals with the AIDS virus would trigger a set of themes such as Pharmaceuticals, Discrimination, etc.
  • The user who is interested in science and pharmacology but not in the social aspects related to AIDS would deselect ‘Discrimination’.
  • all augmented content presented will be tailored to refer to categories that are related to science and other related aspects of the research.
  • the theme can further be defined by a category or a set of related categories. This serves to prune the augmented data and present only the relevant data that is of interest to the user and the task he or she is carrying out at that moment.
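  As a simple illustration of theme-based pruning (the dictionary layout and theme names are examples only), deselecting a theme removes the corresponding augmented content:

      def filter_by_themes(augmented_items, selected_themes):
          # Keep only items whose theme metadata intersects the user-selected themes.
          selected = {t.lower() for t in selected_themes}
          return [item for item in augmented_items
                  if {t.lower() for t in item.get("themes", [])} & selected]

      items = [
          {"title": "New antiretroviral compounds", "themes": ["Pharmaceuticals"]},
          {"title": "Stigma and access to care", "themes": ["Discrimination"]},
      ]
      # A user interested in the science deselects 'Discrimination':
      print(filter_by_themes(items, ["Pharmaceuticals"]))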
  • Multilevel context sensitive augmented content application can be implemented as a stand-alone application, on top of another application, or as an extension for applications, e.g. a browser extension.
  • further refinement or fine tuning of various options for customization of the augmentation system, such as aggregating, mining, filtering, and presenting various aspects of data or metadata, can be performed dynamically in real time.
  • the customization of the augmentation system may be performed based on at least one of a user's feedback, attributes, characteristics, themes, topics, and interests.
  • the user can provide feedback in the form of liking/disliking the tag. This is similar to promoting or dismissing an augmented content.
  • the augmented content can be updated live. Furthermore, this user feedback also results in updating various subsystems such as the underlying data-mining, statistical computing, machine-learning, or other information retrieval algorithms or heuristics. These updated subsystems generate or create new signatures, metrics, or features based on the user's feedback, e.g. liked/disliked tags, and the new signatures are used to generate new augmented content or to update the currently presented augmented content.
  • a feature of a system for generating and presenting multilevel context sensitive augmented content is the ability to utilize online and offline mining and analytics for augmentation, for example, mining and processing in real time or in batch mode and storing data in a data store (local or remote), or presenting real-time augmented content to the user. The stored data can be used for future augmentation. Metadata and other relevant data elements can also be annotated in real time to capture a user's preferences and experiences. In addition, metadata and other relevant data elements can be stored in a central repository to be leveraged for future augmentation of the same or similar content.
  • a brief description of metadata is that it is data that describes other data. For example: ‘public health’ is a category that encompasses diseases. This higher level category ‘public health’ is a metadata for diseases.
  • a multilevel context sensitive augmented content system uses at least two levels, a global level and a local level. The following explains the difference between global and local augmented content.
  • Global augmented content refers to augmented data that pertain to the overall document that the user is currently browsing, exploring, or interacting with.
  • a local augmented content can refer to augmented content based on a particular piece, paragraph, sentence, word, image, icon, symbol, etc. of the document that the user is currently browsing, exploring, or interacting with.
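  One possible way to separate the two levels, shown only as a sketch (the scoring heuristic is an assumption): global features are drawn from the whole document, while local features are terms that stand out in the locality the user is interacting with:

      import re
      from collections import Counter

      def terms(text):
          return re.findall(r"[a-z]{4,}", text.lower())

      def global_features(document_text, k=5):
          # Themes common throughout the reference content.
          return [t for t, _ in Counter(terms(document_text)).most_common(k)]

      def local_features(selected_passage, document_text, k=5):
          # Terms frequent in the selected locality relative to the whole document.
          doc = Counter(terms(document_text))
          local = Counter(terms(selected_passage))
          scored = {t: c / (1 + doc[t]) for t, c in local.items()}
          return sorted(scored, key=scored.get, reverse=True)[:k]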
  • Global and local augmented content are presented using a dynamic deep queue, and the user can control the displaying of at least a portion of the augmented content.
  • Content sources for augmentation can be provided from many sources.
  • An example of such content sources includes but is not limited to a user's own documents and data on desktop, web-content, social media sites, enterprise data-marts, and local and remote data stores, ontologies, other categorization, and/or semantic or relationship graphs.
  • the multilevel context sensitive augmented content can be successfully implemented to augment a user's browsing experience as discussed above.
  • a system for generating and presenting multilevel context sensitive augmented content can be successfully implemented as an application for augmented user experience (UX).
  • the system can increase productivity. An augmented data-mining and data-exploration platform; an augmented e-learning and e-research system; augmented desktop-based and mobile-based browsing, exploration, research, discovery, and learning platforms; data augmentation for better healthcare products and services; data augmentation for better educational products and services; an augmentation system for better content management and relationship platforms for both enterprise and consumer applications; enhanced online-shopping research and UX; enhanced marketing campaigns; and an enhanced news-access UX are but a few of the applications benefiting from a system for generating and presenting multilevel context sensitive augmented content.
  • Semantic processing is the process of reasoning about the underlying concepts and expressing their relationships.
  • the following semantic based techniques can also be used in a system for generating and presenting augmented content.
  • utilizing existing tags in public sources, utilizing batch-processed tags as a cloud application, semantic processing of selected content to generate a match to an existing tag, semantic processing to generate augmented content on the fly, and utilizing a user's feedback for promoting and dismissing augmented content are but examples of methods to provide better user-relevant augmented content.
  • Generating augmented content on the fly can also be accomplished by using a feedback mechanism provided by the user to enable mining and generating of new augmented data to be presented to the user.
  • a system for generating and presenting multilevel context sensitive augmented content is used to improve the analytics of large data sets by leveraging pre-processed data and already generated relationships.
  • when a user presents some keywords to a search engine, the user gets a set of related links in addition to some ads that could very well be related to the keywords the user has entered or to some personal data known about or extracted for the user.
  • the content presented to the search engine can be parsed from the HTML or another format or interface produced by a data provider, or it can be scanned through OCR if the data format is encrypted.
  • This ability to take a snapshot of a screen and to analyze and leverage its data and relationships empowers and simplifies the augmentation and analytics processes and improves throughput, since the signatures/correlation metrics extracted result from processing a significantly smaller set of data. Therefore, the performance gain of a system for generating and presenting multilevel context sensitive augmented content can be orders of magnitude relative to mining massive data sets in the cloud.
  • a system for generating and presenting multilevel context sensitive augmented content presents the augmented content along with the reference content using two or more different presentation layers displayed using the same display screen.
  • the system provides the ability to customize the generation of augmented data in situ (in place) while working on original or reference content, where the augmented data can be displayed on see-through presentation layers so as not to obscure the original or reference content and to maximize use of the display screen and/or the displaying area.
  • a system for generating and presenting multilevel context sensitive augmented content utilizes dynamic updates of displayed augmented content using presentation layers while a user views and manipulates reference content displayed using another presentation layer. It is preferable to use a translucent presentation layer for the augmented content presentation layer that is located on top of the displayed reference content so that the user can easily manipulate or interact with the reference content while simultaneously viewing the dynamically updated augmented content.
  • displaying relevant augmented data in a separate tab or page would result in a loss of the context relationship and would provide a less efficient and less friendly user experience.
  • displaying the augmented content on the sidebars is possible as well. However, it consumes screen space and clutters displaying of the reference content. Therefore, the ability to keep the reference content accessible to the user while displaying the augmented data on top of the original content provides a much smoother and efficient user experience.
  • a system for generating and presenting multilevel context sensitive augmented content enables a user to associate any of the augmented content with the reference content, or with an attribute of the reference content source, using one or more types of metadata.
  • the system enables the user to save the associated metadata for future use or sessions.
  • the association of metadata can be accomplished by embedding a link in the text, by associating a link with a text, or by associating any data or metadata with the reference content or any part of the reference content.
  • the user has the ability to specify a category or more as a source or criterion of augmentation.
  • the user can also define association rules that join a group of attributes, categories, and other metrics together to provide a richer input to aid the augmentation system to generate more relevant augmentation content.
  • an enterprise sales projection document can always be augmented with any data source or data documents that generated the projection.
  • the criterion is a category that specifies source sales data and not necessarily the exact data documents.
  • the sales data can be extracted automatically by the augmentation system.
  • the augmentation system can carry out an updating procedure for any associated data or metadata for any other reference content.
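  For illustration only, an association rule joining a category criterion (e.g. 'source sales data') with document attributes might be written as a predicate; the field names are hypothetical:

      def make_rule(required_categories, required_attributes):
          # Returns a predicate that is true when a candidate source satisfies the rule.
          def rule(candidate):
              return (set(required_categories) <= set(candidate.get("categories", [])) and
                      all(candidate.get("attributes", {}).get(k) == v
                          for k, v in required_attributes.items()))
          return rule

      sales_source_rule = make_rule(["source sales data"], {"fiscal_year": 2024})

      candidates = [
          {"name": "Regional sales extract", "categories": ["source sales data"],
           "attributes": {"fiscal_year": 2024}},
          {"name": "Marketing plan", "categories": ["planning"], "attributes": {}},
      ]
      associated = [c for c in candidates if sales_source_rule(c)]  # only the sales extract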
  • the augmented content is displayed using translucent layers so that the user always sees and has access to the original or reference content.
  • the user is able to access, browse, move, select, hide, tap, scroll, or interact with the reference or augmented content while the system dynamically generates and displays an updated augmented content using the augmentation presentation layer. It is noted that the user interaction with the reference or augmented content can result in having a new reference content that the user wishes to interact with, hence, a new augmented content is generated and displayed.
  • the system keeps track of and saves certain information regarding this nested augmentation level.
  • the system provides the user the ability to switch back and forth between various nested augmentation levels as well as saving or sharing the augmentation filters or settings used for a particular session.
  • further enhancement of the user experience is achieved by enabling the user to change the skin (or look of a user interface UI) of the augmentation system.
  • UI components such as buttons, options, and data can be displayed on the screen in a variety of ways.
  • a library of templates and color options can be provided to allow the user to customize the display of the augmented content presented by the application.
  • the global augmented content and local augmented content can be displayed using one or more different regions of the screen or displaying the global and local links to the augmented content in two concentric circles around the reference content.
  • the enhancement of the user experience is achieved by enabling the user to choose the most efficient way to utilize and display the augmented content.
  • user selectable skins can also be used to cover or hide pushed content that may exist in, or be embedded in, the reference content being viewed.
  • User selectable areas of a skin can be used to enable the display of user selected content such as images or augmented content or pushed content such as advertisement. For example, an ad for tickets to a local concert when the user is browsing a specific artist, or an ad for a book that relates to a global or local augmented content of the user reference or currently viewed augmented content, or any other monetization mechanism based on the augmentation process.
  • the enhancement of the user experience includes a nested multilevel context sensitive augmented content where the augmented content presented to the user can be further enhanced as a function of the various nested levels.
  • the augmented content is presented while keeping track of the current content being viewed in relationship to the original content that the user started with and all levels in between. This provides a hierarchical augmentation system that enables the user to access and build nested levels of augmentation.
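  A minimal sketch of how such nested augmentation levels could be tracked; the class and field names are assumptions, and the disclosure does not mandate this particular structure:

      class AugmentationSession:
          def __init__(self, original_content, filters=None):
              # Level 0 is the original reference content the user started with.
              self.levels = [{"content": original_content, "filters": filters or {}}]
              self.current = 0

          def drill_down(self, augmented_content, filters):
              # Augmented content the user follows becomes the new reference content.
              self.levels = self.levels[:self.current + 1]
              self.levels.append({"content": augmented_content, "filters": filters})
              self.current += 1

          def go_to_level(self, index):
              # Switch back and forth between nested augmentation levels.
              self.current = max(0, min(index, len(self.levels) - 1))
              return self.levels[self.current]

          def session_settings(self):
              # Filters per level, which could be saved or shared for a session.
              return [level["filters"] for level in self.levels]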
  • the user interface, or UI, for a system for generating and presenting multilevel context sensitive augmented content can be launched or started automatically and can reside in the background and stays hidden from view until the user invokes a predefined programming function to enable the UI functionality.
  • a single tap, hot-key, function-key, gesture, or a combination of multiple actions acted upon content would cause the transparent augmentation layer to be shown with the augmented content, in accordance with user preferences such as tags, skins, themes, etc.
  • Selecting content presents or updates the augmented content already presented.
  • Visiting an augmentation link results in completely or partially (split screen) covering the reference content or original layer comprising the original content.
  • the UI provides the user the ability to navigate nested augmented content or jump back to reference or original content.
  • additional system and UI features can further be used to increase the overall efficiency and provide a better user experience. For example, saving the augmented content metrics in user history, and using history to enhance and/or tailor analytics and augmentation as would be more relevant to each individual user or group of users such as in corporate environment.
  • Metrics here refer to the generated signatures as mentioned above. Also, it refers to any annotations that are provided by the user such as priority, liking/promoting an augmented content or dismissing it. This can be stored for future sessions as well as using the augmented content promotion and dismissal to enhance augmentation in real time.
  • skins that cover an undesirable part of the screen e.g. side columns where ads are pushed. The skin may be used for further customization of the viewed screen and potentially could be monetized and leveraged to present relevant augmented content that is paid for by the user, such as ads for objects, e.g. books, related to the content of a reference article.
  • a system for generating and presenting multilevel context sensitive augmented content provides dynamic user-guided and customized context-sensitive data augmentation to facilitate learning, exploration and knowledge discovery.
  • the system provides simultaneous interaction with the augmentation layer and the content layer.
  • the system generates augmentation data based on user-defined metrics and filters such as themes, categories of interest, document content and/or part of it.
  • the generated data is not a rigid augmented content.
  • the generated augmented content is any data, concept, and relationships that are presented as a result of the data mining and processing of the original content and the user-defined metrics and filters.
  • the system utilizes dynamic and interactive methods to successively refine and tailor the augmented content based on a user's guidelines, filters, and metrics.
  • the system relies on a variety of sources for content augmentation by accessing any online or offline databases, crowd-sourced databases, or open databases.
  • a custom built graph of concepts and relationships can be built between different pieces of data as they are processed and augmented based on the user's filters and metrics to improve the performance of the system and the User Experience.
  • the system provides a context-sensitive hierarchical augmentation framework for deeper and expansive exploration and knowledge discovery.
  • the system enables construction of a customized graph of data, concepts, and relationships based on the filters and metrics provided even in the absence of content. Content can be generated on the fly for further exploration.
  • the system enables sharing of augmented data and the associated metrics that generated them. This enables richer knowledge discovery by further refining a user's augmented data based on other users' augmented content. This is useful for collaborative research and knowledge discovery.
  • the system can be launched from offline and online documents or reference content to generate the augmentation content, data and graph of relationships amongst the concepts represented by the augmented content.
  • the system provides a UI to display and manipulate reference content and augmented content concurrently, dynamically, and interactively.
  • the system provides one or more translucent layers on top of the reference content to show the augmented content.
  • Translucent layers facilitate displaying the reference content as well as the augmented content.
  • Translucent layers can fully or partially cover the original content.
  • Augmentation layers can be hidden, minimized (shown as an icon), or moved around on the display screen to facilitate easier display of and interaction with the reference content.
  • the system enables the user to manipulate and control a set of display layers (reference content layer and/or augmentation display layers) in a very flexible fashion, such that the user can resize, move, show, or hide any of those display layers.
  • the system provides an intuitive, rich, and friendly UX for data exploration and knowledge discovery on small and large display screens.
  • displaying of the augmented content concurrently and interactively on the original content empowers the user to use this system on smart phones, tablets, and any other display.
  • the system provides means to insert additional content on the augmentation layers based on analytics on the augmented content and the original content.
  • An example knowledge discovery system 13100 includes a KDP system 13195 that is coupled to multiple users, e.g. User A 13101 , User B 13102 , and/or User C 13103 .
  • the KDP system 13195 communicates via Access Channel 13190 with each user by providing discovered knowledge or augmented data 13196 and/or data associated with preferences or knowledge discovery request or data augmentation request.
  • each user may also have a direct connection via communication network 13175 or other communication means which may include wired or wireless communication devices.
  • the KDP system 13195 also includes Memory 13185 to store data information and code associated with a KDP user, Share A-B 13131 , Share C 13132 and Discovery & Tailor Knowledge 13121 .
  • User A 13101 can access, use, manipulate, and display online content 13106 or private content and augmented content 13196 received from the KDP system 13195 .
  • User A 13101 can include any type of computer system or mobile device that allows its operator to interact, transmit and receive data information.
  • An example of sharing a link or content between User A 13101 and the KDP system 13195 would occur via Access Channel 13190 .
  • sharing using a social network would simply be to copy the content and share it, or to provide a link to where the content is or to the source where the content can be accessed. Simply put, when a link, e.g. to a/v media or an article, is shared, the end user cannot change the content.
  • An example of a Knowledge Discovery system is used by one or more users to explore and share augmented content.
  • Each user would receive a shared augmented content that is locally enriched and augmented by the KDP system 13195 as per the receiving user's preferences, specific parameters, and/or geographical location.
  • the receiving user is able to interact with the shared augmented content using the KDP system 13195 .
  • the KDP system 13195 having access to the other user's preferences would be able to regenerate the shared augmented content using his/her preferences or profile.
  • the KDP system 13195 enables each user to enrich and augment any shared content automatically and/or based on certain programmable parameters.
  • KDP system 13195 performs a recursive augmentation, where an original author tackled one side of the knowledge and shared it with another user, who receives not only what the original author has shared but also a customized augmentation based on the receiving user's preferences; the KDP system 13195 can automatically process and generate augmented content of the original author's shared content based on the interests of the receiving user. This allows the receiving user to dig deeper or expand the knowledge discovery. Furthermore, the receiving user can highlight, augment, or add his comments to an augmented content, e.g. annotating or adding an external link to content such as a video/audio or a link to an article not discovered by the system.
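  A highly simplified sketch of this regeneration step for a receiving user; generate_augmentation() stands in for the KDP mining pipeline and, like the dictionary keys, is a hypothetical placeholder:

      def generate_augmentation(content, profile):
          # Placeholder for the KDP pipeline: keep candidate topics the profile allows.
          return [t for t in content["candidate_topics"] if t in profile["interests"]]

      def receive_shared(shared, receiver_profile):
          # The receiver sees the author's view plus a view regenerated from the
          # receiver's own preferences, and may add annotations or external links.
          regenerated = generate_augmentation(shared["content"], receiver_profile)
          return {
              "original_author_view": shared["augmented_topics"],
              "receiver_view": regenerated,
              "annotations": [],
          }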
  • a KDP system 13195 is a smart knowledge discovery system that enables democratization of knowledge.
  • Another example of sharing augmented content using a KDP system 13195 is that the shared augmented content or knowledge can embody all or part of the augmentation parameters used by the original author e.g. User A 13101 .
  • a receiving user, e.g. User B 13102 , using the shared augmented content would be able to use the KDP system 13195 to use this shared knowledge as the seed for him to build on using his own preferences, parameters and/or specific interests as well as many other possible seed info for additional augmentation as described above.
  • FIG. 16 presents an example data flow for User A 14101 , e.g. user A is reading an article/content.
  • the user A 14101 invokes the knowledge discovery platform 14110 on the desired content to further explore and discover relevant knowledge.
  • the user A 14101 can interact with the automatically discovered knowledge.
  • User A 14101 tailors the discovered knowledge by promoting certain content or dimension of knowledge, or demotes others based on the task being accomplished or the interests of user A 14101 .
  • User A 14101 sets a policy in Knowledge Sharing and Broadcasting 14130 for sharing the content with one or more end users 14140 or the public at large.
  • the user A 14101 can share or broadcast the discovered knowledge represented by the knowledge graph and other related augmented data that is automatically updated based on policy or interests that is known to the KDP system 13195 for each end users 14140 .
  • each user of the end users 14140 can further augment or interact with the shared content using the KDP system 13195, which can provide selective feedback or additional content to the original user A 14101 and to each of the end users 14140 based on the collective content augmentation of all users, or of a selective group of users, as may be determined by user A 14101's sharing policy in Knowledge Sharing and Broadcasting 14130 or by each end user's own sharing policy 14230, as shown in FIG. 17 and further explained below.
  • the KDP system 13195 comprises Knowledge Discovery Tailoring and Annotation 14120 which receives the augmented data 14200 and knowledge discovered by the Knowledge Discovery System 13100 .
  • FIG. 17 is a data flow diagram for the Knowledge Discovery Tailoring and Annotation 14120 .
  • the KDP system 13195 may discover much content relevant to the content at hand.
  • the user A 13101 may choose to emphasize certain aspects of the knowledge and/or dismiss others in Augment Topics 14210 .
  • the user A 13101 interacts with the KDP system 13195 to tailor the resultant knowledge.
  • the user A 13101 can further manually annotate the shared content in Annotate Data/Knowledge 14220 , e.g. to add his/her remarks.
  • the user A 13101 can set the sharing policy in Sharing and Modification Policies 14230 , e.g. such as who can view it, who can edit it, and who can share it, etc.
  • the user A 13101 shares his discovered view 14240 with his circles of connections or broadcasts it to the public.
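  A sharing and modification policy of this kind could be recorded, for example, as a simple mapping from actions to permitted users; the field names below are illustrative assumptions:

      SHARING_POLICY = {
          "view": {"public"},            # anyone may view the shared view
          "edit": {"user_b", "user_c"},  # named connections may edit
          "share": {"user_b"},           # only user_b may re-share
      }

      def is_allowed(policy, action, user_id):
          allowed = policy.get(action, set())
          return "public" in allowed or user_id in allowed

      print(is_allowed(SHARING_POLICY, "edit", "user_c"))   # True
      print(is_allowed(SHARING_POLICY, "share", "user_c"))  # False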
  • FIG. 18 shows an example of Knowledge Discovery sharing policy Knowledge Sharing and Broadcasting 14130 .
  • the augmented data 14300 corresponds to the knowledge discovered or shared from a user A 13101 of the knowledge discovery system 13100 .
  • the user receives and views the shared link 14310 of discovered knowledge.
  • the shared link can be transmitted using any means for communication, e.g. via email, text, social network, etc., or via some notification in the knowledge discovery platform 13110 .
  • the user shares the knowledge based on his sharing policy 14320 .
  • the knowledge discovery system 13100 checks the membership of the user viewing the shared link 14310 when the user wishes to interact with the shared content.
  • the KDP system 13195 can provide selective feedback or additional content to the original user A 13101 and/or to any user or group of users among the end users 14140, based on the collective content augmentation of the end users 14140 or of a selective group of users, as may be determined by user A 13101's sharing policy 14130 or by each end user's own sharing policy 14320 .
  • the KDP system 13195 can exponentially grow the discovered knowledge as more users interact with the original shared content.
  • the KDP system 13195 benefits not only the original user A 13101, through selective feedback on augmentation parameters or uses of his originally shared content, but also the richness of the augmented content for the community of end users 14140, where again each user, through his own sharing policy, may receive customized feedback from the KDP system 13195 regarding his own tailored and shared content as well as the original content shared by user A 13101 .

Abstract

A machine learning and inference system operable to reason about content information and to infer a set of patterns and a set of relationships between patterns of the set of patterns. The machine learning and inference system accesses content information from a plurality of data sources, such as public and private data sources; the public and private data sources include structured and unstructured data. The machine learning and inference system is operable to reason about the content information and to compile a set of augmented content based at least in part on one or more of the content information, the set of patterns and the set of relationships, and its reasoning about the content information. The machine learning and inference system learns over time and enables nested or hierarchical content augmentation and can be customized for specific industries and content such as financial, medical, health, business, manufacturing and social media information content.

Description

    PRIORITY CLAIM
  • This application is a continuation of, and claims benefit and priority to, U.S. application Ser. No. 15/495,977 filed on Apr. 25, 2017 and will issue as U.S. Pat. No. 10,402,502 on Sep. 3, 2019, which is a continuation in part of, and claims benefit and priority to, U.S. application Ser. No. 14/217,462 filed on Mar. 17, 2014 and issued as U.S. Pat. No. 9,632,654 on Apr. 25, 2017, entitled “System and method for augmented knowledge discovery” which claims benefit and priority to U.S. Provisional Patent Application having application No. 61/801,359, filed Mar. 15, 2013, and is a continuation in part of U.S. application Ser. No. 14/491,977 filed on Sep. 19, 2014, entitled “System for Knowledge Discovery Platform” which claims benefit and priority to U.S. Provisional Patent Application having application No. 61/880,175, filed Sep. 19, 2013, and is a continuation in part of, and claims benefit and priority to, U.S. application Ser. No. 15/057,052 filed on Feb. 29, 2016, entitled “System for Knowledge Discovery” and issued as U.S. Pat. No. 9,817,906 on Nov. 14, 2017, which is a continuation of, and claims benefit and priority to U.S. application Ser. No. 13/573,564 filed on Sep. 24, 2012 and issued as U.S. Pat. No. 9,275,148 on Mar. 1, 2016, which claims benefit and priority to U.S. Provisional Patent Application having application No. 61/743,047, filed Aug. 24, 2012 and to U.S. Provisional Patent Application having application No. 61/626,253, filed Sep. 23, 2011. Each of the above named applications is hereby incorporated by reference herein in its entirety.
  • TECHNICAL FIELD
  • The subject of this application generally relates to information processing and to systems and methods for improving data-exploration, learning and browsing, and more specifically for context sensitive data augmentation for a richer user experience, data-exploration, and knowledge discovery.
  • BACKGROUND
  • A context sensitive user interface, which can automatically choose from a multiplicity of options based on the current or previous state(s) of a program operation, can be found in current graphical user interfaces. For example, clicking on a text document automatically opens the document in a word processing environment. The user does not have to specify what type of program to use to open the file. Program files and their shortcuts (i.e. executable files) can be associated with certain types of files, e.g. text documents, and are automatically run by the operating system when the user selects or double clicks the file. Similarly, the user interface may also provide context sensitive feedback, such as changing the appearance and/or color of the mouse pointer or cursor. In addition, context sensitive feedback may also be used in video games, where it changes a button's function based on the player being in a certain position or place and needing to interact with an object.
  • Relational databases are currently the predominant choice for storing data like financial records, medical records, personal information, and manufacturing and logistical data. Nowadays, large-scale data or information processing can involve various types of collection, extraction, warehousing, analysis, and statistics. For example, organizing and matching data by using some common characteristics found within the data set would result in new groups of data that can be organized and are easier for many people to understand, search, index, and manipulate.
  • By describing the contents and context of data files, the quality of processing the original data files can be greatly increased. For example, a webpage may include metadata specifying what language was used in writing its code, what tools were used to create it, and where to go for more on the subject (higher-level concepts that describe the data), thus allowing browsers to automatically improve the experience of users. The results of any large-scale data processing can be an extensive set of metadata, data, and relationships that may be used in a search engine, for example, to provide a possible set of information related to a term that is used in a search query. For example, search engines have used and generated enormous amounts of data and metadata that are used to provide links to content that may be of possible interest to a user based on what the user is searching for.
  • As stored digital information has increased tremendously in size, the ability of a user to effectively use personal data, corporate data, or publicly available data has also increased manyfold, although it still falls short of the potential of reasoning about the large amount of data that is available and continues to grow at an astounding pace. Therefore, there exists a need to more effectively use and reason about the data, with a richer augmented user experience while reading, writing, searching, or using digital data information.
  • Large amounts of data can be stored using various types of relational databases, network based storage, or cloud based storage. These are but some examples of predominant choices for storing data and information like financial records, medical records, personal information, and manufacturing and logistical data. Nowadays, large scale data or information processing can involve various types of collection, extraction, warehousing, analysis, and statistics. For example, organizing and matching data by using some common characteristics found within the data set would result in new groups of data that can be organized and are easier, for many people, to understand, search, index, and manipulate.
  • As stored digital information has increased tremendously in terms of size or amount of data information, the ability of a user to effectively use personal data, corporate data, or publicly available data has also increased manyfold. Additional problems are encountered in finding relevant data for a user's needs. Knowledge discovery platforms and systems, as described in the related patent applications, can be used to generate augmented knowledge using such large scale data that meets the needs of a user. The augmented knowledge provided to the user can be highly relevant to another user or another knowledge discovery system. However, the other user or system may have certain distinct criteria, characteristics, preferences, or interests that are different from those of the first user.
  • Thus, an increase in accuracy and efficiency can be achieved by benefiting from the augmented knowledge already obtained for a first user and by regenerating or modifying the augmented content and knowledge to be tailored to a second user's interests, profile, or preferences. Therefore, there exists a need for a knowledge discovery system that can leverage the knowledge discovered for a first user to provide augmented knowledge and/or newly discovered or augmented knowledge based on a second user's preferences or interests.
  • SUMMARY
  • This disclosure presents new and useful methods and systems to provide multilevel context sensitive augmented experience, browsing, data exploration, knowledge discovery, and e-learning. In accordance with one embodiment, this multilevel context sensitive augmented content is presented using overlaid layers on top of the digital information (reference content or original content) being viewed by a user. Furthermore, the overlaid layers can be transparent or translucent for a non-obtrusive user experience. Thus, the user is given the ability to interact with the original content while viewing dynamically updated augmented content on top of the original content; the updated augmented content is generated based at least on the user interaction with the reference content. Furthermore, the user can manipulate the original content and its associated or related categories and other relevant augmentation data to generate more relevant and meaningful augmentation while viewing the augmented content on top of the reference content.
  • In accordance with one embodiment, a system is provided for generating and presenting augmented content on a translucent display layer overlaid on top of a reference content display layer on the same display screen. The augmented content is generated using relevant features of the reference content or the displayed portion of the reference content. The generation of the augmented content is further customized using user-relevant characteristics, attributes, history, and relevant features in relation to the reference content, such as generic categories and relationships. In addition, the user controls the position and size of both the reference content display layer and the augmented content display layers on the same display screen, as well as the visibility and hiding of all display layers. Furthermore, the user controls the sharing of the same display screen by the reference content and augmented content display layers.
  • In accordance with one embodiment, the system for generating and presenting augmented content provides a set of augmentation filters (topics and categories) based on the reference content to aid the user in further customizing the augmentation filters to suit his/her interests. The generated augmentation content is one or more of online documents, web pages, and web links. The generated augmentation content can be a customized version produced in a variety of ways, such as presenting a summary of the augmented content or deleting unnecessary links and ads. In accordance with one embodiment, the generated augmentation content is based at least on one of (i) a set of criteria associated with the reference content, (ii) user customization of augmentation filters, (iii) user interaction with the reference content, and (iv) user interaction with the generated augmented content.
  • In accordance with one embodiment, the system can employ the same methods and algorithms to enable the user to custom build a knowledge graph of concepts and relationships based on information retrieved from structured and unstructured data residing in a private or public data store or other public repositories. The Augmentation System relies on these data sources along with the user's feedback and interests to generate on the fly relevant augmentation data for the task at hand. For example, a physician can utilize this system to custom build a knowledge graph for a patient based on the physician's experience and knowledge, the patient's history, the patient's known diseases, symptoms, and ailments, and known public data related to the patient's case. Such a system will enable the physician to make educated and informed decisions instead of being mired in a plethora of sources where it would be extremely hard for the physician to manually extract reliable and relevant data in an efficient and useful way.
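  Purely as an illustrative sketch of such a custom-built knowledge graph (the clinical entries and relation names are invented examples, not medical guidance, and no specific graph encoding is mandated by this disclosure):

      class KnowledgeGraph:
          def __init__(self):
              self.edges = {}  # concept -> list of (relation, concept, source)

          def add_relation(self, a, relation, b, source):
              self.edges.setdefault(a, []).append((relation, b, source))

          def neighbors(self, concept):
              return self.edges.get(concept, [])

      kg = KnowledgeGraph()
      kg.add_relation("history:hypertension", "contraindicates", "drug:X", "private:EHR")
      kg.add_relation("symptom:fatigue", "associated_with", "condition:anemia", "public:literature")
      kg.add_relation("condition:anemia", "suggests_test", "lab:CBC", "physician:experience")

      for relation, concept, source in kg.neighbors("condition:anemia"):
          print(relation, concept, "(", source, ")")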
  • In accordance with one embodiment, the system for generating and presenting augmented content dynamically updates the augmented content by utilizing additional filters, metrics, and customization provided by the user as a result of the generated augmented content. Furthermore, the user can save any or all the data associated with a particular session of data augmentation. This will enable the user to build on the augmentation of previous sessions.
  • In accordance with one embodiment, the system for generating and presenting augmented content generates global and local augmentation content associated with the reference content and any selected or highlighted part of it. For example, the system generates a plurality of global augmentation content based on the augmentation filters associated with the overall reference content, and the system generates a plurality of local augmentation content based on a specific part of the reference content that is selected or flagged by the user, or currently being viewed by the user.
  • In accordance with one embodiment, the system for generating and presenting augmented content enables collaborative augmentation, e.g. a user can share the generated augmented content with other users. Furthermore, the user can share the content augmentation filters, or the settings used to generate the augmented content with other users.
  • In accordance with one embodiment, the system for generating and presenting augmented content enables a user to make use of nested hierarchical content augmentation capabilities. A user can request content augmentation using at least a portion of a previously generated augmented content. The previously generated augmented content serves as new reference content for the system to generate and present to the user a new augmented content. The user can traverse the content augmentation graph to further customize the content augmentation at any level.
  • In accordance with one embodiment, the display screen may be physically attached to an electronic device, e.g. a mobile device, a handheld device, a tablet, etc., or the display screen may be physically separate from the electronic device. For example, a touch display allows a user to interact with the display screen and to control both the position and size of the various display layers on the display screen. The display screen can communicate with a remote electronic device such as a remote server or a mobile device. Alternatively, the user can control the position and size of all display layers on the physically detached display screen using the electronic device.
  • In accordance with one embodiment, the user interaction with the reference content includes at least one of a manipulation of a region of the first display layer, a manipulation of a region of the second display layer, hiding of the first display layer, hiding of the second display layer, saving the first set of augmented content, saving a portion of the first set of augmented content, modifying the translucency of the second display layer, a selection of a region of the first display screen, a manipulation of a region of the first display screen, one or more user gesture made onto the first display screen, an activation of a button of the first display screen, an activation of a button of the electronic device, and using a human interface device to communicate the user interaction to the electronic device.
  • In accordance with one embodiment, the reference content, local content, and augmentation content are displayed using multiple display layers by means of one or more display screens. The display screen comprises electronic system to receive and/or transmit information to an electronic device. The user interaction with the reference content includes the manipulation of one or more regions of at least one display layer, a manipulation of one or more regions of at least one display layer of the augmented content, hiding of any one or more of the display layers, saving the first set of augmented content, saving a portion of the first set of augmented content, modifying the translucency of any one of the display layers, a selection or a manipulation of one or more regions of any one of the display screens, one or more user gesture made onto the display screen, an activation of a button of the display screen, an activation of a button of the electronic device, and using a human interface device to communicate the user interaction to the electronic device or to the display screen.
  • In accordance with one embodiment, this disclosure refers to augmenting a given content based on a number of manually defined and automatically extracted parameters to generate a set of local and global data elements. The set of local and global data elements can be used in a variety of application specific augmentation systems to enhance a user's experience while interacting with the given content.
  • In accordance with one embodiment, this disclosure facilitates the construction and presentation of a user-customized network of concepts, objects and relationships that serve to augment the content at hand for the purpose of knowledge discovery, learning, and a richer user experience in browsing and/or interacting with data information. Furthermore, the constructed network can be saved and further augmented over time for richer and more efficient user experience. This is in contrast to having a pre-built network of concepts and relationships that a user can access. This system generates a network that can be customized and tailored based on the user's interests.
  • In accordance with one embodiment, this disclosure facilitates a system that provides the user the ability to fully control the generated augmented content by virtue of changing the scope of certain topics, e.g. expanding or specifying a narrower sub-topic, based at least on one of a defined theme, predefined themes, and categories. Therefore, the augmented content can serve to further explain, define, and to elaborate and expound on reference content or a selected portion of reference content being viewed, observed, or interacted with by a user.
  • In accordance with one embodiment, this disclosure can be used to aggregate information related to a reference or selected content by customizing the augmentation filters to achieve the desired or intended results. For example, the information, reference content, or the generated augmented content can include rich media like video, audio, and images as well as text. Various filters can be customized to enable the user to increase the relevance of the generated augmented content to the intended user objective. In addition, a hierarchical system of content augmentation may be defined and customized by a selected theme or category. The generated augmented content and its display layers can be monetized for ads and other monetization purposes.
  • In accordance with one embodiment, this disclosure enables real-time manipulation of reference and augmented content for an enhanced and richer User Experience (UX). In addition, collaboration and sharing of augmented content provide an increase in value and productivity to a user. Similarly, collaboration and sharing of augmentation filters and settings provide additional richness and ease of viewing, browsing, sharing, and manipulation of reference and augmented content. Furthermore, the user is able to control the presentation style of the generated augmented content, e.g. as raw links, a concise summary of augmented content, or other methods that capture the essence of the augmented content. The presentation style of the generated augmented content may be for data analysis, research, information, monetization, commercial, or educational purposes.
  • In accordance with one embodiment, a system is provided for generating and presenting augmented content on a translucent display layer overlaid on top of a reference content display layer on the same display screen. The augmented content is generated using relevant features and filters extracted from the reference content or the displayed portion of the reference content. A feature is a pattern that can be extracted or inferred from the content at hand. Feature extraction is the process of reducing the dimensionality of a document by capturing a set of features which reflect the most relevant and salient properties of that document. For example, a feature can be a keyword in the content, the title of the content, or other metadata that can be extracted or inferred from the content, its link, or any embedded content or link to other content. A feature could also correspond to a concept such as a name, a topic, or an event that can be extracted or inferred from the content. A group of features is combined using an association rule to form a pattern, a complex pattern, or a filter. A filter may comprise or describe a relationship between features, a collection of features, or a group of features. A filter may also reflect a correlation between a set of features. A category is a grouping of features or a grouping of multiple sets of features. A category may correspond to a classification of entities or concepts that share some property or relationships. A category may be formed using a filter, a group of filters, or any combination of filters and features. An association rule to combine a feature, a set of features, a filter, or a set of filters can also be used to generate a category, a set of categories, a new feature, a new filter, a new set of features, or a new set of filters. Thus, the generation of the augmented content can use (i) any one of a feature, a filter, or a category, or (ii) any association between, or combination of, a filter, a feature, and a category. The generation of augmented content is further customized using user-relevant characteristics, attributes, history, and other relevant features in relation to the reference content, such as generic categories and relationships. In addition, the user controls the position and size of both the reference content display layer and the augmented content display layers on the same display screen, as well as the visibility and hiding of all display layers. Furthermore, the user controls the sharing of the same display screen by the reference content and augmented content display layers.
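  The feature/filter/category vocabulary defined above can be pictured with simple data structures; the following sketch is one possible encoding chosen for illustration (the AND/OR combination modes and class names are assumptions, not requirements of the disclosure):

      from dataclasses import dataclass, field

      @dataclass(frozen=True)
      class Feature:
          name: str            # e.g. a keyword, title term, topic, or named entity
          kind: str = "term"   # "term", "concept", "metadata", ...

      @dataclass
      class Filter:
          # An association rule over features: require all (AND) or any (OR) of them.
          features: frozenset
          mode: str = "all"

          def matches(self, content_features):
              if self.mode == "all":
                  return self.features <= content_features
              return bool(self.features & content_features)

      @dataclass
      class Category:
          name: str
          filters: list = field(default_factory=list)  # grouping of filters/features

          def matches(self, content_features):
              return any(f.matches(content_features) for f in self.filters)

      page = {Feature("influenza"), Feature("vaccine"), Feature("trial")}
      flu = Category("Influenza research",
                     [Filter(frozenset({Feature("influenza"), Feature("vaccine")}))])
      print(flu.matches(page))  # True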
  • In accordance with one embodiment, the user interaction with any one of the reference content, a portion of the reference content, an augmented content, and a portion of an augmented content includes at least one of a manipulation of a region of the first display layer, a manipulation of a region of the second display layer, hiding of the first display layer, hiding of the second display layer, hiding a display layer, saving at least a portion of the reference content, saving at least a portion of the augmented content, saving at least a portion of one or more of a set of filters, a set of features, a set of categories, a set of metrics, a set of user preferences, modifying the translucency of a display layer, a selection of displayed content using a region of the first display screen, a manipulation of displayed content using a region of the first display screen, one or more user gesture made onto the first display screen, an activation of a button of the first display screen, a user input made using the electronic device, and using a human interface device to communicate the user interaction to the electronic device.
  • Described herein is a system and method for performing knowledge discovery in a computer system having memory, including coupling the memory to a processor via a memory channel, wherein the computer system is operable to have independent access to the memory channel and to a storage medium where the discovered knowledge representing augmented data is stored. A Knowledge Discovery Platform (KDP) system is described to enable one or more users of the KDP system to allocate and share augmented content or discovered knowledge with one or more end/other users using the computer system, multiple computer systems that can access the storage medium where the discovered knowledge or augmented data is stored, or via any one or more of a plurality of means for communication, wired or wireless, including devices for accessing such communication systems, means for notification, and/or through a process using a KDP system. The one or more users of a KDP system can invoke a knowledge discovery request or data augmentation request on certain user content. The user content may be displayed using a wireless device or a monitor coupled to the computer system. The KDP system automatically generates augmented content and displays the augmented data based on the user's parameters, preferences, or interests. The one or more users can tailor or annotate the augmented content in accordance with certain parameters, preferences, or interests.
  • Also described herein is a method for performing data augmentation request in a computer system for providing enriched or augmented sharing of the knowledge discovered (or already augmented user content) between one or more users of KDP system and with one or more end users (clusters of users), another KDP system, or a process using another KDP system. A receiving end user or process would receive the augmented content information or a notification pointing to the augmented content where the augmented content can be accessed, downloaded or manipulated by the end user or process using the KDP system.
  • Also described herein is a method for performing a data augmentation request in a computer system in which the receiving end user (or a process using another instance of a KDP system) may perform automated processing of the received notification of the augmented content using certain preferences, preconfigured preferences, or certain programmable parameters, such that the KDP system regenerates the shared augmented content using the preferences, preconfigured preferences, or certain programmable parameters of the process. The regenerated augmented content can be parsed, or an additional invocation of the KDP system may be used, to further refine the augmented content or to share a customized version of the received augmented content. The augmented content can be stored in the computer system or transmitted to be processed further through additional computer systems, KDP systems, or other computer systems or processes that are dedicated to processing knowledge discovery requests or data augmentation requests in response to one or more preferences, specific interests, and/or target markets. Real-time and theme based augmentation may also be used to further enhance the user's experience.
  • In accordance with one embodiment, the present application discloses knowledge discovery platform systems and methods to provide a first user the ability to generate augmented content, and to allocate, regenerate, or modify the augmented content using stored information in a computer system, and the stored information is associated with the first user. Furthermore, the stored information includes preferences or other programmable parameters associated with a second user, a cluster of users, or augmented content of the first user. The stored information may also include any one of a profile of a second user, a parameter of an executable code or process, preconfigured preferences for a registered KDP user, and preconfigured preferences for an unregistered KDP user. The computer system is programmable to collect the stored information using a temporary (volatile memory) or permanent storage (non-volatile memory) of the profile of the second user, the parameter of the executable code or process, the preconfigured preferences for registered KDP users, and preconfigured preferences for an unregistered KDP user.
  • In accordance with one embodiment, the present application discloses knowledge discovery platform systems and methods to store a KDP's profile of a user, knowledge discovery request, or data augmentation request and provide a first user the ability to generate augmented content using new preferences and/or KDP's profile of the first user, and to allocate, regenerate, or modify the augmented content using preferences, parameters, KDP's profile, or information associated with the augmented data or discovered knowledge of a second KDP user.
  • In accordance with one embodiment, the present application discloses knowledge discovery platform systems and methods to (i) provide a first user the ability to generate augmented content using at least one of a first user's preferences, first user's manual annotation, and first user's KDP profile; and (ii) automatically allocate, regenerate, or modify the augmented content using at least one of a designated process, preconfigured preferences of a designated process, preferences of a second user, a KDP's profile of a second user, preconfigured preferences of a registered KDP user, and preconfigured preferences for an unregistered KDP user. The designated process can be a process or part of a process being executed using a KDP system, a computer system, a compute server or a wireless device.
• In accordance with one embodiment, one or more users of KDP are able to allocate and share augmented content (or discovered knowledge) with one or more end users using any one or more of means for communication, means for notification, and a KDP process. The one or more users of KDP invoke the KDP system on certain content, and the KDP system automatically generates augmented content. The one or more users can tailor or annotate the augmented content in accordance with certain parameters, preferences, or interests. The one or more users then share this augmented content with one or more end users or processes. A receiving end user or process would receive the augmented content information or a notification pointing to the augmented content, where the augmented content can be accessed, downloaded, or manipulated by the end user or processed using the KDP system.
  • In accordance with one embodiment, a process may perform automated processing of the received notification of the augmented content using certain preconfigured preferences, or programmable parameters, such that the KDP system regenerates the shared augmented content using the preconfigured preferences or the programmable parameters of the process. The regenerated augmented content can be parsed or additional invocation of KDP system may be used to further refine the received augmented content or for sharing a customized version of the received augmented content. The received augmented content can be processed through additional systems or using other processes that are dedicated to one or more preferences, specific interests, localized parameters, geographical location, and/or a target market.
  • In accordance with one embodiment, a user is enabled to share the augmented content (or a particular knowledge graph or other generated content) with one or more designated end users or processes in accordance with one or more predefined service levels, customized market, geographical locations, seasonal or timing events, and/or localized preferences per end user.
  • In accordance with one embodiment, if an end user is a registered KDP user then certain privileges or service levels can be invoked or processed to enrich or further customize the augmented content according to the end user profile.
  • In accordance with one embodiment, if an end user is not a registered KDP user then the KDP system further customizes the regeneration of the received augmented content by (i) restricting or limiting certain privileges or service levels, (ii) enabling certain privileges or redirecting to certain service levels, or (iii) using certain localized parameters that are associated with the end user.
• In accordance with one embodiment, certain service levels or privileges may be invoked once the end user has become a registered KDP user. A registered KDP user can tailor and automate the knowledge discovery and its broadcasting to the masses. This provides a registered KDP user with a rich and unique offering that will incentivize growth in the number of registered KDP users and enables, for example, people to discover stories or information, annotate them, and share them with their friends or the public at large on any social or public network or forum; the KDP system in turn further customizes the delivered or shared stories or information using any one of the embodiments disclosed above or any combination of one or more of the embodiments disclosed above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The attached drawings, which are incorporated into and constitute a part of this specification, illustrate one or more examples of embodiments. These drawings together with the description of example embodiments serve to explain the principles and implementations of the embodiments.
  • FIG. 1 shows Discovery Patterns 10100.
  • FIG. 2 shows block diagram for an Augmentation System 11500.
  • FIG. 3 shows block diagram for Causality and Augmentation System 11600.
  • FIG. 4 shows block diagram of a Causality Graph Synthesis 11700.
  • FIG. 5 shows block diagram of the Augmentation System with the causality graph 11900.
  • FIG. 6 is a block diagram of a data augmentation system 12100 that is used to generate augmented content for a given reference content.
  • FIG. 7 is a block diagram of a hierarchical augmentation system 12200 that is used to support the generation of multilevel augmented content using multiple reference content.
  • FIG. 8 is a block diagram of a hierarchical augmentation system 12300 that is used to support the generation of multilevel augmented content using multiple reference content along with a controller that manages the nested augmentation functions and the user's interaction with the generated augmented content.
  • FIG. 9 is a block diagram of a data augmentation system 12400 having the capability of manipulating, controlling and displaying both the reference content and the augmented content simultaneously and dynamically.
  • FIG. 10 is a block diagram of a relevant augmented content extraction 12500 that is used as a subsystem of a data augmentation system.
  • FIG. 11 is a block diagram of a relevant augmented content extraction 12600 that is used as a subsystem of a data augmentation system.
  • FIG. 12 is a display example of the generated augmented content of using multiple display layers.
  • FIG. 13 is a display example of a simulated use case of a data augmentation system invoked while viewing a news article.
  • FIG. 14 is a display example of a simulated use case of a user interaction with a data augmentation system invoked while viewing a news article.
  • FIG. 15 shows an example block diagram of knowledge discovery system 13100 in accordance with one embodiment.
  • FIG. 16 shows an example block diagram of knowledge discovery system 14100 in accordance with one embodiment.
  • FIG. 17 shows an example block diagram of Knowledge Discovery Tailoring and Annotation system 14120 in accordance with one embodiment.
  • FIG. 18 shows an example block diagram of Knowledge Sharing and Broadcasting 14130 in accordance with one embodiment.
  • DETAILED DESCRIPTION
• The present disclosure presents techniques, systems and methods to provide a user with global and local context sensitive augmented content to enhance the user experience while interacting with digital information, be it while reading, writing, drawing, browsing, searching, viewing, or otherwise using digital data information such as financial, medical, business or corporate data, social media data, or any data that is accessible locally, on the web, and/or remotely through web based services. These techniques, systems and methods are applicable to various computing platforms such as hand-held devices, desktop computers, notebook computers, mobile devices, as well as compute servers.
  • The term “coupled” is defined as connected, although not necessarily directly, and not necessarily mechanically. The terms “a” and “an” are defined as one or more unless this disclosure explicitly requires otherwise. The terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”) and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs. As a result, a method or device that “comprises,” “has,” “includes” or “contains” one or more steps or elements possesses those one or more steps or elements but is not limited to possessing only those one or more elements. Likewise, a step of a method or an element of a device that “comprises,” “has,” “includes” or “contains” one or more features possesses those one or more features but is not limited to possessing only those one or more features. Furthermore, a device or structure that is configured in a certain way is configured in at least that way but may also be configured in ways that are not listed.
• Example embodiments are described herein in the context of a system of one or more mobile devices, electronic devices, handheld devices, computers, servers, firmware, and software. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Other embodiments will readily suggest themselves to such skilled persons having the benefit of this disclosure. Reference will now be made in detail to implementations of the example embodiments as illustrated in the accompanying drawings.
  • In the interest of clarity, not all of the routine features of the implementations described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific and application-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the art having the benefit of this disclosure.
  • In general, various information processing techniques and algorithms can be used to provide the augmented data system with global and local context sensitive augmented content. In the following paragraphs certain definitions and representations of data flow models are presented and discussed without limitations on how each model may be implemented whether by hardware, software, firmware, or any combination thereof.
  • Multilevel, e.g. global and local, context sensitive augmented content would increase productivity and enhance a user's experience while viewing or interacting with data for the purpose of learning, reading, writing, drawing, browsing, searching, discovering, viewing images or any type of user interaction with digital data information whether structured or unstructured (e.g. financial, health, manufacturing, and corporate data). The digital data information may be stored locally or remotely via a corporate server or in the cloud. Additionally, private as well as public sources of data may be used or selected by the user for the ultimate personalized range of choices that may be used to further narrow down or expand the augmented content being presented.
• Additionally, multilevel context sensitive augmented content would increase productivity and enhance business intelligence for the enterprise by providing context sensitive augmented content that is generated by dynamically mining and analyzing structured and unstructured enterprise data and/or possibly leveraging structured and unstructured publicly available data for further improving user experience. In addition, multilevel context sensitive content augmentation filters provide the ability to dynamically mine data on the fly based on a new input from a user. For example, a new input from a user can be the selection of new text or a portion of the reference content, or it can be feedback such as elevating the priority or weight (e.g. like) or decreasing the priority or weight (e.g. dislike, delete, dismiss) of a single augmented content, a category of augmented content, or a theme of augmented content. Furthermore, by leveraging the history and/or the user's personal preferences, the multilevel context sensitive augmented content can be further in tune with what the user would like to see or expects to see in the augmented content being generated and presented.
• In accordance with one embodiment, a feature of the multilevel context sensitive augmented content is that the augmented content is generated either in the cloud or locally using sophisticated information retrieval algorithms or a set of heuristics so as to enable large-scale data processing, information retrieval, and web mining. Extracting a feature set from a web page is a known problem for which various algorithms, methods, and solutions exist, and this system can use existing research or methodologies to extract a feature set. Furthermore, this system employs a set of heuristics and metrics that efficiently extract a set of features that characterize the reference content at hand. These heuristics rely on embedded hints, metrics, meta-data, or other embedded knowledge and information that can be extracted from the structure, URL link, embedded links, title of the document, or other types of data that may be directly or indirectly related to the reference content, along with feedback provided by the user.
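• For illustration only, a minimal sketch of such a heuristic feature extractor is shown below in Python; the tokenizer, stop-word list, weighting constant, and function names are assumptions introduced here and are not part of the disclosed system.

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Hypothetical stop-word list and tokenizer; a production system would use a
# richer information retrieval pipeline as described above.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "in", "on", "for", "to", "with"}

def tokenize(text):
    return [t.lower() for t in re.findall(r"[A-Za-z][A-Za-z\-]+", text)
            if t.lower() not in STOP_WORDS]

def extract_features(url, title, body, embedded_link_texts, boost=3):
    """Weight tokens found in the URL path, title, and link anchors more heavily
    than tokens found only in the body, per the heuristics described above."""
    weights = Counter()
    for token in tokenize(body):
        weights[token] += 1
    hinted = tokenize(urlparse(url).path.replace("-", " ")) + tokenize(title)
    for text in embedded_link_texts:
        hinted += tokenize(text)
    for token in hinted:
        weights[token] += boost          # embedded hints get a higher weight
    return weights.most_common(10)       # top-ranked tokens form the feature set

print(extract_features(
    "https://example.com/news/public-health-policy-update",
    "Public Health Policy Update",
    "The new policy affects hospitals, clinics, and public health agencies.",
    ["health policy", "hospital funding"]))
```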
  • In accordance with one embodiment, a feature of the multilevel context sensitive augmented content is that the information retrieved and knowledge constructed can be saved and called upon in future augmentation tasks and sessions.
• In accordance with one embodiment, a feature of the multilevel context sensitive augmented content is that the augmented content is presented through a translucent layer on top of the original content being viewed by the user. This provides a non-obtrusive content augmentation that is hidden or made available whenever a user disables or enables the global and local context sensitive augmented content application. Relevant augmented content is displayed on a translucent layer placed over the original content being viewed by the user. Hence, the augmentation system provides a less obtrusive and more efficient interaction, browsing and exploration experience.
• In accordance with one embodiment, multilevel corresponds to at least two levels, a global level and a local level. Global and local relevant features of reference content may be defined as follows: a global relevant feature corresponds to a feature or a theme common throughout the reference content, and a local relevant feature corresponds to a feature strongly related to a locality within the reference content. One method of dynamically updating augmented content can be achieved by leveraging real-time user feedback, such as elevating the priority of or dismissing augmented content as it is presented to the user. If an augmented content's priority is elevated, its weight increases and the metadata that describes this augmented content is promoted, which in turn updates existing augmentation filters and results in generating and presenting new augmented content based on the new metrics. For example, if an augmented content describing certain public policy information is promoted, then that augmented content's priority is increased, and the priorities of all augmented content that reference public policy or government policy are increased. In addition, the augmented content can be dynamically updated based on user interaction, e.g. selection and/or clicking, within the reference or augmented content in real time. There are various means to implement the augmented content presentation layers, such as dials for global and local augmented content, or a scroll-area of small windows for various augmented content. Describing all these various means to implement the augmented content presentation layer is not necessary to understand this disclosure. Furthermore, a person skilled in the art would understand and would be able to employ many different means to implement augmented content presentation layers without departing from the spirit of this disclosure.
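• For illustration only, the following Python sketch shows one way the promotion feedback described above could propagate weight to related augmented content; the dictionary layout, relatedness map, and boost factors are assumptions chosen for clarity rather than a prescribed implementation.

```python
# Illustrative sketch of the feedback-driven weight update described above.
# The data layout (dicts keyed by content id) and the boost factors are
# assumptions, not a prescribed implementation.

augmented = {
    "a1": {"weight": 1.0, "tags": {"public policy", "health"}},
    "a2": {"weight": 1.0, "tags": {"government policy"}},
    "a3": {"weight": 1.0, "tags": {"sports"}},
}

RELATED = {"public policy": {"government policy"}}  # simple relatedness map

def promote(content_id, boost=1.5, related_boost=1.2):
    item = augmented[content_id]
    item["weight"] *= boost                      # elevate the promoted item
    promoted_tags = set(item["tags"])
    for tag in item["tags"]:                     # promote related metadata too
        promoted_tags |= RELATED.get(tag, set())
    for other_id, other in augmented.items():
        if other_id != content_id and other["tags"] & promoted_tags:
            other["weight"] *= related_boost     # lift related augmented content

promote("a1")
print({k: round(v["weight"], 2) for k, v in augmented.items()})
# a1 is boosted directly; a2 is lifted because "government policy" is related;
# a3 is unchanged.
```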
• In accordance with one embodiment, generating augmented content may result in more data than can be shown on the display, and this data can be stored in a deep queue. A deep queue means that there is more augmented content (data) in the queue than what is displayed on the screen. For example, not all mined augmented content can be displayed simultaneously due to physical screen size limitations or the display layer size. A user can hover over the queue or press an arrow to scroll through the augmented content in the queue. In addition, it is important to note that the augmented content being presented to the user may comprise actual data, a snapshot of the actual data, a processed portion of the actual data, or a link to the location where the actual data can be retrieved.
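• For illustration only, the following Python sketch models the deep queue described above as a sliding display window over a larger store of augmented content; the class name, window size, and scrolling mechanics are assumptions.

```python
from collections import deque

class DeepQueue:
    """Sketch of the 'deep queue' described above: it holds more augmented
    content than fits on screen and exposes a sliding display window.
    Names and the window mechanics are illustrative assumptions."""

    def __init__(self, items, window_size=3):
        self.items = deque(items)
        self.window_size = window_size
        self.offset = 0

    def visible(self):
        return list(self.items)[self.offset:self.offset + self.window_size]

    def scroll(self, step=1):
        max_offset = max(0, len(self.items) - self.window_size)
        self.offset = min(max(0, self.offset + step), max_offset)
        return self.visible()

queue = DeepQueue([f"augmented content {i}" for i in range(10)])
print(queue.visible())   # what fits on the display layer
print(queue.scroll(2))   # user presses an arrow to scroll through the queue
```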
• Theme-based augmented content can further enhance a user's experience by presenting a set of themes. In accordance with one embodiment, when the user selects or deselects a theme, new or updated augmented content is presented to the user. One option to expedite augmentation and improve its quality is to rely on the user's preferences and feedback. When the application is invoked, a set of categories/themes can be presented to the user. These constitute meta-data. By relying on the user's choices of themes, augmentation can be enhanced and filtered. For example, a research paper that deals with the AIDS virus would trigger a set of themes such as Pharmaceuticals, Discrimination, etc. The user who is interested in science and pharmacology but not in the social aspects related to AIDS would deselect 'Discrimination'. Thus, all augmented content presented will be tailored to refer to categories that are related to science and other related aspects of the research. The theme can further be defined by a category or a set of related categories. This serves to prune the augmented data and only present the relevant data that is of interest to the user and the task being carried out at that moment.
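• For illustration only, the following Python sketch shows theme-based pruning in which deselecting a theme removes augmented content whose categories fall under that theme; the theme and category names are assumptions used for the example.

```python
# Sketch of theme-based pruning: deselecting a theme removes augmented content
# whose categories fall under that theme. Theme/category names are illustrative.

THEMES = {
    "Pharmaceuticals": {"drug trials", "virology", "treatment"},
    "Discrimination": {"stigma", "employment law", "social impact"},
}

augmented_items = [
    {"title": "New antiretroviral trial results", "category": "drug trials"},
    {"title": "Workplace stigma study", "category": "stigma"},
    {"title": "Protease inhibitor overview", "category": "treatment"},
]

def filter_by_themes(items, selected_themes):
    allowed = set()
    for theme in selected_themes:
        allowed |= THEMES.get(theme, set())
    return [item for item in items if item["category"] in allowed]

# The user deselects "Discrimination", keeping only science-related themes.
for item in filter_by_themes(augmented_items, ["Pharmaceuticals"]):
    print(item["title"])
```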
• A multilevel context sensitive augmented content application can be implemented as a stand-alone application, on top of another application, or as an extension for applications, e.g. a browser extension. In accordance with one embodiment, further refinement or fine tuning of various options for customization of the augmentation system, such as aggregating, mining, filtering, and presenting various aspects of data or metadata, can be performed dynamically in real time. In addition, the customization of the augmentation system may be performed based on at least one or more of a user's feedback, behavior, attributes, characteristics, theme, topics, and interests. Also, when the augmentation system presents a list of tags/categories, the user can provide feedback in the form of liking/disliking a tag. This is similar to promoting or dismissing an augmented content. Therefore, in accordance with one embodiment, the augmented content can be updated live. Furthermore, this user feedback would also result in updating various subsystems such as the underlying data-mining, statistical computing, machine-learning, or other information retrieval algorithms or heuristics. These updated subsystems are used to generate or create new signatures, metrics, or features based on the user's feedback, e.g. liked/disliked tags, where the new signatures are used to generate new augmented content or update the currently presented augmented content.
• In accordance with one embodiment, a feature of a system for generating and presenting multilevel context sensitive augmented content is the ability to utilize online and offline mining and analytics for augmentation. For example, mining and processing can be performed in real time or in batch mode, with data stored in a data store (local or remote) or real-time augmented content presented to the user. The stored data can be used for future augmentation. Metadata and other relevant data elements can also be annotated in real time to capture the user's preferences and experiences. In addition, metadata and other relevant data elements can be stored in a central repository to be leveraged for future augmentation of the same or similar content. Briefly, metadata is data that describes other data. For example, 'public health' is a category that encompasses diseases; this higher level category 'public health' is metadata for diseases.
  • In accordance with one embodiment, a multilevel context sensitive augmented content system uses at least two levels, a global level and a local level. The following explains the difference between global and local augmented content. Global augmented content refers to augmented data that pertain to the overall document that the user is currently browsing, exploring, or interacting with. A local augmented content can refer to augmented content based on a particular piece, paragraph, sentence, word, image, icon, symbol, etc. . . . of that document that the user is currently browsing, exploring, or interacting with. Global & local augmented content are presented using a dynamic deep queue, and the user can control the displaying of at least a portion of the augmented content. Content sources for augmentation can be provided from many sources. An example of such content sources includes but is not limited to a user's own documents and data on desktop, web-content, social media sites, enterprise data-marts, and local and remote data stores, ontologies, other categorization, and/or semantic or relationship graphs.
• The multilevel context sensitive augmented content can be successfully implemented to augment a user's browsing experience as discussed above. In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content can be successfully implemented as an application for augmented user experience (UX). The system can increase productivity and provide an augmented data-mining and data-exploration platform, an augmented e-learning and e-research system, augmented desktop-based and mobile-based browsing, exploration, research, discovery, and learning platforms, data augmentation for better healthcare products and services, data augmentation for better educational products and services, an augmentation system for better content management and relationship platforms for both enterprise and consumer applications, enhanced online-shopping research and UX, enhanced marketing campaigns, and an enhanced news access UX, to name but a few of the applications benefiting from a system for generating and presenting multilevel context sensitive augmented content.
• Semantic processing is the process of reasoning about the underlying concepts and expressing their relationships. In addition to the various augmentation methods described above, the following semantic based techniques can also be used in a system for generating and presenting augmented content. In accordance with one embodiment, utilizing existing tags in public sources, utilizing batch-processed tags as a cloud application, semantic processing of selected content to generate a match to an existing tag, semantic processing to generate augmented content on the fly, and utilizing user feedback for promoting and dismissing augmented content are but examples of methods to provide better user-relevant augmented content. Generating augmented content on the fly can also be accomplished by using a feedback mechanism provided by the user to enable mining and generating of new augmented data to be presented to the user.
• In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content is used to improve the analytics of large data sets by leveraging pre-processed data and already generated relationships. Given content that is the result of a statistical data mining and exploration phase on small or large amounts of data, be it remote or local, the system extracts the correlation metrics and other signatures that demonstrate a meta-relationship and leverages them in other data-mining and analytics tasks and to generate augmented content. For example, when a user presents some keywords to a search engine, the user gets a set of related links in addition to some ads that could very well be related to the keywords the user has entered or to some personal data known about or extracted for the user. These presented links and ads have gone through a huge amount of processing and computation in the cloud. By knowing that a relationship or a meta-relationship exists between the keywords, the links, and possibly other content pushed to the user such as ads, the analytics operation can extract and store these signatures and leverage them for future browsing or for presenting context sensitive augmented content.
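• For illustration only, the following Python sketch records pairwise co-occurrence 'signatures' between keywords and the links returned for them so that later augmentation can reuse these meta-relationships; the storage format and example URLs are assumptions.

```python
from collections import Counter
from itertools import combinations

# Sketch of extracting and storing co-occurrence "signatures" from content that
# has already been processed upstream (e.g. keywords plus the links returned
# for them). The storage format is an assumption for illustration.

signature_store = Counter()

def record_signatures(keywords, result_links):
    """Record pairwise co-occurrence between keywords and returned links as a
    lightweight meta-relationship that later augmentation sessions can reuse."""
    terms = [k.lower() for k in keywords] + [link.lower() for link in result_links]
    for a, b in combinations(sorted(set(terms)), 2):
        signature_store[(a, b)] += 1

record_signatures(["aids", "treatment"],
                  ["example.org/antiretrovirals", "example.org/clinical-trials"])
record_signatures(["aids", "vaccine"], ["example.org/clinical-trials"])

# Later, augmentation can rank candidates by how often they co-occurred with the
# current keywords instead of re-mining massive data sets.
print(signature_store.most_common(3))
```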
• In accordance with one embodiment, the content presented to the search engine can be either parsed from the HTML or another format or interface produced by a data provider, or it can be scanned through OCR if the data format is encrypted. This ability to take a snapshot of a screen and analyze and leverage its data and relationships empowers and simplifies the augmentation and analytics processes and improves throughput, since the signatures/correlation metrics extracted are the result of processing a significantly smaller set of data. Therefore, the performance gain of a system for generating and presenting multilevel context sensitive augmented content is orders of magnitude greater compared to mining massive data sets in the cloud.
• In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content presents the augmented content along with the reference content using two or more different presentation layers displayed on the same display screen. In addition, the system provides the ability to customize the generation of augmented data in situ (in place) while working on original or reference content, where the augmented data can be displayed on see-through presentation layers so as not to obscure the original or reference content and to maximize use of the display screen and/or the displaying area.
• In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content utilizes dynamic updates of displayed augmented content using presentation layers while a user views and manipulates reference content displayed using another presentation layer. It is preferable to use a translucent presentation layer for the augmented content presentation layer that is located on top of the displayed reference content so that the user can easily manipulate or interact with the reference content while simultaneously viewing the dynamically updated augmented content. As can be easily appreciated by a person skilled in the art, displaying relevant augmented data in a separate tab or page would result in a loss of the context relationship and provide a less efficient and less friendly user experience. Similarly, displaying the augmented content on the sidebars is possible as well; however, it consumes screen space and hinders displaying of the reference content. Therefore, the ability to keep the reference content accessible to the user while displaying the augmented data on top of the original content provides a much smoother and more efficient user experience. Furthermore, the user can easily hide, size, move, or display the augmented content without affecting the reference content.
• In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content enables a user to associate any of the augmented content with the reference content or an attribute of the reference content source using one or more types of metadata. The system enables the user to save the associated metadata for future use or sessions. For example, the association of metadata can be accomplished by embedding a link in the text, by associating a link with a text, or by associating any data or metadata with the reference content or any part of the reference content. Moreover, the user has the ability to specify one or more categories as a source or criterion of augmentation. The user can also define association rules that join a group of attributes, categories, and other metrics together to provide a richer input that aids the augmentation system in generating more relevant augmentation content. For example, an enterprise sales projection document can always be augmented with any data source or data documents that generated the projection. The criterion is a category that denotes source sales data and not necessarily the exact data documents; the sales data can be extracted automatically by the augmentation system. Utilizing selected or provided categories of interest, the augmentation system can carry out an updating procedure for any associated data or metadata for any other reference content. Furthermore, the augmented content is displayed using see-through layers so that the user always sees and has access to the original or reference content. The user is able to access, browse, move, select, hide, tap, scroll, or interact with the reference or augmented content while the system dynamically generates and displays updated augmented content using the augmentation presentation layer. It is noted that the user interaction with the reference or augmented content can result in a new reference content that the user wishes to interact with; hence, new augmented content is generated and displayed. The system keeps track of and saves certain information regarding this nested augmentation level. The system provides the user the ability to switch back and forth between various nested augmentation levels as well as to save or share the augmentation filters or settings used for a particular session.
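• For illustration only, the following Python sketch shows a user-defined association rule that maps a reference-content category to a category-level augmentation criterion, in the spirit of the sales-projection example above; the field names and matching logic are assumptions.

```python
# Sketch of a user-defined association rule that joins categories and attributes
# to bias augmentation, per the sales-projection example above. Field names and
# the matching logic are assumptions for illustration.

association_rules = [
    {
        "name": "sales projections need source data",
        "if_reference_has": {"category": "sales projection"},
        "then_augment_with": {"category": "source sales data"},
    },
]

def applicable_augmentation_criteria(reference_metadata, rules):
    criteria = []
    for rule in rules:
        condition = rule["if_reference_has"]
        if all(reference_metadata.get(k) == v for k, v in condition.items()):
            criteria.append(rule["then_augment_with"])
    return criteria

reference_metadata = {"category": "sales projection", "quarter": "Q3"}
print(applicable_augmentation_criteria(reference_metadata, association_rules))
# -> [{'category': 'source sales data'}]: a category-level criterion, not a
#    pointer to exact documents, so the system can locate the data itself.
```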
• In accordance with one embodiment, further enhancement of the user experience is achieved by enabling the user to change the skin (or look) of the user interface (UI) of the augmentation system. For example, the same components of a UI (buttons, options, data) can be displayed on the screen in a variety of ways. Usually, a library of templates and color options can be provided to allow the user to customize the augmented content presented by the application. In addition, the global augmented content and local augmented content can be displayed using one or more different regions of the screen, or the global and local links to the augmented content can be displayed in two concentric circles around the reference content. The enhancement of the user experience is achieved by enabling the user to choose the most efficient way for that user to utilize the augmented content.
• In accordance with one embodiment, user selectable skins can also be used to cover or hide pushed content that may exist in or be embedded in the reference content being viewed. User selectable areas of a skin can be used to enable the display of user selected content such as images or augmented content, or pushed content such as advertisements. For example, an ad for tickets to a local concert when the user is browsing a specific artist, or an ad for a book that relates to a global or local augmented content of the user's reference or currently viewed augmented content, or any other monetization mechanism based on the augmentation process. The enhancement of the user experience includes a nested multilevel context sensitive augmented content where the augmented content presented to the user can be further enhanced as a function of the various nested levels. The augmented content is presented while keeping track of the current content being viewed in relationship to the original content that the user started with and all levels in between. This provides a hierarchical augmentation system that enables the user to access and build nested levels of augmentation.
• In accordance with one embodiment, the user interface, or UI, for a system for generating and presenting multilevel context sensitive augmented content can be launched or started automatically and stays hidden from view until the user invokes a predefined programming function to enable the UI functionality. For example, a single tap, hot-key, function-key, a gesture, or multiple or a combination of actions acted upon content would cause the transparent augmentation layer to be shown with the augmented content and in accordance with user preferences, such as tags, skins, themes, etc. Selecting content presents or updates the augmented content already presented. Visiting an augmentation link results in completely or partially (split screen) covering the reference content or original layer comprising the original content. The UI provides the user the ability to navigate nested augmented content or jump back to the reference or original content.
• In accordance with one embodiment, additional UI features can further be used to increase the overall efficiency and provide a better user experience. For example, the system can save the augmented content metrics in the user history and use that history to enhance and/or tailor analytics and augmentation so that they are more relevant to each individual user or group of users, such as in a corporate environment. Metrics here refer to the generated signatures mentioned above, as well as to any annotations that are provided by the user, such as priority, or liking/promoting an augmented content or dismissing it. These can be stored for future sessions, and the augmented content promotion and dismissal can also be used to enhance augmentation in real time. Skins can be used to cover an undesirable part of the screen, e.g. side columns where ads are pushed. The skin may be used for further customization of the viewed screen and could potentially be monetized and leveraged to present relevant augmented content that is paid for by the user, such as ads for objects, e.g. books, related to the content of a reference article.
  • In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content provides dynamic user-guided and customized context-sensitive data augmentation to facilitate learning, exploration and knowledge discovery. The system provides simultaneous interaction with the augmentation layer and the content layer. The system generates augmentation data based on user-defined metrics and filters such as themes, categories of interest, document content and/or part of it. The generated data is not a rigid augmented content. The generated augmented content is any data, concept, and relationships that are presented as a result of the data mining and processing of the original content and the user-defined metrics and filters.
  • In accordance with one embodiment, the system utilizes dynamic and interactive methods to successively refine and tailor the augmented content based on a user's guidelines, filters, and metrics. The system relies on a variety of sources for content augmentation by accessing any online or offline databases, crowd-sourced databases, or open databases. Furthermore, over time, a custom built graph of concepts and relationships can be built between different pieces of data as they are processed and augmented based on the user's filters and metrics to improve the performance of the system and the User Experience. The system provides a context-sensitive hierarchical augmentation framework for deeper and expansive exploration and knowledge discovery. The system enables construction of a customized graph of data, concepts, and relationships based on the filters and metrics provided even in the absence of content. Content can be generated on the fly for further exploration.
  • In accordance with one embodiment, the system enables sharing of augmented data and the associated metrics that generated them. This enables richer knowledge discovery by further refining a user's augmented data based on other users' augmented content. This is useful for collaborative research and knowledge discovery. The system can be launched from offline and online documents or reference content to generate the augmentation content, data and graph of relationships amongst the concepts represented by the augmented content.
• In accordance with one embodiment, the system provides a UI to display and manipulate reference content and augmented content concurrently, dynamically, and interactively. The system provides one or more translucent layers on top of the reference content to show the augmented content. Translucent layers facilitate displaying the reference content as well as the augmented content. Translucent layers can fully or partially cover the original content. Augmentation layers can be hidden, minimized (shown as an icon), or moved around on the display screen to facilitate easier display of and interaction with the reference content. The system enables the user to manipulate and control a set of display layers (reference content layer and/or augmentation display layers) in a very flexible fashion such that the user can size up, size down, move, show, or hide any of those display layers. The system provides an intuitive, rich, and friendly UX for data exploration and knowledge discovery on small and large display screens. In particular, displaying the augmented content concurrently and interactively on the original content empowers the user to use this system on smart phones, tablets, and any other display. Furthermore, the system provides means to insert additional content on the augmentation layers based on analytics on the augmented content and the original content.
• The knowledge discovery system serves to augment, clarify, enrich, and expand on a relevant topic or topics in a document. A number of information processing techniques are carried out to disambiguate information and extract names, concepts, events, and other relevant meta-data using Name-Entity-Recognition (NER), and topic modeling is used to discover topics related to the reference document. Such topics can be either explicitly mentioned or discovered by relying on techniques based on information processing, data mining, and machine learning to process discovery patterns, the causality graph, and other web and data repositories.
• Name Entity Resolution/Recognition (NER) processes the document, disambiguates names and concepts, and extracts names, concepts, dates, name phrases, and any other data that can be parsed, processed, or inferred. The latest information extraction, data mining, and natural language processing techniques and algorithms can be used in this step.
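• For illustration only, the following Python sketch shows a minimal gazetteer-style entity extractor standing in for the NER step; a real system would use statistical or neural NER models, and the entity lists and date pattern here are assumptions.

```python
import re

# Minimal gazetteer-style sketch of the NER step described above. A real system
# would use statistical or neural NER models; the entity lists here are
# illustrative assumptions.

GAZETTEER = {
    "ORG": {"World Health Organization", "Acme Corp"},
    "PERSON": {"Jane Doe"},
}
DATE_PATTERN = re.compile(r"\b(19|20)\d{2}\b")

def extract_entities(text):
    entities = []
    for label, names in GAZETTEER.items():
        for name in names:
            if name in text:
                entities.append((name, label))
    entities += [(m.group(0), "DATE") for m in DATE_PATTERN.finditer(text)]
    return entities

doc = "In 2021 the World Health Organization cited a report by Jane Doe."
print(extract_entities(doc))
```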
• Topic Modeling and Topic Graph: Mine data from stored knowledge graphs, causality graphs, or other repositories to extract and cluster topics and categories from the mined names, concepts, and other processed data. The latest research in data clustering, topic extraction, inference, modeling, and latent topic discovery can be used to build a topic graph. Topics and clusters are interchangeable in this graph. A cluster is a set of related data that share a set of common features and relationships; one of those features or relationships can be a theme. A topic is a cluster of documents that share a common theme. Not all relevant topics can be discovered by the topic extraction step; more topics (relevant and possibly hidden) can be extracted with the aid of the discovery patterns and causality graphs below.
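• For illustration only, the following Python sketch instantiates the clustering/topic step with off-the-shelf TF-IDF features and k-means from scikit-learn; the disclosed system may use any clustering or latent topic discovery method, so this is just one possible instantiation with example documents.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Sketch of the topic/clustering step using off-the-shelf components (TF-IDF
# features plus k-means); this is only one possible instantiation of the
# clustering and topic modeling described above.

documents = [
    "New antiretroviral therapy shows promise in clinical trials",
    "Vaccine research funding increases for infectious diseases",
    "Stock markets rally as tech earnings beat expectations",
    "Central bank signals interest rate cut amid slowing inflation",
]

vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(documents)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
for doc, cluster in zip(documents, kmeans.labels_):
    print(cluster, doc)   # each cluster is a candidate topic sharing a theme
```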
• Hierarchical Graph Discovery identifies intermediate topics and themes to discover/expose relationships between related topics and clusters. This graph can be a pre-defined taxonomy, or a hierarchically constructed graph based on different levels of coarse and fine clusters constructed from the available content.
  • Data Clustering is the process of constructing a set of clusters of related documents. The relatedness is defined based on a set of desired features and/or relationships. Topic Modeling above is a form of data clusters where each topic is a cluster that shares a common theme.
• Discovery Patterns (DP) 10100 are templates that aid the discovery system in extracting the relevant knowledge for a topic or a concept, as shown in FIG. 1. For example, for a topic, a DP will query for the relevant properties or relationships that are annotated on the topic or its meta-topic. Furthermore, a DP can help in defining a set of competency questions that are very important to data augmentation and knowledge discovery. DPs define a set of Competency Questions (CQ) that can be extracted from the data/content to extract and discover salient content and relationships. For instance, Type 10111, Industry 10113, Equivalents 10115 and Treats 10117 are examples of discovery patterns for a topic Product 10110. Similarly, Type 10121, News Event 10130 and Actors 10123 are examples of discovery patterns for a topic Disease 10120. Furthermore, Type 10131, Place 10135, Date 10137 and Actors 10133 are examples of discovery patterns for a topic News Event 10130. These discovery patterns can be pre-defined, manually constructed, extracted from other data sites or repositories, or crafted on the fly. They can also be further enhanced and massaged as the system gathers more data. Also, they can be tailored based on the user's specific features and interests.
• Competency Questions (CQ) define a set of queries that are very specific to the content at hand. These queries enable very focused knowledge discovery. These CQs are domain dependent; for example, the set of CQs for knowledge discovery over a legal corpus is different from the set of CQs for knowledge discovery over a medical corpus. In addition to leveraging pre-defined CQs modeled in pre-defined DPs, our system enables the user to provide custom-defined DPs and their associated CQs, and it can automatically infer a set of CQs and dynamically construct a set of DPs based on the available content and features.
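• For illustration only, the following Python sketch encodes a small discovery pattern library following the slots of FIG. 1 and instantiates competency questions from it; the question templates and the example topic are assumptions.

```python
# Sketch of a discovery pattern (DP) library and the competency questions (CQ)
# it yields for a recognized topic. The slot names follow FIG. 1 (Type,
# Industry, Equivalents, Treats, ...); the question templates are assumptions.

DISCOVERY_PATTERNS = {
    "Product": ["Type", "Industry", "Equivalents", "Treats"],
    "Disease": ["Type", "News Event", "Actors"],
    "News Event": ["Type", "Place", "Date", "Actors"],
}

def competency_questions(topic, topic_type):
    """Instantiate domain-dependent CQs from the DP slots for this topic type."""
    slots = DISCOVERY_PATTERNS.get(topic_type, [])
    return [f"What is the {slot.lower()} of {topic}?" for slot in slots]

for question in competency_questions("aspirin", "Product"):
    print(question)
# -> "What is the type of aspirin?", "What is the industry of aspirin?", ...
```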
  • Ontologies and public repositories provide pre-defined sets of concepts and relationships that can be leveraged in the knowledge discovery process. Wikipedia, Wordnet, Freebase, Verbnet are examples of such repositories that are rich, and constantly updated. Although these ontologies and repositories are bulky, they are rich with relevant content. Our system leverages these repositories amongst other sources to discover rich augmentation content.
• The causality graph (CG) enriches and enhances the knowledge discovery phase. By mining known data (world wide web documents) and accessible repositories (public and possibly private), a large body of knowledge can be modeled. The CG serves to capture the set of relationships that exist between the topics; these relationships can be extracted and modeled in the CG. Also, causality and dates of events can be extracted, inferred, and modeled in the CG. These will serve in discovering more hidden but important topics and relationships that should exist in the topic graph but have not been discovered yet.
• An Abstracted Causality Graph is a graph that is constructed from the causality graph (CG); the CG is abstracted so that similar topics, relationships, causes and effects, and their meta-subjects are captured. This aids in leveraging all this knowledge to augment and enrich new information and knowledge, which is essential for knowledge discovery. An example of an abstracted concept: if company X acquires company Y, it is not important who X and Y are; what is very important is the notion that a company can acquire another company. This way, when the name of a company appears in a new document, the system can automatically ask about any prior or expected acquisitions for the company at hand.
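• For illustration only, the following Python sketch shows concrete cause-effect edges being abstracted to edges between entity types, as in the acquisition example above; the tuple-based representation and the example edges are assumptions.

```python
# Sketch of a causality graph (CG) and its abstraction. Concrete cause-effect
# edges between named entities are generalized to edges between entity types so
# the same relationship can be asked about new companies later. The tuple-based
# representation is an illustrative assumption.

concrete_edges = [
    # (cause actor, relation, effect actor, actor types)
    ("Acme Corp", "acquires", "Widget Inc", ("Company", "Company")),
    ("Flood in region X", "causes", "Crop shortage", ("Natural Event", "Economic Event")),
]

def abstract_causality_graph(edges):
    """Keep only the relationship between actor *types*, dropping the names."""
    return {(cause_type, relation, effect_type)
            for _, relation, _, (cause_type, effect_type) in edges}

abstracted = abstract_causality_graph(concrete_edges)
print(abstracted)
# When a new document mentions a company, the abstracted edge
# ('Company', 'acquires', 'Company') prompts the system to ask about prior or
# expected acquisitions for that company.
```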
  • Data Augmentation as was discussed in a previous disclosure refers to local and global data augmentation and knowledge that are extracted and presented to the user to further expand on the document at hand. This data is based on all the knowledge modeled in the discovery graph (topics, clusters, and relationships), causality graph, discovery patterns library, and other on the fly information extraction. This augmentation will facilitate Timeline Events related to both local and global augmentation data and will provide a rich knowledge discovery experience. The user can browse in time to discover relevant knowledge about the topic or topics at hand.
• A block diagram for an Augmentation System 11500 is shown in FIG. 2. The goal of this system is to read, synthesize, and/or extract a set of competency questions that will enable smarter content discovery and augmentation. Competency Questions (CQ) are a set of queries that are very specific, well defined, and rich in features that guide the knowledge discovery process. Box 11510 reads and processes a set of 'competency questions' from the user. Box 11520 synthesizes CQs based on automated template generation that defines and fills the relevant competency questions for the content at hand; synthesis is based on extracted relationships of topics in the CG, topical graphs, or other synthesized relationships. Box 11530 extracts CQs based on content discovered by processing the relevant topics and entities. Box 11540 compiles and outputs the constructed set of CQs as produced by any or all of boxes 11510, 11520, and 11530.
• A block diagram for Causality and Augmentation System 11600 is shown in FIG. 3. The goal of this system is to construct a causality graph that captures the cause-effect relationships between different mined or discovered topics in the system. This causality relationship extraction adds another dimension to the knowledge discovery system. Box 11610 defines a set of prior topics that are relevant to the content. Box 11620 defines a reference topic to be augmented. Box 11630 defines a set of topics that are caused by the prior topics and related to the current reference topic. Box 11640 defines the set of actors that are at play in the causal relationships; these actors can be named entities such as persons, locations, organizations, or groups. Box 11650 defines a set of topics and their related categories so that the correct causal relationship is used should there be more than one relationship. Box 11660 defines a set of Discovery Patterns (DP) that will enable the system to extract the right meta data and annotations when discovering the causal relationships. Box 11680 defines a set of user-provided features that will aid in this discovery process. Box 11670 defines the causality graph that is the result of processing all the input defined in the previously mentioned boxes. Box 11690 is the set of causal relationships and relevant content to be added to the augmented content.
• A block diagram of a Causality Graph Synthesis 11700 is shown in FIG. 4. The goal of this system is to build and abstract the causality graph such that it is applicable to a different set of actors and entities that share the same set of relationships defined in the graph. Box 11710 defines a set of topics and relevant content that can be mined from any source, public or private; these constitute the nodes in the causality graph. This system builds and infers edges between those nodes based on a set of rules, heuristics, discovered relationships, or pre-defined relationships. Box 11720 extracts relationships between the presented entities. Box 11730 extracts relationships between the presented topics and their corresponding categories, and Box 11740 extracts instances of causalities based on the presented content itself. Box 11760 checks the existing causality graph for the discovered or inferred edges; if they are not already present, they are added to the causality graph (CG). Box 11750 processes the updated causality graph (CG), infers abstracted relationships, and adds them to the graph so that the CG becomes more abstract and applicable to future instances of relevant topics and entities. Box 11770 is the output of this system, which presents a rich and abstract causality graph.
• A block diagram of the Augmentation System with the causality graph 11900 is shown in FIG. 5. This block diagram shows an overview of the whole augmentation system operation. Block 11901 shows the document that needs to be augmented. Box 11905 shows the features that were extracted from this document as signatures to aid in finding relevant augmentation. Box 11910 defines the named-entity-recognition system that extracts the salient entities in the system. Box 11915 presents the set of entities extracted. Box 11920 presents other features or properties, such as dates, that will further aid in augmentation. Topic Model 11925 includes a Feature Set Priority Engine 11930 and a Topic Modeling 11935. Box 11930 shows a ranking engine for the presented features so that noisy or less salient features are pruned out to further aid in higher quality augmentation content. Clustering and topic modeling are executed on this relevant content in Box 11935. Box 11940 presents the set of relevant clusters and topics that are constructed. Further content augmentation is carried out by leveraging a library of pre-defined (Box 11960), dynamically synthesized (Box 11950, Box 11955), or user-provided (Box 11965) discovery patterns. Box 11955 defines a mapping between a discovery pattern in the library and a synthesized relationship based on the presented features. Box 11970 presents the resultant set of discovery patterns. Box 11975 processes those patterns by examining the causality graph (CG) to see if such relationships exist or are defined. Further augmentation content can be added to the causality graph by processing relevant documents in public or private repositories (Box 11985). Box 11990 presents a new set of entities and topics from the freshly mined content. Box 11995 extracts a timeline from the freshly mined content so that the right part of the causality graph is updated; this data is further used to extract a relevant timeline and to process the dates so that the relevant topics are linked together (Box 11995). The data in Boxes 11990 and 11995 are further utilized to infer and extract more knowledge from the Causality Graph in Box 11100. Box 11100 presents the new augmentation content that will be added to the causality graph. At the end of this process, a rich set of local and global augmentation, along with a knowledge graph with a timeline that connects the different topics (local and global) and the mined relationships and properties, will be available.
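• For illustration only, the following Python sketch chains stand-in helpers that mirror the flow of FIG. 5 (feature signatures, NER, topic modeling, discovery patterns, causality graph lookup); every function here is a simplified placeholder introduced for this sketch, not the disclosed implementation of the corresponding boxes.

```python
# High-level sketch of the flow in FIG. 5, chaining the steps described above.
# Each helper stands in for a box in the diagram; the function names and the
# simple data passed between them are assumptions for illustration only.

def augment_document(document_text, user_features=None):
    features = extract_feature_signatures(document_text)          # Box 11905
    entities = run_ner(document_text)                             # Boxes 11910/11915
    topics = model_topics(features, entities)                     # Boxes 11930/11935/11940
    patterns = select_discovery_patterns(topics, user_features)   # Boxes 11950-11970
    new_knowledge = query_causality_graph(patterns, topics)       # Boxes 11975-11100
    return {"local": new_knowledge.get("local", []),
            "global": new_knowledge.get("global", [])}

# Stand-in implementations so the sketch runs end to end.
def extract_feature_signatures(text): return set(text.lower().split())
def run_ner(text): return [w for w in text.split() if w.istitle()]
def model_topics(features, entities): return sorted(set(entities))[:3]
def select_discovery_patterns(topics, user_features): return [("Type", t) for t in topics]
def query_causality_graph(patterns, topics):
    return {"global": [f"{slot} of {t}" for slot, t in patterns], "local": []}

print(augment_document("Acme Corp Acquires Widget Inc In 2021"))
```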
• A block diagram of an Augmentation System 12100 is shown in FIG. 6. A Reference Content 12105 corresponds to any electronic document or web page for which a user wants to invoke the Augmentation System 12100 to get Augmented Content 12190. The Reference Content 12105 can be stored locally in a memory subsystem of an electronic device or a memory subsystem of a display screen device, or it is accessed from a remote location via a wired or wireless communication system. The communication system could use the internet, a cloud, a data store, a computing device, a server, or a database via a wired or wireless networking link. The augmented content is content generated by the Augmentation System 12100 based on the Reference Content 12105 using a set of features, filters, and categories which are produced by at least one of an Extract Features 12120, Extract Categories 12125, and Update Categories 12137 subsystem as shown in FIG. 6.
  • A Local Content 12110 is a selected portion of the Reference Content 12105 which the user wishes to get more specific augmentation about, or that is a portion of the Reference Content 12105 that the user is interacting with. Furthermore, the Local Content 12110 may also be automatically selected, tagged, managed, or generated by the Augmentation System 12100, e.g. based on a displayed portion of the Reference Content 12105 or a user interaction with a portion of the Reference Content 12105. Furthermore, the presentation and/or the displaying of the Augmented Content 12190 is managed using Manage RAC 12145 (RAC refers to Relevant Augmented Content) to control a Display Queue 12165 and Display RAC 12170.
  • The Augmentation System 12100 generates Augmented Content 12190 by facilitating the construction of a user-customized network of concepts, objects and relationships that serve to augment the Reference Content 12105 at hand for the purpose of knowledge discovery, learning, and a richer user experience in browsing and/or interacting with data information. This Augmentation System 12100 generates any one of a network of concepts, a network of objects, and a network of relationships using one or more of a set of features, a set of filters, and a set of categories. Each of the set of features, the set of filters, and the set of categories can be customized and tailored based on the user's interests and input. The constructed network can be saved and further augmented over time for richer and more efficient user experience.
• The Extract Features 12120 subsystem extracts a set of features from the Reference Content 12105. General Features 12117 can provide a set of features that can be updated and tailored over time to at least one of a specific user, specific project, specific objective, and specific subject. Extract Features 12120 generates a set of filters that denotes the desired concepts for augmentation. For example, these concepts could be names of people, history, events, topics, or other meta-data. These data are either computed on the fly or pre-computed and stored locally or remotely for current or subsequent augmentation sessions. This extraction process is based on embedded data in at least one of the Reference Content 12105, content linked to the Reference Content 12105, metadata of the Reference Content 12105 (e.g. a title of the Reference Content 12105), content linked to the Local Content 12110, and semantic information that is either associated with the Reference Content 12105 or that can be extracted/aggregated from the Reference Content 12105. Other data that can be extracted or inferred can be further used for constructing a more meaningful feature set by utilizing a variety of information retrieval, extraction, and inference algorithms and methods. There is a large body of work on feature extraction that utilizes the cloud as well as other large-scale solutions. These approaches can be leveraged by the Extract Features 12120 along with flexible and efficient algorithms to generate a feature set on the fly based on the metrics and signatures mentioned earlier. Furthermore, any part of the Augmentation System 12100 can be run remotely on a server or in the cloud, or it can be run locally on the host device.
• The Extract Categories 12125 function uses a set of categories or topics that are extracted based on data that can be associated with or extracted from the Reference Content 12105. This data can be either meta-data or any other data related to the Reference Content 12105. The Extract Categories 12125 extracts a set of categories from the Reference Content 12105 and its associated links and data. Also, the system utilizes any embedded categories or meta-data that are either embedded in the link or attached to the Reference Content 12105. The extracted categories can also describe meta-data about the topic at hand. For example, if the reference content is an article about AIDS, there are many categories that can augment data about AIDS, such as: History of AIDS, Science of AIDS, Social Impact of AIDS, Symptoms of AIDS, etc. A user may only be interested in the science of AIDS, so the user will interact with the presented categories, e.g. by deselecting all categories that are not related to science, and this will impact the set of features that are used in augmenting the Reference Content 12105. Other data that can be extracted or inferred can be further used for constructing a more meaningful category set by utilizing a variety of information retrieval, extraction, and inference algorithms and methods. In addition, General Categories 12115, as shown in FIG. 6, is a set of default categories that the Update Categories 12137 processes to reflect the user's interests. For example, the General Categories 12115 can be Business, Politics, Education, Research, Health, Technology, etc. The Update Categories 12137 may use this optional input from the user to bias the augmentation toward the categories of interest. This optional input can be stored and updated over time.
• The interaction of a user with the Augmented Content 12190 may be accomplished in a variety of ways. For example, the user may select one or more of the presented categories for removal, selection, decreasing priority, or increasing priority. The user may also define, modify, or interact with an association rule to aid Extract Features 12120 in generating a more useful set of filters for better augmented content. The association rule can leverage, use, or join one or more categories, features, filters, or concepts to (i) generate a new set of features, filters, categories, or Augmented Content 12190, and (ii) modify one or more of the set of features, filters, or categories which are being used to generate the Augmented Content 12190. Based on the General Categories 12115 and the user's interaction, further categorization and feature extraction will be biased towards the user's interaction or input. This is an optional input that is used to customize the Augmented Content 12190 based on a user's needs or the user's interaction with the Augmented Content 12190, or to aid the Augmentation System 12100 in providing more relevant Augmented Content 12190 for a specific purpose. Upon a user's interaction with the Augmented Content 12190, an Update 12130 function enables the user's input to be considered by Update Categories 12137; e.g. a user may choose to delete some of the default/general categories that are not of interest or to elevate the priorities of some of those categories. When categories are deleted, the Update Categories 12137 will reduce the weight of the features that are related to those categories. When categories are elevated in priority, the Update Categories 12137 increases the weight given to those features that are related to those categories, thus affecting and updating the Augmented Content 12190 presented to the user.
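• For illustration only, the following Python sketch mirrors the Update Categories behavior described above, lowering feature weights for deleted categories and raising them for elevated ones; the category-to-feature mapping and scaling factors are assumptions.

```python
# Sketch of the Update Categories behavior described above: deleting a category
# lowers the weight of features tied to it, while elevating a category raises
# them. The mapping and the scaling factors are illustrative assumptions.

feature_weights = {"virology": 1.0, "stock prices": 1.0, "epidemiology": 1.0}
category_to_features = {
    "Health": ["virology", "epidemiology"],
    "Business": ["stock prices"],
}

def update_categories(deleted=(), elevated=(), down=0.5, up=1.5):
    for category in deleted:
        for feature in category_to_features.get(category, []):
            feature_weights[feature] *= down
    for category in elevated:
        for feature in category_to_features.get(category, []):
            feature_weights[feature] *= up

# The user deletes "Business" and elevates "Health".
update_categories(deleted=["Business"], elevated=["Health"])
print(feature_weights)   # {'virology': 1.5, 'stock prices': 0.5, 'epidemiology': 1.5}
```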
  • An Update Filters 12150 is used to indicate a user's preference for a feature or to capture automatic feedback based on the user's interaction with the Augmented Content 12190. For example, when one or more of the Reference Content 12105, Local Content 12110, and Augmented Content 12190 is updated or interacted with by a user, then more clues and feedback can be gathered from the updated list or the user's interaction so as to revise the features and categories that are of interest to the user in real time. However, the user may choose not to update the features and categories, and the Augmentation System 12100 provides the user the ability to control how and when the Augmented Content 12190 is generated and/or updated.
  • An Update Features & Categories 12135 subsystem receives a first set of features from the Extract Features 12120 subsystem, a first set of categories from the Extract Categories 12125, and/or an updated set of categories from the Update Categories 12137, and/or an Update Filters 12150. Update Features & Categories 12135 manages and controls the updating of the actual features and categories sets including any decision making based on the user input or interaction. The Update Features & Categories 12135 may communicate with any one of Extract Features 12120, Extract Categories 12125, and Update Categories 12137 to generate more features and categories based on a variety of parameters including the user's preferences. Furthermore, Update Features & Categories 12135 also handles updating relationships and cleaning up for those features and categories that were updated by the user.
  • A Compile RAC 12140 subsystem receives a set of categories and a set of features from the Update Features & Categories 12135 subsystem. Compile RAC 12140 includes a variety of functions and algorithms, such as machine-learning, data mining and extraction, web crawling, data-mart accessing, extraction and processing functions, and other intelligent algorithms and approaches, that are used to compile a set of relevant augmented content or pages (RACs) based on at least one of the Reference Content 12105, Local Content 12110, and the interest of the user. The Managed RAC 12145 subsystem is the controller that manages the presentation of the Augmented Content 12190 via a Display Queue 12165 and Display RAC 12170. The Augmentation System 12100 listens to inputs from the user and manages the generation of the Augmented Content 12190. The Managed RAC 12145 subsystem generates three outputs taking into consideration a user's feedback or input. The Managed RAC 12145 subsystem generates and controls the communication of the generated Augmented Content 12190 using Display Queue 12165 and Display RAC 12170. In addition, Managed RAC 12145 generates an update request to Update RAC 12155 for any necessary update to the Display Queue 12165 based on a user's interaction or input. The Display Queue 12165 displays, in a desired skin, at least a portion of the queue of RACs so that the user can browse through them and select some to view. The Display Queue 12165 displays a link, a summary, or a portion of the compiled relevant content or pages. Upon selection or interaction by a user with one of the displayed RACs, the Display RAC 12170 retrieves the respective relevant page RAC and displays at least a portion of it. The Display RAC 12170 subsystem manages and controls the displaying of the Augmented Content 12190 using the display screen. Display RAC 12170 can use one or more display layers on top of the Reference Content 12105 or Local Content 12110 via translucent display layers as discussed in previous paragraphs.
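A possible sketch of the compile/manage/display-queue interplay described above is given below: candidate content is scored against feature weights, sorted into a deep queue, and only a display-sized window is exposed. The RAC and RACQueue classes, the scoring rule, and the sample data are assumptions for illustration, not a definitive implementation.

```python
# Illustrative sketch: compile candidate augmented content into a prioritized
# queue of RACs and expose only the portion that fits on screen.

from dataclasses import dataclass, field

@dataclass
class RAC:
    title: str
    url: str
    score: float = 0.0

@dataclass
class RACQueue:
    items: list = field(default_factory=list)

    def compile(self, candidates, feature_weights):
        # Score each candidate by the summed weight of the features it mentions.
        for cand in candidates:
            score = sum(w for f, w in feature_weights.items() if f in cand["text"].lower())
            self.items.append(RAC(cand["title"], cand["url"], score))
        self.items.sort(key=lambda r: r.score, reverse=True)

    def display_window(self, size=3):
        # Only part of the deep queue is shown; the rest stays queued.
        return [(r.title, r.url) for r in self.items[:size]]

queue = RACQueue()
queue.compile(
    [{"title": "HIV treatment advances", "url": "https://example.org/a",
      "text": "New antiretroviral therapy results..."},
     {"title": "AIDS awareness campaign", "url": "https://example.org/b",
      "text": "Community stigma outreach..."}],
    {"antiretroviral": 2.0, "stigma": 0.25})
print(queue.display_window())
```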
  • A block diagram of a Hierarchical Augmentation System 12200 is shown in FIG. 7. This hierarchical augmentation or nested augmentation capability enables a user to augment any content that is the result of data augmentation at any level of browsing or exploration. For example, given that the Augmentation System 12210 generates a list of RACs, the user may select any one of the RACs or a group of RACs to invoke the augmentation system on and to generate another level of augmentation. The Augmentation System 12200 allows the user to go back and forth in the hierarchical graph to browse any particular content at any level, be it a reference or augmented content. For example, when the Augmentation System 12200 provides augmented content at Process 1 12220, which is the first invocation of the augmentation system on reference content, a user may elect to augment one or more of the augmented content of Process 1 12220. The Augmentation System 12200 uses the elected content to be augmented from Process 1 12220 as an input or reference content to Process 2 12230 for augmentation. Process 2 12230, which is considered the second invocation of the augmentation system on a reference content, generates in turn augmented content which the user can further refine or interact with, and so on for Process K 12240, Process (n−1) 12250, and Process (n) 12260. Multilevel nesting or hierarchical augmentation is not limited to a specific number of levels. Of course, certain hardware or software limitations or a particular application may dictate the use of a specific number of levels. However, this is an option that can be used to various extents as part of the customization of Augmentation System 12200 for any particular usage.
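The nested augmentation idea can be sketched as a simple recursion in which any augmented item may become the reference content of the next level; the augment() placeholder below stands in for a full augmentation-system invocation and is purely illustrative.

```python
# Illustrative sketch: nested (hierarchical) augmentation, where an elected
# piece of augmented content becomes the reference content for the next level.

def augment(reference_content, level=1, max_levels=3):
    """Return a tree of augmentation results, one node per nesting level."""
    # Placeholder: a real system would compile RACs from the web or data stores.
    racs = [f"{reference_content} :: related item {i}" for i in range(1, 3)]
    node = {"level": level, "reference": reference_content, "racs": racs, "children": []}
    if level < max_levels:
        # The user may elect any RAC as the reference content of the next level.
        elected = racs[0]
        node["children"].append(augment(elected, level + 1, max_levels))
    return node

tree = augment("Article about AIDS research")
print(tree["children"][0]["reference"])   # second-level reference content
```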
  • A block diagram of a Hierarchical Augmentation System 12300 is shown in FIG. 8. This hierarchical augmentation or nested augmentation capability comprises the same capabilities as the Hierarchical Augmentation System 12200 shown in FIG. 7 and includes an Augmentation System Control 12390 subsystem that communicates Augmented and Reference contents AR-12325, AR-12335, AR-12345, AR-12355, and AR-12365 with Process 1 12320, Process 2 12330, Process K 12340, Process (n−1) 12350, and Process (n) 12360, respectively. As described above, Process 1 12320 corresponds to a first level instance of Augmentation System 12100, and Process (n) corresponds to an n-th level instance of Augmentation System 12100. Given that each of the nested augmentation systems may generate different augmentation content for each hierarchical level, at least due to variations in user input or the reference content corresponding to the hierarchical level, the Augmentation System Control 12390 may receive one or more of the generated augmented content of each hierarchical level, a copy of the set of filters, a copy of the set of features, and a copy of the set of categories. The Augmentation System Control 12390 can further run sophisticated statistics, analytics, and algorithms to extract new features or generate new filters or categories. Furthermore, the Augmentation System Control 12390 may receive user input to control what type of analysis or augmentation the user expects the Hierarchical Augmentation System 12300 to provide, or to keep track of nested contents that the user is interacting with, viewing, or manipulating at various levels of hierarchy.
  • A block diagram of an Augmentation System 12400 using a Display Control 12420 subsystem is shown in FIG. 9. The Augmentation System 12410 is essentially the same as any one of the Augmentation System 12100, Augmentation System 12200 and Augmentation System 12300 as shown in FIG. 6, FIG. 7, and FIG. 8 respectively. The Display Control 12420 subsystem controls the displaying of various elements such as Augmented Content 12450 and Reference Content 12440, which are output of the Augmentation System 12410. In addition, the Display Control 12420 receives input control from Augmentation Display 12430 subsystem and/or from a user interacting with the Augmentation Display 12430 or one or more display layers displayed using the Augmentation Display 12430. Based on the Augmented Content 12450 generated from Augmentation System 12410, Display Control 12420 generates and/or controls different display layers, widgets, icons, and other knobs which are utilized to show, control, or manipulate any one of the Augmented Content 12450 and Reference Content 12440. Furthermore, Display Control 12420 provides means for the user to interact with any one of the Reference Content 12440 or the Augmented Content 12450.
  • In accordance with one embodiment, the Augmentation System 12410, the Display Control 12420, and Augmentation Display 12430 are elements of the same physical electronic system such as a mobile device. The user can manipulate any one of the Augmented Content 12450, Reference Content 12440, and how each is displayed onto the Augmentation Display 12430. In addition, a user interface (UI) may be used to further aid the user to manipulate or interact with any one of the Reference Content 12440 and the Augmented Content 12450 and the displaying of such content.
  • Furthermore, the UI can provide an easy mechanism for a user to interact with the categories, widgets, buttons, and any other option that is presented for the user to engage with the Augmentation System 12410.
  • In accordance with one embodiment, the Augmentation System 12410, and the Display Control 12420 are elements of a first electronic device that is separate from a second electronic device comprising the Augmentation Display 12430, wherein the first and second electronic devices communicate the Reference Content 12440 and the Augmented Content 12450 back and forth based on the Augmentation System 12410 and/or a user interaction with any one of Reference Content 12440 and Augmented Content 12450.
  • In accordance with one embodiment, the Augmentation Display 12430, and the Display Control 12420 are elements of a first electronic device that is separate from a second electronic device comprising the Augmentation System 12410, wherein the first and second electronic devices communicate the Reference Content 12440 and the Augmented Content 12450 back and forth based on the Augmentation System 12410 and/or a user interaction with any one of Reference Content 12440 and Augmented Content 12450.
  • In accordance with one embodiment, a system for extraction and generation of features and categories, Extract Relevant Features 12500, is shown in FIG. 10. The Extract Relevant Features 12500 is tasked with building a set of features and categories that any one of the Augmentation System 12100, Augmentation System 12200, Augmentation System 12300, and Augmentation System 12400 can utilize to generate augmented content. Reference Content 12510 is similar to Reference Content 12105, and Local Content 12520 is similar to Local Content 12110. Categories 12530 is a subsystem which is responsible for constructing a list of categories that captures or is responsive to the user's inputs and preferences, a set of extracted categories from Reference Content 12510 and Local Content 12520, and a set of customized categories associated with the user. Features and Metrics 12525 is a subsystem which generates a set of features, a set of signatures, and/or a set of metrics, each of which is either dynamically generated or pre-computed and stored. Features and Metrics 12525 delivers these sets of features to an Extract Features 12540 subsystem. In addition, the Extract Features 12540 receives input from Reference Content 12510, Local Content 12520, Features and Metrics 12525, and Categories 12530. Extract Features 12540 delivers a set of features, a set of signatures, and a set of metrics to the Compile RACs 12550 subsystem, which in turn utilizes one or more of those sets to compile a set of data elements from the internet, a local data store, or any other data repository (public or private). A Relevant Augmented Content 12560 subsystem receives the set of data elements and/or the set of features, the set of signatures, and the set of metrics to generate customized augmented content for the user.
  • In accordance with one embodiment, a simplified block diagram of a system for extraction and generation of features and categories, Extract Relevant Features 12600, is shown in FIG. 11. The Extract Relevant Features 12600 can be used as a part of an augmentation system such as Augmentation System 12100, Augmentation System 12200, Augmentation System 12300, and Augmentation System 12400, each of which has been described above. The Extract Relevant Features 12600 is utilized to compile a set of features, using Compile RAC 12650, to be used by an augmentation system to generate augmented content. Extract Candidate 12618 processes at least a portion of a Reference Content 12608 and receives other user-provided input to extract or generate one or more sets of filters and features. Features 12620 uses the one or more sets of filters and features to organize, build, compile, or store a user-customized network of features, concepts, objects, and their relationships. Features 12620 serves to provide a better or more focused extraction of a user's relevant set of features, which can provide a faster convergence on what the user is interested in seeing or would want to see regarding the Reference Content 12608. In addition, this provides more valuable augmented content for the purpose of knowledge discovery, learning, and a richer user experience in browsing and/or interacting with data information. Furthermore, Features 12620 can learn, save, and further refine the user-customized network of features, concepts, objects, and their relationships over time for a richer and more efficient user experience. Similarly, Categories 12630 uses the one or more sets of filters and features generated by Extract Candidate 12618 to organize, build, compile, or store a user-customized network of categories and their relationships. Categories 12630 can learn, save, and further refine the user-customized network of categories and their relationships over time for a richer and more efficient user experience.
  • Metrics 12640 is a system that can provide user-influenced metrics information to Compile RAC 12650. Metrics 12640 uses the one or more sets of filters and features generated by Extract Candidate 12618 to organize, build, compile, or store a user-customized network of metrics which can be user defined or system defaults. For example, Metrics 12640 can use date or time as a metric that can be used to further narrow and focus the relevance of the augmented content to the user or to the Reference Content 12608. Another example is to use a source or a group of sources to aid Compile RAC 12650 in limiting or expanding its compilation and generation of relevant augmented content. Metrics 12640 can learn, save, and further refine the user-customized network of metrics and their relationships over time for a richer and more efficient user experience. Metrics 12640 can receive real time information from the user or other parts of an augmentation system and provide updates in real time to Compile RAC 12650.
  • Compile RAC 12650 is used to compile the networks of features, categories and metrics received from Features 12620, Categories 12630, and Metrics 12640 to generate and prioritize a focused set of relevant augmented content (RAC) that captures the properties and/or attributes of Reference Content 12608 and reflects the user's rules, interests, preferences, and attributes. This focused set of relevant augmented content (RAC) is to be used by an augmentation system to deliver or present a concise and highly relevant augmented content to the user. Compile RAC 12650 is used to resolve any conflicts that may exist between any of the networks of features, categories and metrics. Compile RAC 12650 also provides and determines the priority of the final list of RACs to be delivered or presented to the user. Compile RAC 12650 can also receive, generate or modify an association rule which can be used to leverage, or join one or more categories, features, filters, concepts, or metrics to (i) generate a new set of features, filters, categories, or relevant augmented content, and (ii) modify one or more of the set of features, filters, or categories which are being used to generate the relevant augmented content.
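One way an association rule might be represented and applied when compiling relevant augmented content is sketched below; the rule format (if_all / then_add) and the sample terms are assumptions chosen for clarity rather than a structure defined by the system.

```python
# Illustrative sketch: an association rule that joins several categories,
# features, or metrics to derive new filters used when compiling RACs.

def apply_association_rule(rule, features, categories, metrics):
    """If every term of the rule is present, emit the rule's derived filters."""
    available = set(features) | set(categories) | set(metrics)
    if all(term in available for term in rule["if_all"]):
        return rule["then_add"]
    return []

rule = {"if_all": ["Science of AIDS", "antiretroviral", "recent"],
        "then_add": ["clinical trials", "drug resistance"]}
new_filters = apply_association_rule(
    rule,
    features=["antiretroviral", "epidemiology"],
    categories=["Science of AIDS"],
    metrics=["recent"])
print(new_filters)   # ['clinical trials', 'drug resistance']
```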
  • In accordance with one embodiment, an augmentation system can use Display Layers and Controls 12700 as shown in FIG. 12 to display the generated Augmented Content 12760, Global Augmented Content Queue 12720, Local Augmented Content Queue 12770, and the Reference Content 12750. For example, this can be one instantiation of the data presentation mechanism of an augmentation system as described above, e.g. Augmentation System 12100. The user can change the look and feel (skin) of the Display Layers and Controls 12700 using any number of skins (look and feel options). The Global Augmented Content Queue 12720 corresponds to a displayed part of the relevant augmented content (RAC) generated by the augmentation system. The user can browse and scroll through this queue to select a relevant augmented content of interest. Local Augmented Content Queue 12770 refers to the relevant augmentation results that are related to the part of Reference Content 12750 that the user has interacted with or that is being displayed via Display Screen 12710, and which is referred to as Local Content. Display Layers and Controls 12700 can manage the display of the Global Augmented Content Queue 12720 and Local Augmented Content Queue 12770 in various ways, such as the location of the display of the queues as well as the portion of any one of the queues that is being displayed using Display Screen 12710. For example, the user can choose that only the Global Augmented Content Queue 12720 is displayed, and thus Display Layers and Controls 12700 will manage to display the portion of RACs of the Global Augmented Content Queue 12720 that may be accommodated on the Display Screen 12710. Similarly, the user may choose to emphasize the Local Augmented Content Queue 12770, and the Display Layers and Controls 12700 will manage that as well. The Reference Content 12750 refers to the content being browsed and explored for further augmentation. Display Screen 12710 corresponds to a display screen that may be physically collocated within the same device where the augmentation system is being used, or it can be part of a separate electronic device. Augmented Content 12760 is displayed using one or more display layers and is the augmentation content that the user chooses to view. An icon Promote 12740 is used to highlight, select, or promote a specific RAC. Promote 12740 provides a mechanism for the user to interact with any of the RACs of Global Augmented Content Queue 12720 and Local Augmented Content Queue 12770 by elevating the priority of a RAC. Similarly, a demote icon (not shown) can be used by the user to remove or dismiss a RAC or a group of RACs entirely if the user is not interested in them.
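The global/local queues and the promote/dismiss interactions could be modeled roughly as follows; the class name, priority values, and item shapes are illustrative assumptions, and a real implementation would also drive the display layers.

```python
# Illustrative sketch: separate global and local deep queues with
# promote/dismiss operations corresponding to the Promote and demote icons.

class AugmentedContentQueues:
    def __init__(self):
        self.global_queue = []   # RACs relevant to the whole reference content
        self.local_queue = []    # RACs relevant to the selected/visible portion

    def promote(self, queue, title, boost=1.0):
        # Elevate the priority of the matching RAC and re-sort the queue.
        for rac in queue:
            if rac["title"] == title:
                rac["priority"] += boost
        queue.sort(key=lambda r: r["priority"], reverse=True)

    def dismiss(self, queue, title):
        # Remove a RAC the user is not interested in.
        queue[:] = [r for r in queue if r["title"] != title]

q = AugmentedContentQueues()
q.global_queue = [{"title": "History of AIDS", "priority": 1.0},
                  {"title": "HIV vaccine trials", "priority": 1.0}]
q.promote(q.global_queue, "HIV vaccine trials")
q.dismiss(q.global_queue, "History of AIDS")
print(q.global_queue)
```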
  • A Simulated Display 12800 is a use case scenario of the Display Layers and Controls 12700 and any one of the augmentation systems described earlier as shown in FIG. 13. This Simulated Display 12800 presents an example of a user reading an article about AIDS as shown in Reference Content 12810. The user then invokes the Augmentation System to augment Reference Content 12810. Based on the categories presented to the user, the user selects categories that are related to AIDS Research and Science. Categories Related to AIDS RACs 12820 shows part of the global augmentation deep queue that the system generated in response to the user's interest in AIDS, Science, and Research.
  • A Simulated Display 12900 is a use case scenario of the Display Layers and Controls 12700 and any one of the augmentation systems described earlier, as shown in FIG. 14. This Simulated Display 12900 presents an example of a user reading an article about AIDS as shown in Reference Content 12910. Selected Content 12920 shows an example of selecting part of the Reference Content 12910. The user then invokes the Augmentation System to augment Reference Content 12910 and Selected Content 12920. Based on the user's choice of categories related to Science and Research of AIDS, and the system's extracted categories and features, RACs 12905 shows part of the global augmentation deep queue that the system generated. Africa/India AIDS RACs 12930 shows part of the local augmentation deep queue that the Augmentation System generates in response to the user's selection of part of Reference Content 12910. Augmented Display Layer 12940 is an example of displaying Africa/India AIDS RACs 12930. Augmented Display Layer 12940 shows a RAC (HIV and AIDS) in the local augmentation queue that the user elected to view.
  • New Work on Augmentation and Knowledge Discovery: This work builds on previous work on data augmentation and knowledge discovery and taps into existing ontologies. It utilizes known, constructed, and synthesized Discovery Patterns to improve the performance of the system for online discovery and mining. General ontologies are bulky and inefficient to mine; it is best that the system mines them only if Discovery Patterns are not available, so as to reduce the search space.
  • Mission: Provide an on-demand friendly and rich mobile knowledge discovery platform that makes context-sensitive remote and hidden relevant knowledge and information accessible and useful. We will seamlessly and intuitively bring knowledge to everyone.
  • Knowledge Discovery: The Next Revolution. There is ongoing progress in helping users find information relevant to their immediate goals by improving search and document classification, yet a huge gap remains between the way most systems organize information and the way humans wish to access that information. Search views information as sequences of words or numbers with no deep interrelationships, while humans care about the meaning conveyed by words. Humans explore ideas and concepts, while automated systems are limited to searching for words.
  • Knowledge Discovery Framework; Knowledge Discovery in Web Content; Relevant Topics Mining; Relevant Latent Topics Discovered; Relevant Queries Mining; Hierarchical Topics Graph, including discovery of intermediate topics to discover/expose relationships between related topics; Causality Graph Construction and Mining. New figures are added to utilize Discovery Patterns, synthesize a Causality Graph, and mine the Causality Graph and other available repositories for a richer knowledge discovery experience. Not everything is available in the causality graph, and further online mining for more augmentation data might be needed.
  • Name Entity Relationship: Named Entity Recognition (NER) is key to accurate content extraction for knowledge discovery. Personal names, places, dates, organizations, groups, parties, and other named entities (NEs) are used to characterize topics in a document; Name Disambiguation; Name phrase parsing, compound names, . . . ; Known concepts, events, . . . .
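A brief sketch of NER-based topic characterization is shown below, using the open-source spaCy library as one possible backend (any comparable NER component could be substituted); the sample sentence and the grouping of entities by type are illustrative only, not the system's prescribed pipeline.

```python
# Illustrative sketch: extract named entities (people, places, organizations,
# dates) to characterize the topics of a document. Uses spaCy as an example
# NER backend. Requires: pip install spacy && python -m spacy download en_core_web_sm

import spacy

nlp = spacy.load("en_core_web_sm")
text = ("In 1996, researchers at the NIH reported that combination antiretroviral "
        "therapy dramatically reduced AIDS mortality in the United States.")
doc = nlp(text)

# Group recognized entities by type; these can seed topic characterization,
# name disambiguation, and Discovery Pattern slots.
entities = {}
for ent in doc.ents:
    entities.setdefault(ent.label_, []).append(ent.text)
print(entities)   # e.g. {'DATE': ['1996'], 'ORG': ['NIH'], 'GPE': ['the United States']}
```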
  • Possible Product & Service Offerings: a Knowledge Discovery platform that aids in any product or service where data augmentation and knowledge discovery are desired or suitable. Examples: News discovery (Political, Business, Historical, Science, . . . ); Browsing & research; Financial Data Discovery (company profile, competitive assessment, etc.); eHealth Discovery, where, by leveraging a health-related DP, medication information, patient case analysis, prognosis, and other related data can be discovered and displayed.
  • Discovery Patterns (DP): Pre-defined DP; Custom-tailored DP; On-the-fly synthesis of DP; Mapping info extracted from a document/page to a DP; Mining the Web for CQ (Competency Questions); Extracting relevant/latent topics; Discovering hidden and/or nonobvious topics & relationships; Filling a DP based on the user's preferences/interests.
  • Custom-built DP: Fluid DP Synthesis: tapping into the user's selected categories and topics, a DP can be synthesized. A DP is fluid and will change over time based on the user's preferences/interests. A DP can be synthesized and tailored based on existing public information repositories (Freebase, dbpedia, Quora) and private knowledge (if accessible).
  • Library of DP: Build a pre-defined set of topic-relevant DPs; synthesize a library of DPs based on selected categories and relevant topics. The library will store all existing and new DPs for future processing, which is useful if on-the-fly synthesis causes performance problems.
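A library of Discovery Patterns might be organized along the lines of the sketch below; the DiscoveryPattern fields (slot_names, competency_queries) and the keyword-based lookup are assumptions for illustration, not a prescribed format.

```python
# Illustrative sketch: a small library of Discovery Patterns (DPs), each a
# topic-relevant template with slots to fill from extracted entities and
# competency queries. Looking up a stored DP avoids on-the-fly synthesis.

from dataclasses import dataclass, field

@dataclass
class DiscoveryPattern:
    name: str
    slot_names: list                      # e.g. ["medication", "patient case", "prognosis"]
    competency_queries: list = field(default_factory=list)

class DPLibrary:
    def __init__(self):
        self._patterns = {}

    def add(self, pattern):
        self._patterns[pattern.name] = pattern

    def lookup(self, topic_keywords):
        """Return stored DPs whose name matches any topic keyword."""
        return [p for name, p in self._patterns.items()
                if any(k.lower() in name.lower() for k in topic_keywords)]

library = DPLibrary()
library.add(DiscoveryPattern("eHealth: HIV/AIDS",
                             slot_names=["medication", "patient case", "prognosis"],
                             competency_queries=["What are current HIV treatments?"]))
print(library.lookup(["AIDS"]))
```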
  • Causality Graph for KD: Causality graph (CG) is vital for discovering hidden topics that are important to connecting known topics. Hidden topics discovered by CG are vital to discovering other important relevant topics. In particular, when a feature set of the reference topics (topics, categories, user feedback) and its relevant topics cannot discover important topics for further knowledge discovery, mined hidden topics can be the answer. Hidden topics go beyond topics defined by the words or phrases or known relationships of reference topics.
  • Competency Queries: Competency questions/queries (CQ) aid in seeding a set of interesting questions to answer about the reference topic or the relevant topics that can be extracted. CQ are also important to seed a discovery template to augment the topic at hand.
  • Competency Queries Extraction/Mining: CQ can be manually crafted by the user/system, or CQ can be automatically extracted or synthesized. For example, the knowledge discovery and augmentation system can query databases for questions relevant to a topic and select the highest ranked questions that history shows people care about. Quora is an example of such a database that can be mined to extract a set of CQ for a topic. In an enterprise setting, to mine a set of CQ about a product, customer feedback, queries, and marketing data can be mined to synthesize a set of CQ relevant for the product. These CQ will serve as a seed to craft a Discovery Pattern that will serve in augmentation of the relevant topic.
  • Causality Relationships: Characterizing relationships; causing and leading relationships can be constructed based on mining a causality graph that is constructed beforehand. Involved relationship: the same actors involved in different topics in the same timeline.
  • Topic Relationships: Topic Relationship Model is a mapping R such that T1 R T2: Discovered topics and docs that connect T1 and T2; R completes the knowledge graph that is relevant to the reference content. Example: Topics, Entities, Categories.
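The mapping R between topics T1 and T2 via intermediate (hidden) topics can be sketched with a small causality/topic graph; the graph edges below are invented for the example, and networkx is used merely as a convenient graph library rather than the system's required representation.

```python
# Illustrative sketch: discover the intermediate (hidden) topics that connect
# two known topics T1 and T2 in a causality/topic graph.

import networkx as nx

cg = nx.DiGraph()
cg.add_edge("HIV infection", "immune suppression")
cg.add_edge("immune suppression", "opportunistic infections")
cg.add_edge("opportunistic infections", "AIDS mortality")
cg.add_edge("HIV infection", "chronic inflammation")
cg.add_edge("chronic inflammation", "AIDS mortality")

def connecting_topics(graph, t1, t2, max_hops=4):
    """Collect the intermediate topics on any path from t1 to t2."""
    hidden = set()
    for path in nx.all_simple_paths(graph, t1, t2, cutoff=max_hops):
        hidden.update(path[1:-1])   # drop the endpoints, keep the connectors
    return hidden

print(connecting_topics(cg, "HIV infection", "AIDS mortality"))
```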
  • In accordance with this disclosure, the components, process steps, and/or data structures described herein may be implemented using various types of operating systems, software development platforms, computing platforms, computer programs, and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature or having limited resources may require modification of an implementation of an illustrated embodiment which may be done without departing from the scope and spirit of the inventive concepts disclosed herein. Where a method comprising a series of process steps is implemented by a computer or a machine and those process steps can be stored as a series of instructions readable by the machine, they may be stored on a tangible medium such as a computer memory device (e.g., ROM (Read Only Memory), PROM (Programmable Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory), FLASH Memory, Jump Drive, and the like), magnetic storage medium (e.g., tape, magnetic disk drive, and the like), optical storage medium (e.g., CD-ROM, DVD-ROM, paper card, paper tape and the like) and other types of memory.
  • The term “exemplary” is used exclusively herein to mean “serving as an example, instance or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
  • In general, various information processing techniques and algorithms can be used to provide the augmented data system with global and local context sensitive augmented content. In the following paragraphs certain definitions and representations of data flow models are presented and discussed without limitations on how each model may be implemented whether by hardware, software, firmware, or any combination thereof.
  • Multilevel, e.g. global and local, context sensitive augmented content would increase productivity and enhance a user's experience while viewing or interacting with data for the purpose of learning, reading, writing, drawing, browsing, searching, discovering, viewing images or any type of user interaction with digital data information whether structured or unstructured (e.g. financial, health, manufacturing, and corporate data). The digital data information may be stored locally or remotely via a corporate server or in the cloud. Additionally, private as well as public sources of data may be used or selected by the user for the ultimate personalized range of choices that may be used to further narrow down or expand the augmented content being presented.
  • Additionally, multilevel context sensitive augmented content would increase productivity and enhance business intelligence for the enterprise by providing context sensitive augmented content that is generated by dynamically mining and analyzing structured and unstructured enterprise data and/or possibly leveraging structured and unstructured publicly available data for further improving user experience. In addition, multilevel context sensitive content augmentation filters provide the ability to dynamically mine data on the fly based on modification of a new input from a user. For example, a new input from a user can be the selection of a new text or a portion of the reference content, or it can be a feedback provided such as elevating the priority or weight (e.g. like) or decreasing the priority or weight (e.g. dislike, delete, dismiss) of a single augmented content, a category of augmented content, or a theme of augmented content. Furthermore, leveraging the history and/or user personal preferences, the multilevel context sensitive augmented content can be further in tune with what the user would like to see or expects to see in the augmented content being generated and presented.
  • In accordance with one embodiment, a feature of the multilevel context sensitive augmented content is that the augmented content is generated either in the cloud or locally using sophisticated information retrieval algorithms or using a set of heuristics so as to enable large-scale data processing, information retrieval, and web mining. Extracting a feature set from a web page is a well-known problem for which various algorithms, methods, and solutions have been developed, and this system can use existing research or methodologies to extract a feature set. Furthermore, this system employs a set of heuristics and metrics that efficiently extract a set of features that characterize the reference content at hand. These heuristics rely on embedded hints, metrics, metadata, or other embedded knowledge and information that can be extracted from the structure, URL link, embedded links, title of the document, or other types of data that may be directly or indirectly related to the reference content, along with feedback provided by the user.
  • In accordance with one embodiment, a feature of the multilevel context sensitive augmented content is that the information retrieved, and knowledge constructed can be saved and called upon in future augmentation tasks and sessions.
  • In accordance with one embodiment, a feature of the multilevel context sensitive augmented content is that the augmented content is presented through a translucent layer on top of the original content being viewed by the user. This provides a non-obtrusive content augmentation that is hidden or made available whenever a user disables or enables the global and local context sensitive augmented content application. Relevant augmented content is displayed on a translucent layer on top of the original content being viewed by the user. Hence, the augmentation system provides a less obtrusive and more efficient interaction, browsing, and exploration experience.
  • In accordance with one embodiment, a multilevel corresponds to at least two levels, a global level and a local level. Global and local relevant features of reference content may be defined as follows: a global relevant feature corresponds to a feature or a theme common throughout the reference content, and a local relevant feature corresponds to a feature strongly related to a locality within the reference content. One method of dynamically updating augmented content can be achieved by leveraging real-time user feedback, such as elevating the priority of or dismissing augmented content as it is being presented to the user. If an augmented content's priority is elevated, its weight increases and the metadata that describes this augmented content gets promoted, which in turn updates existing augmentation filters as well as generating and presenting new augmentation content based on the new metrics. For example, if an augmented content describing certain public policy information is promoted, then that augmented content's priority is increased, and the priorities of all augmented content that reference some public policy or government policy get increased. In addition, the augmented content can be dynamically updated based on user interaction, e.g. selection and/or clicking, within the reference or augmented content in real time. There are various means to implement the augmented content presentation layers, such as dials for global and local augmented content, or a scroll-area of small windows for various augmented content. Describing all these various means to implement the augmented content presentation layer is not necessary to understand this disclosure. Furthermore, a person skilled in the art would understand and would be able to employ many different means to implement augmented content presentation layers without departing from the spirit of this disclosure.
  • In accordance with one embodiment, while generating augmented content may result in a lot of data that cannot be shown on the display, this data can be stored in a deep queue. A deep queue means that there is more augmented content (data) in the queue than what is displayed on the screen. For example, not all mined augmented content can be displayed simultaneously due to physical screen size limitations or the display layer size. A user can hover over the queue or press an arrow to scroll through the augmented content in the queue. In addition, it is important to note that the augmented content being presented to the user may comprise actual data, a snapshot of the actual data, a processed portion of the actual data, or a link to the location where the actual data can be retrieved.
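A deep queue with a scrollable display window might be implemented roughly as follows; the window size and item representation are illustrative assumptions.

```python
# Illustrative sketch: a "deep queue" holds more augmented content than the
# screen can show; scrolling moves the visible window over the full queue.

class DeepQueue:
    def __init__(self, items, window_size=3):
        self.items = list(items)        # full set of mined augmented content
        self.window_size = window_size  # how many entries fit on the display layer
        self.offset = 0

    def visible(self):
        return self.items[self.offset:self.offset + self.window_size]

    def scroll(self, steps=1):
        # Clamp the offset so the window never runs past either end of the queue.
        max_offset = max(0, len(self.items) - self.window_size)
        self.offset = min(max(0, self.offset + steps), max_offset)
        return self.visible()

queue = DeepQueue([f"RAC {i}" for i in range(1, 11)])
print(queue.visible())     # ['RAC 1', 'RAC 2', 'RAC 3']
print(queue.scroll(3))     # ['RAC 4', 'RAC 5', 'RAC 6']
```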
  • Theme-based augmented content can further enhance a user's experience by presenting a set of themes. In accordance with one embodiment, when the user selects or deselects a theme, a new or updated augmented content is presented to the user. An option to expedite augmentation and improve its quality is to rely on the user's preferences and feedback. When the application is invoked, a set of categories/themes can be presented to the user. These constitute metadata. By relying on the user's choices of themes, augmentation can be enhanced and filtered. For example, a research paper that deals with the AIDS virus would trigger a set of themes such as Pharmaceuticals, Discrimination, etc. The user who is interested in science and pharmacology but not in the social aspects related to AIDS would deselect ‘Discrimination’. Thus, all augmented content presented will be tailored to refer to categories that are related to science and other related aspects of the research. The theme can further be defined by a category or a set of related categories. This will serve to prune the augmented data and only present the relevant data that is of interest to the user and the task he is carrying out at that moment.
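Theme-based pruning of augmented content could be as simple as the sketch below, where each item carries theme tags (an assumed metadata field) and only items matching the user's currently selected themes are kept.

```python
# Illustrative sketch: prune augmented content by the themes the user keeps
# selected, as in the AIDS example above where 'Discrimination' is deselected.

def filter_by_themes(augmented_items, selected_themes):
    """Keep only items tagged with at least one selected theme."""
    return [item for item in augmented_items
            if set(item["themes"]) & set(selected_themes)]

items = [
    {"title": "New antiretroviral compounds", "themes": ["Pharmaceuticals", "Science"]},
    {"title": "Workplace discrimination and HIV status", "themes": ["Discrimination"]},
]
print(filter_by_themes(items, selected_themes={"Pharmaceuticals", "Science"}))
```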
  • Multilevel context sensitive augmented content application can be implemented as a stand-alone application, on top of another application, or as an extension for applications, e.g. a browser extension. In accordance with one embodiment, further refinement or fine tuning of various options for customization of the augmentation system, such as aggregating, mining, filtering, and presenting various aspects of data or metadata, can be performed dynamically in real-time. In addition, the customization of the augmentation system may be performed based on at least one or more of a user's feedback, attributes, characteristics, theme, topics, and interests. Also, when the augmentation system presents a list of tags/categories, the user can provide feedback in the form of liking/disliking the tag. This is similar to promoting or dismissing an augmented content. Therefore, in accordance with one embodiment, the augmented content can be updated live. Furthermore, this user's feedback would also result in updating various subsystems such as the underlying data-mining, statistical computing algorithms, machine-learning algorithms, or other information retrieval algorithms or heuristics. These updated subsystems are used to generate or create new signatures, metrics, or features which are based on the user's feedback, e.g. liked/disliked tags, where the new signatures are used to generate new augmented content or update the currently presented augmented content.
  • In accordance with one embodiment, a feature of a system for generating and presenting multilevel context sensitive augmented content is the ability to utilize online and offline mining and analytics for augmentation. For example, mining and processing in real-time or in batch mode and store data in a data store (local or remote) or presenting real-time augmented content to the user. The stored data can be used for future augmentation. Metadata and other relevant data elements can also be annotated in real-time to capture user's preferences and experiences. In addition, metadata and other relevant data elements can be stored in a central repository to be leveraged for future augmentation of same or similar content. A brief description of metadata is that it is data that describes other data. For example: ‘public health’ is a category that encompasses diseases. This higher level category ‘public health’ is a metadata for diseases.
  • In accordance with one embodiment, a multilevel context sensitive augmented content system uses at least two levels, a global level and a local level. The following explains the difference between global and local augmented content. Global augmented content refers to augmented data that pertain to the overall document that the user is currently browsing, exploring, or interacting with. A local augmented content can refer to augmented content based on a particular piece, paragraph, sentence, word, image, icon, symbol, etc. . . . of that document that the user is currently browsing, exploring, or interacting with. Global and local augmented content are presented using a dynamic deep queue, and the user can control the displaying of at least a portion of the augmented content. Content sources for augmentation can be provided from many sources. An example of such content sources includes but is not limited to a user's own documents and data on desktop, web-content, social media sites, enterprise data-marts, and local and remote data stores, ontologies, other categorization, and/or semantic or relationship graphs.
  • The multilevel context sensitive augmented content can be successfully implemented to augment a user's browsing experience as discussed above. In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content can be successfully implemented as an application for an augmented user experience (UX). The system can increase productivity; provide an augmented data-mining and data-exploration platform; provide augmented e-learning and e-research systems; provide augmented desktop-based and mobile-based browsing, exploration, research, discovery, and learning platforms; provide data augmentation for better healthcare products and services; provide data augmentation for better educational products and services; serve as an augmentation system for better content management and relationship platforms for both enterprise and consumer applications; and provide enhanced online-shopping research and UX, enhanced marketing campaigns, and an enhanced news access UX, to name but a few of the applications benefiting from a system for generating and presenting multilevel context sensitive augmented content.
  • Semantic processing is the process of reasoning about the underlying concepts and expressing their relationships. In addition to various augmentation methods as described above, the following semantic based techniques can also be used in a system for generating and presenting augmented content. In accordance with one embodiment, utilizing existing tags in public sources, utilizing batch-processed tags as a cloud application, semantic processing of selected content to generate a match to an existing tag, semantic processing to generate augmented content on the fly and utilizing user's feedback for promoting and dismissing augmented content are but examples for methods to provide a better user-relevant augmented content. Generating augmented content on the fly can also be accomplished by using a feedback mechanism provided by the user to enable mining and generating of new augmented data to be presented to the user.
  • In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content is used to improve the analytics of large data sets by leveraging pre-processed data and already generated relationships. Given content that is the result of statistical data mining and exploration functions on small or large amounts of data, be it remote or local, the system can extract the correlation metrics and other signatures that demonstrate a meta-relationship and leverage them in other data-mining and analytics tasks and to generate augmented content. For example, when a user presents some key words to a search engine, the user gets a set of links that are related, in addition to some ads that could very well be related to the key words the user has entered or to some personal data known or extracted about the user. These presented links and ads have gone through a huge amount of processing and computation in the cloud. By knowing that a relationship or a meta-relationship exists between the keywords, links, and possibly other content pushed to the user such as ads, the analytics operation can extract, store, and leverage these signatures for future browsing or for presenting context sensitive augmented content.
  • In accordance with one embodiment, the content presented to the search engine can be either parsed from the html or other format or interface produced by a data provider, or it can be scanned through OCR if the data format is encrypted. This ability to take a snapshot of a screen and analyze and leverage its data and relationships empowers and simplifies the augmentation and analytics processes and improves the throughput, since the signatures/correlation metrics extracted are a result of processing a significantly smaller set of data. Therefore, the performance gain of a system for generating and presenting multilevel context sensitive augmented content is orders of magnitude greater than that of mining massive data sets in the cloud.
  • In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content presents the augmented content along with the reference content using two or more different presentation layers displayed using the same display screen. In addition, the system provides the ability to customize the generation of augmented data in situ (in place) while working on original or reference content, where the augmented data can be displayed on see thru presentation layers so as not to obscure the original or reference content and to maximize use of the display screen, and/or the displaying area.
  • In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content utilizes dynamic updates of displayed augmented content using presentation layers while a user views and manipulates reference content displayed using another presentation layer. It is preferable to use a translucent presentation layer for the augmented content presentation layer that is located on top of the displayed reference content so that the user can easily manipulate or interact with the reference content while simultaneously viewing the dynamically updated augmented content. As can be easily appreciated by a person skilled in the art, displaying relevant augmented data in a separate tab or page would result in a loss of context relationship and provide a less efficient and less friendly user experience. Similarly, displaying the augmented content on the sidebars is possible as well; however, it consumes screen space and clutters the displaying of the reference content. Therefore, the ability to keep the reference content accessible to the user while displaying the augmented data on top of the original content provides a much smoother and more efficient user experience. Furthermore, the user can easily hide, size, move, or display the augmented content without affecting the reference content.
  • In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content enables a user the ability to associate any of the augmented content with the reference content or an attribute of the reference content source using one or more types of metadata. The system enables the user to save the associated metadata for future use or sessions. For example, the association of metadata can be accomplished by embedding a link in the text, by associating a link with a text, or by associating any data or metadata with the reference content or any part of the reference content. Moreover, the user has the ability to specify a category or more as a source or criterion of augmentation. The user can also define association rules that join a group of attributes, categories, and other metrics together to provide a richer input to aid the augmentation system to generate more relevant augmentation content. For example, an enterprise sales projection document can always be augmented with any data source or data documents that generated the projection. The criterion is a category that says source sales data and not necessarily the exact data documents. The sales data can be extracted automatically by the augmentation system. Utilizing selected or provided categories of interest, the augmentation system can carry out an updating procedure for any associated data or metadata for any other reference content. Furthermore, the augmented content is displayed using translucent layers so that the user always sees and has access to the original or reference content. The user is able to access, browse, move, select, hide, tap, scroll, or interact with the reference or augmented content while the system dynamically generates and displays an updated augmented content using the augmentation presentation layer. It is noted that the user interaction with the reference or augmented content can result in having a new reference content that the user wishes to interact with, hence, a new augmented content is generated and displayed. The system keeps track of and saves certain information regarding this nested augmentation level. The system provides the user the ability to switch back and forth between various nested augmentation levels as well as saving or sharing the augmentation filters or settings used for a particular session.
  • In accordance with one embodiment, further enhancement of the user experience is achieved by enabling the user to change the skin (or look of a user interface UI) of the augmentation system. For example, the same components of a UI (buttons, options, data) can be displayed on the screen in a variety of ways. Usually, a library of templates and color options can be provided to allow the user to customize the display of the augmented content presented by the application. In addition, the global augmented content and local augmented content can be displayed using one or more different regions of the screen or displaying the global and local links to the augmented content in two concentric circles around the reference content. The enhancement of the user experience is achieved by enabling the user to choose the most efficient way to utilize and display the augmented content.
  • In accordance with one embodiment, user selectable skins can also be used to cover or hide pushed content that may exist or embedded in the reference content being viewed. User selectable areas of a skin can be used to enable the display of user selected content such as images or augmented content or pushed content such as advertisement. For example, an ad for tickets to a local concert when the user is browsing a specific artist, or an ad for a book that relates to a global or local augmented content of the user reference or currently viewed augmented content, or any other monetization mechanism based on the augmentation process. The enhancement of the user experience includes a nested multilevel context sensitive augmented content where the augmented content presented to the user can be further enhanced as a function of the various nested levels. The augmented content is presented while keeping track of the current content being viewed in relationship to the original content that the user started with and all levels in between. This provides a hierarchical augmentation system that enables the user to access and build nested levels of augmentation.
  • In accordance with one embodiment, the user interface, or UI, for a system for generating and presenting multilevel context sensitive augmented content can be launched or started automatically and can reside in the background and stay hidden from view until the user invokes a predefined programming function to enable the UI functionality. For example, a single tap, hot-key, function-key, a gesture, or multiple or combined actions performed on content would cause the transparent augmentation layer to be shown with the augmented content and in accordance with user preferences, such as tags, skins, themes, etc. Selecting content presents or updates the augmented content already presented. Visiting an augmentation link results in completely or partially (split screen) covering the reference content or original layer comprising the original content. The UI provides the user the ability to navigate nested augmented content or jump back to the reference or original content.
  • In accordance with one embodiment, additional system and UI features can further be used to increase the overall efficiency and provide a better user experience. For example, saving the augmented content metrics in user history, and using history to enhance and/or tailor analytics and augmentation as would be more relevant to each individual user or group of users such as in corporate environment. Metrics here refer to the generated signatures as mentioned above. Also, it refers to any annotations that are provided by the user such as priority, liking/promoting an augmented content or dismissing it. This can be stored for future sessions as well as using the augmented content promotion and dismissal to enhance augmentation in real time. Using skins that cover an undesirable part of the screen, e.g. side columns where ads are pushed. The skin may be used for further customization of the viewed screen and potentially could be monetized and leveraged to present relevant augmented content that is paid for by the user, such as ads for objects, e.g. books, related to the content of a reference article.
  • In accordance with one embodiment, a system for generating and presenting multilevel context sensitive augmented content provides dynamic user-guided and customized context-sensitive data augmentation to facilitate learning, exploration and knowledge discovery. The system provides simultaneous interaction with the augmentation layer and the content layer. The system generates augmentation data based on user-defined metrics and filters such as themes, categories of interest, document content and/or part of it. The generated data is not a rigid augmented content. The generated augmented content is any data, concept, and relationships that are presented as a result of the data mining and processing of the original content and the user-defined metrics and filters.
  • In accordance with one embodiment, the system utilizes dynamic and interactive methods to successively refine and tailor the augmented content based on a user's guidelines, filters, and metrics. The system relies on a variety of sources for content augmentation by accessing any online or offline databases, crowd-sourced databases, or open databases. Furthermore, over time, a custom built graph of concepts and relationships can be built between different pieces of data as they are processed and augmented based on the user's filters and metrics to improve the performance of the system and the User Experience. The system provides a context-sensitive hierarchical augmentation framework for deeper and expansive exploration and knowledge discovery. The system enables construction of a customized graph of data, concepts, and relationships based on the filters and metrics provided even in the absence of content. Content can be generated on the fly for further exploration.
  • In accordance with one embodiment, the system enables sharing of augmented data and the associated metrics that generated them. This enables richer knowledge discovery by further refining a user's augmented data based on other users' augmented content. This is useful for collaborative research and knowledge discovery. The system can be launched from offline and online documents or reference content to generate the augmentation content, data and graph of relationships amongst the concepts represented by the augmented content.
  • In accordance with one embodiment, the system provides a UI to display and manipulate reference content and augmented content concurrently, dynamically, and interactively. The system provides one or more translucent layers on top of the reference content to show the augmented content. Translucent layers facilitate displaying the reference content as well as the augmented content. Translucent layers can fully or partially cover the original content. Augmentation layers can be hidden, minimized (shown as an icon), or moved around on the display screen to facilitate easier display of and interaction with the reference content. The system enables the user to manipulate and control a set of display layers (reference content layer and/or augmentation display layers) in a very flexible fashion such that the user can size up or down, move, show, or hide any of those display layers. The system provides an intuitive, rich, and friendly UX for data exploration and knowledge discovery on small and large display screens. In particular, displaying the augmented content concurrently and interactively on the original content empowers the user to use this system on smart phones, tablets, and any other display. Furthermore, the system provides means to insert additional content on the augmentation layers based on analytics on the augmented content and the original content.
  • An example knowledge discovery system 13100, as shown in FIG. 15, includes a KDP system 13195 that is coupled to multiple users, e.g. User A 13101, User B 13102, and/or User C 13103. The KDP system 13195 communicates via Access Channel 13190 with each user by providing discovered knowledge or augmented data 13196 and/or data associated with preferences or knowledge discovery request or data augmentation request. In addition, each user may also have a direct connection via communication network 13175 or other communication means which may include wired or wireless communication devices. The KDP system 13195 also includes Memory 13185 to store data information and code associated with a KDP user, Share A-B 13131, Share C 13132 and Discovery & Tailor Knowledge 13121. User A 13101 can access, use, manipulate, and display online content 13106 or private content and augmented content 13196 received from the KDP system 13195. User A 13101 can include any type of computer system or mobile device that allows its operator to interact, transmit and receive data information.
  • An example of sharing a link or content between User A 13101 and the KDP system 13195 would occur via Access Channel 13190. For example, using a social network, sharing would simply be copying the content and sharing it, or providing a link to where the content or the source of the content can be accessed. Simply put, when a link, e.g. to a/v media or an article, is shared, the end user cannot change the content.
  • The example Knowledge Discovery system shown in FIG. 15 can be used by one or more users to explore and share augmented content. Each user receives shared augmented content that is locally enriched and augmented by the KDP system 13195 according to the receiving user's preferences, specific parameters, and/or geographical location. In addition, the receiving user is able to interact with the shared augmented content using the KDP system 13195, adding further relevant content that the system can discover based on the receiving user's input or preferences. For example, an original author may dismiss certain aspects or topics from an augmented content while another user considers those dismissed aspects or topics important. Thus, the KDP system 13195, having access to the other user's preferences, can regenerate the shared augmented content using that user's preferences or profile. Moreover, the KDP system 13195 enables each user to enrich and augment any shared content automatically and/or based on certain programmable parameters.
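One possible way to regenerate shared augmented content per recipient, sketched here under the assumption of simple per-user interest lists rather than the actual augmentation method, is:

# Hypothetical regeneration of a shared augmentation per recipient: topics the
# author dismissed are restored when the recipient's profile marks them relevant.
def regenerate_for_recipient(discovered_topics, author_dismissed, recipient_profile):
    result = []
    for topic in discovered_topics:
        dismissed = topic in author_dismissed
        interesting = topic in recipient_profile.get("interests", [])
        if not dismissed or interesting:
            result.append(topic)
    return result

topics = ["tariffs", "supply chain", "labor costs"]
shared_view = regenerate_for_recipient(
    topics,
    author_dismissed={"labor costs"},
    recipient_profile={"interests": ["labor costs"]},
)
print(shared_view)  # ['tariffs', 'supply chain', 'labor costs']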
  • Another example of the KDP system 13195 is recursive augmentation, in which an original author tackles one side of a topic and shares it with another user. The receiving user receives not only what the original author shared but also a customized augmentation based on the receiving user's preferences: the KDP system 13195 can automatically process the original author's shared content and generate augmented content based on the receiving user's interests. This allows the receiving user to dig deeper or to expand the knowledge discovery. Furthermore, the receiving user can highlight, augment, or add comments to the augmented content, e.g. annotating it or adding an external link to content such as a video, audio clip, or article not discovered by the system. Ultimately, every user can become an investigative reporter, and each successive user who receives shared augmented content can build on what he/she received from the perspective of his/her own interests. The KDP system 13195 is a smart knowledge discovery system that enables the democratization of knowledge.
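Recursive augmentation could be pictured, for illustration only, as a chain in which each recipient layers annotations and newly added links on top of what was received; the data layout below is an editorial assumption.

# Illustrative chain of recursive augmentation: each recipient layers personal
# annotations and newly added links on top of what was shared with them.
def augment_for_user(received, user_id, new_links=None, comments=None):
    return {
        "base": received,                # everything inherited so far
        "author": user_id,
        "links": list(new_links or []),  # e.g. an external article or video
        "comments": list(comments or []),
    }

original = augment_for_user(None, "UserA",
                            new_links=["https://example.org/report"],
                            comments=["seed story"])
second = augment_for_user(original, "UserB",
                          comments=["counterpoint on the same data"])
third = augment_for_user(second, "UserC",
                         new_links=["https://example.org/interview"])
# Walking each "base" reference recovers every contribution in order.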
  • In another example of sharing augmented content using the KDP system 13195, the shared augmented content or knowledge can embody all or part of the augmentation parameters used by the original author, e.g. User A 13101. A receiving user, e.g. User B 13102, can then treat this shared knowledge as a seed and use the KDP system 13195 to build on it with his or her own preferences, parameters, and/or specific interests, as well as any other seed information for additional augmentation as described above.
  • FIG. 16 presents an example data flow for User A 14101, e.g. when User A is reading an article or other content. User A 14101 invokes the knowledge discovery platform 14110 on the desired content to further explore and discover relevant knowledge. User A 14101 can interact with the automatically discovered knowledge, tailoring it by promoting certain content or dimensions of knowledge and demoting others, based on the task being accomplished or the interests of User A 14101. User A 14101 sets a policy in Knowledge Sharing and Broadcasting 14130 for sharing the content with one or more end users 14140 or the public at large. User A 14101 can share or broadcast the discovered knowledge, represented by the knowledge graph and other related augmented data, which is automatically updated based on the policy or interests known to the KDP system 13195 for each of the end users 14140. Moreover, each of the end users 14140 can further augment or interact with the shared content using the KDP system 13195, which can provide selective feedback or additional content to the original User A 14101 and to each of the end users 14140 based on the collective content augmentation of all users or of a selective group of users, as determined by User A 14101's sharing policy in Knowledge Sharing and Broadcasting 14130 or by each end user's own sharing policy 14230, as shown in FIG. 17 and explained further below.
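The promote/demote tailoring step in FIG. 16 might look, in a simplified and hypothetical form, like the following; the weighting scheme is an assumption made only to illustrate the idea.

# Simplified, hypothetical promote/demote step: promoted dimensions of the
# discovered knowledge are boosted and demoted ones dropped before sharing.
def tailor(discovered, promotions, demotions):
    tailored = {}
    for dimension, score in discovered.items():
        if dimension in demotions:
            continue  # the user dismissed this dimension of knowledge
        boost = 2.0 if dimension in promotions else 1.0
        tailored[dimension] = score * boost
    # sort so the shared view leads with what the author promoted
    return dict(sorted(tailored.items(), key=lambda kv: kv[1], reverse=True))

view = tailor({"pricing": 0.4, "regulation": 0.7, "gossip": 0.9},
              promotions={"pricing"}, demotions={"gossip"})
print(view)  # {'pricing': 0.8, 'regulation': 0.7}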
  • The KDP system 13195 comprises Knowledge Discovery Tailoring and Annotation 14120, which receives the augmented data 14200 and knowledge discovered by the Knowledge Discovery System 13100. FIG. 17 is a data flow diagram for Knowledge Discovery Tailoring and Annotation 14120. In the process of knowledge discovery, the KDP system 13195 may discover much content relevant to the content at hand. User A 13101 may choose to emphasize certain aspects of the knowledge and/or dismiss others in Augment Topics 14210. User A 13101 interacts with the KDP system 13195 to tailor the resulting knowledge, and can further manually annotate the shared content in Annotate Data/Knowledge 14220, e.g. to add his/her remarks. User A 13101 can set the sharing policy in Sharing and Modification Policies 14230, e.g. who can view it, who can edit it, and who can share it. User A 13101 then shares his/her discovered view 14240 with his/her circles of connections or broadcasts it to the public.
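A minimal sketch of a sharing and modification policy of the kind set in 14230 is shown below; the action names (view, edit, share) mirror the examples in the text, while the data layout is an assumption.

# Minimal sharing-and-modification policy check; the action names follow the
# examples in the text, while the data layout is an assumption.
POLICY = {
    "view":  {"UserB", "UserC", "public"},
    "edit":  {"UserB"},
    "share": {"UserB", "UserC"},
}

def allowed(policy, user_id, action):
    members = policy.get(action, set())
    return "public" in members or user_id in members

assert allowed(POLICY, "UserC", "view")       # "public" makes viewing open
assert not allowed(POLICY, "UserC", "edit")   # only UserB may edit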
  • FIG. 18 shows an example Knowledge Discovery sharing policy, Knowledge Sharing and Broadcasting 14130. The augmented data 14300 corresponds to the knowledge discovered or shared by user A 13101 of the knowledge discovery system 13100. A user receives and views the shared link 14310 of discovered knowledge. For example, the shared link can be transmitted by any means of communication, e.g. via email, text, or social network, or via a notification in the knowledge discovery platform 13110. The user shares the knowledge based on his/her sharing policy 14320. The knowledge discovery system 13100 checks the membership of the user viewing the shared link 14310. If the user wishes to interact with the shared content, e.g. a story, the user is given the option to become a member of the knowledge discovery system 13100 and create his/her own profile or preferences, and thus be able to further refine or generate customized augmented content. The user can edit, tailor, or annotate the shared knowledge 14350, and then shares the tailored knowledge 14360 in a similar fashion as described above. This process repeats and can go viral in the sharing and tailoring of knowledge. Moreover, the KDP system 13195 can provide selective feedback or additional content to the original user A 13101 and/or to any user or group of the end users 14140 based on the collective content augmentation of the end users 14140 or of a selective group of users, as determined by user A 13101's sharing policy 14130 or by each end user's own sharing policy 14320. The KDP system 13195 can exponentially grow the discovered knowledge as more users interact with the original shared content.
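The receive, membership check, tailor, and reshare loop of FIG. 18 could be sketched, under simplified assumptions about membership and profiles, as follows; none of these names come from the specification.

# Hypothetical receive/check/tailor/reshare loop: a recipient opens a shared
# link, is checked for membership, optionally joins, then tailors and reshares.
members = {"UserA": {"topics": ["energy"]}}

def open_shared_link(user_id, shared_knowledge):
    if user_id not in members:
        # a real system would prompt the viewer to sign up; here a blank
        # profile stands in for the new membership
        members[user_id] = {"topics": []}
    profile = members[user_id]
    return {"knowledge": shared_knowledge, "tailored_to": profile}

def reshare(tailored_view, recipients):
    # each recipient repeats the same open/tailor/reshare cycle, which is how
    # the discovered knowledge can grow with every hop
    return [(r, open_shared_link(r, tailored_view["knowledge"])) for r in recipients]

view = open_shared_link("UserB", "augmented story")
hops = reshare(view, ["UserC", "UserD"])
print([user for user, _ in hops])  # ['UserC', 'UserD']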
  • The KDP system 13195 can benefit not only the original user A 13101, through selective feedback on the augmentation parameters or on uses of his/her originally shared content, but also the community of end users 14140, through the richness of the augmented content, where again each user, via his/her own sharing policy, may receive customized feedback from the KDP system 13195 regarding his/her own tailored and shared content as well as the original content shared by user A 13101.
  • While embodiments, implementations, and applications have been shown and described, it would be apparent to those skilled in the art having the benefit of this disclosure that many more modifications than mentioned above are possible without departing from the inventive concepts disclosed herein.

Claims (20)

I claim:
1. A machine learning and inference system for use within an information processing system, the machine learning and inference system comprising:
a computing platform; and
an application specific augmentation system operable (i) to access content information from a public data source and a private data source, the public data source including at least one of structured and unstructured publicly available content, the private data source including at least one of structured and unstructured private content, and (ii) to provide the content information to the computing platform, the computing platform is operable to infer a set of patterns and a set of relationships between patterns of the set of patterns based at least in part on the content information, wherein the application specific augmentation system is further operable to discover knowledge by reasoning about the content information using the set of patterns and the set of relationships.
2. The machine learning and inference system of claim 1, wherein the application specific augmentation system is further operable to compile one or more sets of relevant augmented content based on the content information, the set of patterns, and the set of relationships.
3. The machine learning and inference system of claim 2, wherein the application specific augmentation system is further operable to automatically refine at least one set of the one or more sets of relevant augmented content based on one or more of the content information, structured publicly available content, unstructured publicly available content, unstructured private content, structured private content, and the knowledge discovered by the application specific augmentation system reasoning about the content information.
4. The machine learning and inference system of claim 2, wherein the application specific augmentation system further comprises:
a relevant augmented content management subsystem operable to manage and to provide the one or more sets of relevant augmented content via a queue.
5. The machine learning and inference system of claim 4, wherein, in response to at least one of a real-time input, a feedback from a user, an automatically generated feedback, an interaction with at least a portion of the set of relevant augmented content, an interaction with the content information, a real-time feedback from a user, and an input from a user, the relevant augmented content management subsystem is further operable to provide a request for updating the one or more sets of relevant augmented content to the application specific augmentation system.
6. The machine learning and inference system of claim 5, wherein, in response to the request for updating the one or more sets of relevant augmented content, the computing platform is operable (i) to infer an updated set of patterns by (a) inferring additional patterns to be added to the set of patterns, (b) cleaning up patterns from the set of patterns, or (c) refining one or more patterns in the set of patterns, and (ii) to infer an updated set of relationships between patterns of the updated set of patterns by (a) inferring new relationships to be added to the set of relationships, (b) cleaning up relationships from the set of relationships, or (c) refining one or more relationships in the set of relationships, and wherein the application specific augmentation system is operable (i) to update the one or more sets of relevant augmented content based at least in part on the updated set of patterns and the updated set of relationships, or (ii) to generate a new set of relevant augmented content based at least in part on the updated set of patterns and the updated set of relationships.
7. The machine learning and inference system of claim 2, wherein the application specific augmentation system is further operable to dynamically update the one or more sets of relevant augmented content in response to at least one of a real-time input, a feedback from a user, an automatically generated feedback, an interaction with at least a portion of the set of relevant augmented content, an interaction with the content information, a real-time feedback from a user, an input from a user, a real-time feedback mechanism, a real-time interaction with the content information, and a real-time interaction with the one or more sets of relevant augmented content.
8. The machine learning and inference system of claim 1, wherein the application specific augmentation system is further operable to build a knowledge graph based at least in part on the knowledge discovered, the set of patterns and the set of relationships.
9. The machine learning and inference system of claim 8, wherein a user is able to interact with the content information or the one or more sets of relevant augmented content by traversing the knowledge graph.
10. The machine learning and inference system of claim 1, wherein the application specific augmentation system is further operable to perform a nested or hierarchical knowledge discovery, and wherein, during a first invocation of the application specific augmentation system, the application specific augmentation system is operable to compile a first set of relevant augmented content based on the content information, the set of patterns, and the set of relationships.
11. The machine learning and inference system of claim 10, wherein, during a second invocation of the application specific augmentation system, the application specific augmentation system is operable to provide at least a portion of the first set of relevant augmented content as a new content information to the computing platform, and to compile a second set of relevant augmented content based at least in part on the new content information.
12. The machine learning and inference system of claim 11, wherein the computing platform is operable to infer a new set of patterns, and a new set of relationships between patterns of the new set of patterns based at least in part on the new content information, and wherein the application specific augmentation system is operable to compile the second set of relevant augmented content based at least in part on one or more of the new content information, the content information, the set of patterns, the set of relationships, the new set of patterns, and the new set of relationships.
13. The machine learning and inference system of claim 12, wherein the application specific augmentation system is further operable to share one or more of the first set of relevant augmented content, the second set of relevant augmented content, the set of patterns, the set of relationships, the new set of patterns, and the new set of relationships between a first process and one or more other processes.
14. The machine learning and inference system of claim 2, wherein the application specific augmentation system is further operable to enrich the one or more sets of relevant augmented content over time, and wherein the computing platform is further operable (i) to learn from the one or more sets of relevant augmented content, the set of patterns, and the set of relationships, (ii) to save the one or more sets of relevant augmented content, the set of patterns, and the set of relationships, and (iii) to refine over time the one or more sets of relevant augmented content, the set of patterns, and the set of relationships.
15. A machine learning and inference system for use within an information processing system, the machine learning and inference system comprising:
an application specific augmentation system comprising a computing platform and a relevant augmented content subsystem, the application specific augmentation system operable (i) to access a specific content information from a public data source and a private data source, the public data source including at least one of structured and unstructured publicly available content, the private data source including at least one of structured and unstructured private content, and (ii) to provide the specific content information to the computing platform, the computing platform operable to infer a set of patterns and a set of relationships between patterns of the set of patterns based on the specific content information, the relevant augmented content subsystem operable to compile a first set of relevant augmented content based on the specific content information, the set of patterns, and the set of relationships, wherein the specific content information is selected from any one of the group consisting of financial content information, medical content information, health content information, business content information, manufacturing content information, and social media content information.
16. The machine learning and inference system of claim 15, wherein the application specific augmentation system is further operable to enrich the first set of relevant augmented content over time.
17. The machine learning and inference system of claim 16, wherein the computing platform is further operable (i) to learn from one or more of the first set of relevant augmented content, the first set of patterns, and the first set of relationships, (ii) to save one or more of the first set of relevant augmented content, the first set of patterns, and the first set of relationships, and (iii) to refine over time one or more of the first set of relevant augmented content, the first set of patterns, and the first set of relationships.
18. The machine learning and inference system of claim 17, wherein the application specific augmentation system is further operable to reason about the specific content information using the set of patterns and the set of relationships, and to build a knowledge graph based on the set of patterns and the set of relationships, wherein a user is able to interact with the specific content information or the first set of relevant augmented content by traversing the knowledge graph.
19. A machine learning and inference system for use within an information processing system, the machine learning and inference system comprising:
a computing platform, wherein the machine learning and inference system is operable (i) to access content information from a public data source and a private data source, the public data source including at least one of structured and unstructured publicly available content, the private data source including at least one of structured and unstructured private content, and (ii) to provide the content information to the computing platform, the computing platform is operable to infer a set of patterns and a set of relationships between patterns of the set of patterns based on the content information, and wherein the machine learning and inference system is further operable to reason about the content information using the set of patterns and the set of relationships; and
a relevant augmented content subsystem operable to compile a set of relevant augmented content based at least in part on the content information, the set of patterns, and the set of relationships.
20. The machine learning and inference system of claim 19, wherein the computing platform is further operable (i) to learn from one or more of the set of relevant augmented content, the set of patterns, and the set of relationships, (ii) to save one or more of the set of relevant augmented content, the set of patterns, and the set of relationships, and (iii) to refine over time one or more of the set of relevant augmented content, the set of patterns, and the set of relationships.
US16/558,263 2011-09-23 2019-09-02 Machine learning and inference system Abandoned US20190384800A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/558,263 US20190384800A1 (en) 2011-09-23 2019-09-02 Machine learning and inference system

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US201161626253P 2011-09-23 2011-09-23
US201261743047P 2012-08-24 2012-08-24
US13/573,564 US9275148B1 (en) 2011-09-23 2012-09-24 System and method for augmented browsing and knowledge discovery
US201361801359P 2013-03-15 2013-03-15
US201361880175P 2013-09-19 2013-09-19
US14/217,462 US9632654B1 (en) 2013-03-15 2014-03-17 System and method for augmented knowledge discovery
US201414491977A 2014-09-19 2014-09-19
US15/057,052 US9817906B2 (en) 2011-09-23 2016-02-29 System for knowledge discovery
US15/495,977 US10402502B2 (en) 2011-09-23 2017-04-25 Knowledge discovery system
US16/558,263 US20190384800A1 (en) 2011-09-23 2019-09-02 Machine learning and inference system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/495,977 Continuation US10402502B2 (en) 2011-09-23 2017-04-25 Knowledge discovery system

Publications (1)

Publication Number Publication Date
US20190384800A1 true US20190384800A1 (en) 2019-12-19

Family

ID=59497717

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/495,977 Expired - Fee Related US10402502B2 (en) 2011-09-23 2017-04-25 Knowledge discovery system
US16/558,263 Abandoned US20190384800A1 (en) 2011-09-23 2019-09-02 Machine learning and inference system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/495,977 Expired - Fee Related US10402502B2 (en) 2011-09-23 2017-04-25 Knowledge discovery system

Country Status (1)

Country Link
US (2) US10402502B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506930A (en) * 2020-12-15 2021-03-16 北京三维天地科技股份有限公司 Data insight platform based on machine learning technology
US11514336B2 (en) 2020-05-06 2022-11-29 Morgan Stanley Services Group Inc. Automated knowledge base
WO2023152923A1 (en) * 2022-02-10 2023-08-17 富士通株式会社 Information processing program, information processing device, and information processing method

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10021247B2 (en) * 2013-11-14 2018-07-10 Wells Fargo Bank, N.A. Call center interface
US11397782B2 (en) * 2014-12-08 2022-07-26 Yahoo Assets Llc Method and system for providing interaction driven electronic social experience
US10284633B1 (en) * 2015-11-11 2019-05-07 Berryville Holdings, LLC Systems and methods for implementing an on-demand computing network environment utilizing a bridge device
US10332317B2 (en) * 2016-10-25 2019-06-25 Microsoft Technology Licensing, Llc Virtual reality and cross-device experiences
US10872107B2 (en) * 2017-06-30 2020-12-22 Keysight Technologies, Inc. Document search system for specialized technical documents
US10977242B2 (en) 2017-09-07 2021-04-13 Atlassian Pty Ltd. Systems and methods for managing designated content items
US10242320B1 (en) * 2018-04-19 2019-03-26 Maana, Inc. Machine assisted learning of entities
US11132755B2 (en) * 2018-10-30 2021-09-28 International Business Machines Corporation Extracting, deriving, and using legal matter semantics to generate e-discovery queries in an e-discovery system
CN111025644A (en) * 2019-12-24 2020-04-17 塔普翊海(上海)智能科技有限公司 Projection screen device of double-free-form-surface reflective AR glasses
US11631031B2 (en) 2020-02-20 2023-04-18 Bank Of America Corporation Automated model generation platform for recursive model building
US20220027331A1 (en) * 2020-07-23 2022-01-27 International Business Machines Corporation Cross-Environment Event Correlation Using Domain-Space Exploration and Machine Learning Techniques
WO2022258198A1 (en) * 2021-06-11 2022-12-15 Brainlab Ag Analysis and augmentation of display data

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6965914B2 (en) * 2000-10-27 2005-11-15 Eric Morgan Dowling Negotiated wireless peripheral systems
US7373336B2 (en) * 2002-06-10 2008-05-13 Koninklijke Philips Electronics N.V. Content augmentation based on personal profiles
US8122014B2 (en) * 2003-07-02 2012-02-21 Vibrant Media, Inc. Layered augmentation for web content
US7257585B2 (en) * 2003-07-02 2007-08-14 Vibrant Media Limited Method and system for augmenting web content
US7853558B2 (en) * 2007-11-09 2010-12-14 Vibrant Media, Inc. Intelligent augmentation of media content
US8166189B1 (en) * 2008-03-25 2012-04-24 Sprint Communications Company L.P. Click stream insertions
WO2011008771A1 (en) * 2009-07-14 2011-01-20 Vibrant Media, Inc. Systems and methods for providing keyword related search results in augmented content for text on a web page
US9348935B2 (en) * 2010-06-29 2016-05-24 Vibrant Media, Inc. Systems and methods for augmenting a keyword of a web page with video content
US20120278825A1 (en) * 2011-04-30 2012-11-01 Samsung Electronics Co., Ltd. Crowd sourcing
US8739223B2 (en) * 2012-04-25 2014-05-27 Electronics And Telecommunications Research Institute Method and apparatus for processing augmented broadcast content using augmentation region information

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11514336B2 (en) 2020-05-06 2022-11-29 Morgan Stanley Services Group Inc. Automated knowledge base
US11922327B2 (en) 2020-05-06 2024-03-05 Morgan Stanley Services Group Inc. Automated knowledge base
CN112506930A (en) * 2020-12-15 2021-03-16 北京三维天地科技股份有限公司 Data insight platform based on machine learning technology
WO2023152923A1 (en) * 2022-02-10 2023-08-17 富士通株式会社 Information processing program, information processing device, and information processing method

Also Published As

Publication number Publication date
US10402502B2 (en) 2019-09-03
US20170228239A1 (en) 2017-08-10

Similar Documents

Publication Publication Date Title
US10402502B2 (en) Knowledge discovery system
US9817906B2 (en) System for knowledge discovery
US9632654B1 (en) System and method for augmented knowledge discovery
US20220075812A1 (en) Using content
US20220051289A1 (en) Tagging and ranking content
US11514114B2 (en) User-centric contextual information for browser
US20200117658A1 (en) Techniques for semantic searching
US9189551B2 (en) Method and apparatus for category based navigation
US20220309037A1 (en) Dynamic presentation of searchable contextual actions and data
US11474843B2 (en) AI-driven human-computer interface for associating low-level content with high-level activities using topics as an abstraction
US20200004890A1 (en) Personalized artificial intelligence and natural language models based upon user-defined semantic context and activities
Chuprina et al. Using ontology-based adaptable scientific visualization and cognitive graphics tools to transform traditional information systems into intelligent systems
US11449764B2 (en) AI-synthesized application for presenting activity-specific UI of activity-specific content
WO2020005569A1 (en) Framework and store for user-level customizable activity-based applications for handling and managing data from various sources
Marchenkov et al. Smart museum of everyday life history in Petrozavodsk State University: Software design and implementation of the semantic layer
US11354581B2 (en) AI-driven human-computer interface for presenting activity-specific views of activity-specific content for multiple activities
Eichmann et al. Orchard: Exploring multivariate heterogeneous networks on mobile phones
AU2012283928B2 (en) Method and apparatus for category based navigation
WO2023159650A1 (en) Mining and visualizing related topics in knowledge base
Rahman Amplifying domain expertise in medical data pipelines
Alarayedh Design and implementation of search awareness cues in explicit collaborative information seeking
Dessì et al. Smart spaces for adaptive information integration in bioinformatics
Laqua Just-in-time Information Interfaces: A new Paradigm for Information Discovery and Exploration
WO2023205204A1 (en) Classification process systems and methods
Bostandjiev Bridging Social and Semantic Computing–Design and Evaluation of User Interfaces for Hybrid Systems

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION