US20150100562A1 - Contextual insights and exploration - Google Patents


Info

Publication number
US20150100562A1
US20150100562A1 (application US 14/508,431)
Authority
US
United States
Prior art keywords
results
query
context
attention
request
Prior art date
Legal status
Abandoned
Application number
US14/508,431
Inventor
Bernhard S.J. Kohlmeier
Pradeep Chilakamarri
Kristen M. Saad
Patrick Pantel
Ariel Damian Fuxman
Lorrissa Reyes
Ashok Kumar Chandra
Dhyanesh Narayanan
Bo Zhao
Current Assignee
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority claimed from U.S. Provisional Application No. 61/887,954
Application filed by Microsoft Corp
Application US 14/508,431 published as US20150100562A1
Assigned to MICROSOFT CORPORATION. Assignors: CHANDRA, ASHOK KUMAR; ZHAO, BO; REYES, Lorrissa; FUXMAN, ARIEL DAMIAN; SAAD, KRISTEN M.; KOHLMEIER, Bernhard S.J.; PANTEL, PATRICK; CHILAKAMARRI, Pradeep; NARAYANAN, Dhyanesh
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignor: MICROSOFT CORPORATION
Application status: Abandoned

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING; COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
                • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
                  • G06F 3/04842: Selection of a displayed object
          • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
            • G06F 16/20: Information retrieval of structured data, e.g. relational data
              • G06F 16/24: Querying
                • G06F 16/245: Query processing
                  • G06F 16/2457: Query processing with adaptation to user needs
                    • G06F 16/24578: Query processing with adaptation to user needs using ranking
            • G06F 16/30: Information retrieval of unstructured textual data
              • G06F 16/33: Querying
                • G06F 16/332: Query formulation
                  • G06F 16/3322: Query formulation using system suggestions
            • G06F 16/90: Details of database functions independent of the retrieved data types
              • G06F 16/95: Retrieval from the web
                • G06F 16/953: Querying, e.g. by the use of web search engines
                  • G06F 16/9535: Search customisation based on user profiles and personalisation
          • G06F 17/3053
          • G06F 17/30867
        • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00: Computer systems based on biological models
            • G06N 3/02: Computer systems based on biological models using neural network models
              • G06N 3/04: Architectures, e.g. interconnection topology
          • G06N 7/00: Computer systems based on specific mathematical models
            • G06N 7/005: Probabilistic networks
          • G06N 20/00: Machine learning

Abstract

Techniques and systems are presented for providing “contextual insights,” or information that is tailored to the context of the content a user is consuming or authoring. Given a request for information about a topic, which may be indicated by a user gesture in an application, one or more queries to search services may be formulated without requiring entry of a search query directly by a user. Moreover, techniques and systems may leverage the context of the content the user is consuming or authoring, as well as user, device, and application metadata, to construct the queries and to organize and filter the results into relevant contextual insights.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application claims the benefit of U.S. Provisional Application Ser. No. 61/887,954, filed Oct. 7, 2013.
  • BACKGROUND
  • Applications for creating or consuming content include reader applications and productivity applications such as note-taking applications, word processors, spreadsheet programs, and presentation programs. Users of these applications often research topics and rely on Internet search services to find additional information related to the content being created or consumed. To research a topic, a user will often leave the application and go to a web browser to perform a search and review the results.
  • BRIEF SUMMARY
  • Techniques and systems are presented for providing “contextual insights,” or information that is tailored to the context of the content a user is consuming or authoring.
  • Given a request for information about a topic from within an application for creating or consuming content, one or more queries to search services may be formulated for the application for creating or consuming content without requiring entry of a search query directly by a user. Moreover, techniques and systems may leverage the context of the content the user is consuming or authoring, as well as user, device, and application metadata, to construct the queries and to organize and filter the results into relevant contextual insights.
  • A method for facilitating contextual insights can include: determining a focus of attention for contextual insights from information provided with a request for contextual insights with respect to at least some text; performing context analysis to determine query terms from context provided with the request; formulating at least one query using one or more of the query terms; initiating a search by sending the at least one query to at least one search service; and organizing and filtering results received from the at least one search service according to at least some of the context.
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A shows an example operating environment in which certain implementations of systems and techniques for contextual insights may be carried out.
  • FIGS. 1B-1E show example interactions indicating an initial selection of text for contextual insights.
  • FIG. 2 illustrates an example process flow for contextual insights and exploration.
  • FIG. 3 shows an example interface displaying contextual insights.
  • FIG. 4 shows a block diagram illustrating components of a computing device or system used in some implementations of the described contextual insights service.
  • FIG. 5 illustrates an example system architecture in which an implementation of techniques for contextual insights may be carried out.
  • DETAILED DESCRIPTION
  • Techniques and systems are presented for providing “contextual insights,” or information that is tailored to the context of the content a user is consuming or authoring. The contextual insights can include, without limitation, people/contact information, documents, meeting information, and advertisements that relate to a determined focus of attention (e.g., determined topics of interest) for the user.
  • Given a request for information about a topic, which may be a direct or indirect request by a user of an application for creating or consuming content, one or more queries to search services may be formulated by the application without requiring entry of a search query directly by a user. Moreover, techniques and systems may leverage the context of the content the user is consuming or authoring, as well as other context of the user, device, and application metadata, to construct the queries and to organize and filter the results into relevant contextual insights.
  • Advantageously, the techniques and systems described herein may improve a user's workflow and/or productivity while consuming or authoring content in an application for creating or consuming content. When a user wants to research a topic while in the application, the user does not need to move to a separate application to conduct a search. The techniques enable users to immerse themselves in a topic without having to leave the application. In addition, context within (or accessible by) the application for creating or consuming content can be used to provide relevant results and may reduce the number of times a user must narrow or modify a search query to achieve a relevant result.
  • A “query” is a request for information from a data storage system. A query is a command that specifies to a data storage system the “query terms” desired by the requestor and the terms' relationship to one another. For example, if the data storage system includes a web search engine, such as those available from a variety of search services, a query might contain the query terms “Russia” and “Syria” and indicate that the relationship between the two query terms is conjunctive (i.e., “AND”). In response, the search service may return only content having both words somewhere in the content. As frequently used here, a query is a command requesting additional content or information from a search service, where the content or information is associated with specific terms (e.g., words) in the query. A query is sometimes written in a special formatting language that is interpretable by a search service.
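The conjunctive query described above can be sketched as follows. The helper function and the "AND" query syntax are assumptions modeled on common web search conventions; the patent does not define a particular formatting language:

```python
# Illustrative only: join query terms with a boolean operator,
# quoting multi-word phrases so they are matched as units.
def formulate_query(terms, operator="AND"):
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return f" {operator} ".join(quoted)

print(formulate_query(["Russia", "Syria"]))  # Russia AND Syria
```

A search service receiving this query would return only content containing both terms.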
  • The queries may be shaped by the user's “context,” which may include both content surrounding the user's indicated interest and additional factors determined from attributes of the user, device, or application. “Surrounding content” refers to text or other content in a position before and/or after the user's indicated interest (e.g., selection). By organizing and filtering the results received from search services, information is fashioned into contextual insights that are tailored to a user's particular usage context. Implementations of the described systems and techniques may not only provide more relevant related content, but may do so without the interruptions associated with a web search using a separate application such as a web browser, and hence may improve user productivity.
  • As an example, consider a user who is reading an article on President Obama's 2013 address to the nation on the Syrian crisis. While authoring or reading an article in an application for creating or consuming content that incorporates the described techniques for contextual insights, the user may highlight the term “Russia” and request contextual insights (via a separate command or as a result of the highlighting action). The application can return information for the contextual insights that may include articles from an online encyclopedia such as “Russia”, “Russia's role in the Syrian civil war”, and “Russia-Syrian relations”. If the user instead highlights a different term, “weapons”, the returned information may be an article titled “Syria and weapons of mass destruction.” The returned information is dependent both on the user's indicated interest and on the context of the document that the user is reading.
  • Certain implementations utilize a contextual insights service. The contextual insights service includes functionality and logic for producing “contextual insights,” which includes results that are related through context and not just from a conventional search. In one such implementation, the portion of the text indicated by the user, along with additional text around the portion of the text selected by the user, is sent to the contextual insights service. The contextual insights service can perform a determination as to the intended item or topic for search. The contextual insights service can provide one or more proposed terms found in the associated text that forms the context of the user's selection, as well as determine additional query terms or limitations that take account of contextual factors relating to the user, device, or application. After return of the search results from one or more search services, relevant results may be organized and filtered (including sorting and grouping) based on context and other factors.
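The stages just described (smart selection, context analysis, query formulation, search, and result post-processing) can be sketched end to end. Every function body below is a toy stand-in invented for illustration; the patent does not disclose these implementations:

```python
def smart_selection(selection, context):
    # Toy stand-in: expand the selection to a longer known phrase found in context.
    known_phrases = ("San Francisco 49ers", "Syrian civil war")
    for phrase in known_phrases:
        if selection in phrase and phrase in context:
            return phrase
    return selection

def context_analysis(focus, context):
    # Toy stand-in: pair the focus of attention with one salient context word.
    salient = [w for w in ("Syria", "Russia", "weapons") if w in context and w != focus]
    return [focus] + salient[:1]

def contextual_insights(selection, context, search_service):
    focus = smart_selection(selection, context)
    query = " AND ".join(context_analysis(focus, context))
    results = search_service(query)
    # Result post-processing (toy): keep results mentioning the focus of attention.
    return [r for r in results if focus.lower() in r.lower()]
```

In a real implementation each stage would be a separate service component, as in FIG. 1A, rather than a local function call.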
  • In some embodiments, techniques may be iteratively applied to progressively improve the relevance of contextual insights. Multiple modes of interacting with the contextual insights may be supported.
  • FIG. 1A shows an example operating environment in which certain implementations of systems and techniques for contextual insights may be carried out. The example operating environment in FIG. 1A may include a client device 100, user 101, application 102, contextual insights component 105, contextual insights service 110, and one or more search services 120.
  • Client device 100 may be a general-purpose device that has the ability to run one or more applications. The client device 100 may be, but is not limited to, a personal computer, a laptop computer, a desktop computer, a tablet computer, a reader, a mobile device, a personal digital assistant, a smart phone, a gaming device or console, a wearable computer, a wearable computer with an optical head-mounted display, computer watch, or a smart television.
  • Application 102 may be a program for creating or consuming content. Example applications for creating or consuming content include word processing applications such as MICROSOFT WORD; email applications; layout applications; note-taking applications such as MICROSOFT ONENOTE, EVERNOTE, and GOOGLE KEEP; presentation applications; and reader applications such as GOOGLE READER, APPLE iBooks, ACROBAT eBook Reader, AMAZON KINDLE READER, and MICROSOFT Reader, including those available on dedicated hardware readers such as the AMAZON KINDLE.
  • Contextual insights component 105 may be integrated with application 102 as an inherent feature of application 102 or as a plug-in or extension for an existing application 102 to provide the contextual insights feature. Although primarily described herein as being incorporated with application 102 at the client device 100, contextual insights component 105 may, in some cases, be available through a separate device from the client device 100.
  • Contextual insights component 105 facilitates the interaction between the application 102 and contextual insights service 110, for example through an application programming interface (API) of the contextual insights service 110.
  • An API is generally a set of programming instructions and standards for enabling two or more applications to communicate with each other and is commonly implemented as a set of Hypertext Transfer Protocol (HTTP) request messages and a specified format or structure for response messages according to a REST (Representational state transfer) or SOAP (Simple Object Access Protocol) architecture.
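Such an API call might be carried as an HTTP request message of the kind described above. The endpoint path and JSON field names below are hypothetical, assumed purely for illustration:

```python
import json

def build_request_message(host, path, payload):
    """Compose a REST-style HTTP POST message with a JSON body."""
    body = json.dumps(payload)
    return (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )

message = build_request_message(
    "insights.example.com",
    "/v1/contextual-insights",  # hypothetical endpoint
    {"selection": "Russia", "context": "address on the Syrian crisis"},
)
```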
  • In response to receiving particular user interactions with the client device 100 by the user 101 of application 102, the contextual insights component 105 may facilitate a call (or invocation) of a contextual insights service 110 using the API of the contextual insights service 110. For example, the contextual insights component 105 sends a request 130 for contextual insights to the contextual insights service 110 so that contextual insights service 110 may execute one or more operations to provide the contextual insights 135, including those described with respect to FIG. 2. Contextual insights component 105 may also, in some cases, facilitate the presentation of contextual insights 135 for application 102, for example, by rendering the contextual insights 135 in a user interface.
  • Contextual insights service 110 receives the request 130 for contextual insights and generates contextual insights 135. The request 130 may contain text, text markup, and/or other usage context from application 102. The contextual insights service 110 may process the request via one or more components, shown in FIG. 1A as smart selection 131, context analysis 132, and query formulation 133. As part of its operations, contextual insights service 110 may direct one or more requests to one or more search service(s) 120, and may interpret or manipulate the results received from search service(s) 120 in a post-processing component 134 before returning contextual insights 135 to client device 100 via contextual insights component 105.
  • For example, upon receipt of request 130, the contextual insights service 110 can perform a determination of the user's intended content selection based on information provided by the contextual insights component 105, analyze the context of the content selection with respect both to other content the user is perusing and also to various device and user metadata, and construct and send one or more queries for requesting a search from one or more search services 120. These operational aspects of the contextual insights service, including result post-processing, are discussed in more detail with respect to FIG. 2.
  • In some implementations, contextual insights service 110 may determine that contextual insights can be further optimized after result post-processing component 134 activities. Another iteration of the processing stages of smart selection 131, context analysis 132, and/or query formulation 133 might be executed to produce improved insights through the modification of query terms.
  • It should be noted that, while sub-components of contextual insights service 110 are depicted in FIG. 1A (i.e., smart selection 131, context analysis 132, query formulation 133, and result post-processing 134), this arrangement of the contextual insights service 110 into components is exemplary only; other physical and logical arrangements of a contextual insights service capable of performing the operational aspects of the disclosed techniques are possible. Further, it should be noted that aspects of a contextual insights service 110 may be implemented on more than one device. In some cases, a contextual insights service 110 may include components located on user devices and on one or more services implemented on separate physical devices.
  • Search service(s) 120 may take myriad forms. A familiar kind of search service is a web search engine such as, but not limited to, MICROSOFT BING and GOOGLE. However, any service or data storage system having content that may be queried for content appropriate to contextual insights may be a search service 120. A search service may also be built to optimize for the queries and context patterns in an application so that retrieval of information may be further focused and/or improved. Sometimes, an “intranet” search engine implemented on an internal or private network may be queried as a search service 120; an example is Microsoft FAST Search. A custom company knowledge-base or knowledge management system, if accessible through a query, may be a search service 120. In some implementations, a custom database implemented in a relational database system (such as MICROSOFT SQL SERVER) that may have the capability to do textual information lookup may be a search service 120. A search service 120 may access information such as a structured file in Extensible Markup Language (XML) format, or even a text file having a list of entries. Queries by the contextual insights service 110 to the search service(s) 120 may be performed in some cases via API.
  • A request for contextual insights 130 may contain a variety of cues for the contextual insights service 110 that are relevant to generating contextual insights. The contextual insights component 105 generates and sends the request 130 to the contextual insights service 110 based on an indication by a user 101.
  • The request for contextual insights 130 may be initiated by a user 101 interacting with an application 102 on client device 100. For example, content in the form of a document (including any format type document), article, picture (e.g., that may or may not undergo optical character recognition), book, and the like may be created or consumed (e.g., read) by a user 101 via the application 102 running on the client device 100. A user may interact with the content and/or an interface to application 102 to indicate a request for contextual insights 130 is desired. Contextual insights component 105 may interact with application 102, client device 100, and even other applications or user-specific resources to generate and send the request 130 to the contextual insights service 110 in response to the indication by the user 101 for the request 130.
  • As one example of an indication of a request for contextual insights 130, a user can indicate an initial selection of text for contextual insights. In the application 102 containing text or other readily searchable content, a user may indicate an interest in certain text in, for example, a document, email, notes taken in a note-taking application, e-book, or other electronic content. The indication of interest does not require the entering of search terms into a search field. Of course, in some implementations, a search box may be available as a tool in the application so that a user may enter terms or a natural language expression indicating a topic of interest.
  • Interaction by the user 101 indicating the initial text selection may take myriad forms. The input indicating an initial text selection can include, but is not limited to, a verbal selection (of one or more words or phrases), contact or contact-less gestural selection, touch selection (finger or stylus), swipe selection, cursor selection, encircling using a stylus/pen, or any other available technique that can be detected by the client device 100 (via a user interface system of the device). In some implementations, contextual insights may initiate without an active selection by a user.
  • The user 101 may also, for instance, utilize a device which is capable of detecting eye movements. In this scenario, the device detects that the user's eye lingers on a particular portion of content for a length of time, indicating the user's interest in selecting the content for contextual insights. A computing device capable of detecting voice commands can be used to recognize a spoken command to initially select content for contextual insights. It should also be noted that many other user interface elements, as diverse as drop-down menus, buttons, search box, or right-click context menus, may signify that the user has set an initial text selection. Further, it can be understood that an initial text selection may involve some or all of the text available on the document, page, or window.
  • FIGS. 1B-1E show example interactions indicating an initial selection of text for contextual insights. The contextual insights component provides the selection as well as context including content before and/or after the selection as part of the request. Therefore, the indication by the user of text for contextual insight and exploration may be of varying specificity.
  • As one example, in a graphical user interface 150 of application 102 in which text is depicted, the user may select a word (or phrase) 151. The selection of a word (or phrase) may be a swipe gesture 152 on a touch enabled display screen such as illustrated in FIG. 1B. Other gestures such as insertion point, tap, double tap, and pinch could be used. Of course, non-touch selection of a word (as well as cursor selection of the word) may be used as an indication. In the example shown in FIG. 1C, a cursor 153 may be used to indicate, for example, via a mouse click, a point on the content surface of the user interface 150. The cursor 153 may be placed within a term without highlighting a word or words. A similar selection may be conducted by touch (e.g., using a finger or pen/stylus) or even by eye gaze detection. This type of selection may be referred to as a selection of a region.
  • Just as less than a full word can be indicated by the user as the initial selection of text, a user may select more than a single word using any of the methods of user interaction described above. In some scenarios an initial selection may include a contiguous series of words (a phrase). For example, multiple words may be “marked” by the user using interface techniques such as illustrated in FIG. 1D, where a cursor 154 is shown selecting multiple words 155 of a sentence. Thus, as illustrated by the example scenarios, the user is not limited to selecting a particular amount of text.
  • In some scenarios, multiple, non-contiguous words or phrases may be selected by highlighting, circling or underlining with a digital stylus. Multiple words or phrases of interest also may be prioritized by the user. For example, one word or phrase may be marked as the primary text selection of interest, and other related words may be marked as supporting words or phrases which are of secondary, but related interest. For example, using interface techniques such as illustrated in FIG. 1E, several words 156 may be indicated on user interface 150.
  • Furthermore, even a scenario in which the user selects no specific words or phrases for the contextual information lookup is envisioned. In one such scenario, the input for initial text selection may be discerned from passive, rather than active, interactions by the user. For example, while the user is scrolling through the text rendered by an application, a paragraph on which the user lingers for a significant time might constitute an initial text selection. As an additional example, if the client device allows the user's eye movements to be tracked, words or phrases on which the user's eye lingers may form the input for initial text selection. In yet another example, the entire document, window, or page may be considered to be selected based on a passive interaction.
  • Returning to FIG. 1A, in some cases, additional information may be sent as part of the request 130 containing the user's indicated initial text selection. The additional information may be used by the contextual insights service 110 to improve the relevance or clarity of searches directed by the initial text selection. The additional information may vary by embodiment and scenario, but in some embodiments will include such information as the text surrounding the selection (which can also be referred to as an expanded portion of text, for example, a certain number of symbols or characters before and/or after the selection), information about the application in which the content is displayed, information about the device on which the application runs, and information about the specific user. In some cases, this information may be referred to herein as “application metadata”, “device metadata”, and “user metadata,” respectively.
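A request 130 carrying the selection plus this application, device, and user metadata might be structured as follows. The field names and values are illustrative assumptions; the patent does not specify a wire format:

```python
import json

# Hypothetical shape of a contextual-insights request ("request 130").
request_130 = {
    "selection": "Russia",
    "surrounding_text": "President Obama's 2013 address to the nation on the Syrian crisis",
    "application_metadata": {"application": "word processor", "document_language": "en"},
    "device_metadata": {"form_factor": "tablet", "locale": "en-US"},
    "user_metadata": {"recent_topics": ["world news"]},
}
serialized = json.dumps(request_130)
```

The surrounding text gives the service context for smart selection and context analysis, while the metadata sections let it tailor queries and result filtering to the user, device, and application.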
  • Once contextual insights service 110 has processed the user's selection (and context) and has received and processed query results, contextual insights service 110 can return contextual insights 135 to the user. In some embodiments, contextual insights component 105 may operate to render or facilitate the application 102 in rendering or displaying one or more user interfaces to show the contextual insights to the user on a client device 100.
  • FIG. 2 illustrates an example process flow for contextual insights and exploration. A contextual insights service 110, such as described with respect to FIG. 1A, may implement the process.
  • Referring to FIG. 2, an indication of a request for contextual insights with respect to at least some text may be received (201). The request can include a selection such as described with respect to FIGS. 1B-1E and context including content before and/or after the selection.
  • The focus of attention for the contextual insights may be determined from information provided with the request (202), for example by the smart selection component 131 of contextual insights service 110 of FIG. 1A. The “focus of attention” refers to the concept (or “topic”) that the user is understood to want to explore and gain contextual insights about.
  • Sometimes a user's selection of text may, on its own, sufficiently indicate the focus of attention. However, sometimes the user may improperly or incompletely indicate a focus of attention, for example by indicating a word that is near to but not actually the focus of attention, or by indicating only one word of a phrase that consists of multiple words. As a specific example, if the user selects the word “San” in the sentence, “The San Francisco 49ers scored big in last Monday's game,” the true focus of attention is likely to be “San Francisco 49ers” and not “San”; hence, the focus of attention may need to be adjusted from the selection indicated with the request.
  • In cases where the user's indication of the focus of attention is incomplete or improper, the intended focus of attention may sometimes be predictable. A variety of techniques may be used to predict candidates for the user's intended focus of attention based on a given user selection and the surrounding text or content. These processes may include, for example, iterative selection expansion, character n-gram probabilities, term frequency-inverse document frequency (tf-idf) information for terms, and capitalization properties. In some implementations, more than one technique may be used to select one or more candidate foci of attention. Candidate foci of attention determined from these multifarious techniques may then be scored and ranked by the contextual insights service 110, or smart selection component 131 thereof, to determine one or more likely foci of attention from among multiple possibilities.
  • Smart selection component 131 may iteratively determine for every current selection whether the selection should be expanded by one character or word to the right or to the left. In some implementations, smart selection component 131 may rank or score candidates for selection using “anchor texts” that may be obtained from an online encyclopedia or knowledge-base. “Anchor texts,” sometimes known as “link titles,” are text descriptions of hyperlinks. Anchor texts may give the user relevant descriptive or contextual information about the content at the hyperlink's destination. Anchor texts form a source of words and phrases that are positively correlated with one another as related concepts. Examples of online encyclopedias and knowledge bases are MICROSOFT ENCARTA, ENCYCLOPEDIA BRITANNICA, and WIKIPEDIA.
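One way to sketch such iterative selection expansion is shown below. The anchor-phrase set and the word-level expansion step are illustrative assumptions; a production system might expand character by character and consult a full knowledge-base index of anchor texts rather than a small in-memory set.

```python
# Hypothetical set of anchor-text phrases harvested from an online knowledge base.
ANCHOR_PHRASES = {"san francisco", "san francisco 49ers", "monday night football"}

def expand_selection(words, start, end):
    """Grow the selection [start, end) one word at a time, to the left or the
    right, while the expanded span still matches a known anchor-text phrase."""
    best = (start, end)
    changed = True
    while changed:
        changed = False
        for s, e in ((start - 1, end), (start, end + 1)):
            if 0 <= s and e <= len(words):
                phrase = " ".join(words[s:e]).lower()
                if phrase in ANCHOR_PHRASES:
                    start, end = s, e
                    best = (s, e)
                    changed = True
                    break
    return " ".join(words[best[0]:best[1]])

sentence = "The San Francisco 49ers scored big in last Monday's game".split()
# The user selected only "San" (word index 1).
print(expand_selection(sentence, 1, 2))  # San Francisco 49ers
```

The expansion stops as soon as neither one-word extension matches a known phrase, which keeps the candidate anchored to the longest recognized concept around the user's selection.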
  • Character n-gram probabilities are based on n-gram models, a type of probabilistic language model for predicting the next item in a sequence of characters, phonemes, syllables, or words. A character n-gram probability may allow prediction of the next character that will be typed based on a probability distribution derived from a training data set. In some cases, a smart selection component 131 may be trained using machine learning techniques via character n-gram probability data from anchor texts.
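A minimal character n-gram model of this kind might be trained as follows. The two-phrase training corpus is purely illustrative; as the passage notes, a real system would train on anchor texts at scale.

```python
from collections import Counter, defaultdict

def train_char_ngrams(corpus, n=3):
    """Count (history, next-char) pairs to estimate
    P(next char | previous n-1 chars) from training text."""
    counts = defaultdict(Counter)
    for text in corpus:
        for i in range(len(text) - n + 1):
            history, nxt = text[i:i + n - 1], text[i + n - 1]
            counts[history][nxt] += 1
    return counts

def next_char_probability(counts, history, char):
    """Maximum-likelihood probability of `char` following `history`."""
    total = sum(counts[history].values())
    return counts[history][char] / total if total else 0.0

model = train_char_ngrams(["san francisco", "san francisco 49ers"], n=3)
# In this tiny corpus, "sa" is always followed by "n".
print(next_char_probability(model, "sa", "n"))  # 1.0
```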
  • In some implementations, a smart selection component 131 may interact with or use available commercial or free cloud-based services providing n-gram probability information. An example of a cloud-based service is “Microsoft Web N-gram Services”. This service continually analyzes all content indexed by the MICROSOFT BING search engine. Similar services are available from GOOGLE's N-gram corpus. A cloud-based service may include the analysis of search engine logs for the words that internet users add or change to disambiguate their searches. Smart selection component 131 may interoperate with such a cloud-based service via API.
  • In some cases, tf-idf techniques may be used in a smart selection component 131. The tf-idf is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus. The tf-idf value increases in proportion to the number of times a term (e.g., a word or phrase) appears in a document, but is negatively weighted by the number of documents that contain the word in order to control for the fact that some words are generally more common than others. One way of using tf-idf techniques is by summing tf-idf values for each term in a candidate focus of attention.
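The term-summing approach might look like the following sketch, where tf-idf values are computed over a small hypothetical corpus; the tokenization and the unsmoothed idf formula are simplifying assumptions.

```python
import math

def tfidf_scores(documents):
    """Compute per-term tf-idf for each document in a small corpus."""
    n_docs = len(documents)
    tokenized = [doc.lower().split() for doc in documents]
    df = {}                                    # document frequency per term
    for tokens in tokenized:
        for term in set(tokens):
            df[term] = df.get(term, 0) + 1
    scores = []
    for tokens in tokenized:
        tf = {t: tokens.count(t) / len(tokens) for t in set(tokens)}
        scores.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return scores

def candidate_score(candidate_terms, doc_scores):
    """Score a candidate focus of attention by summing its terms' tf-idf."""
    return sum(doc_scores.get(t, 0.0) for t in candidate_terms)

docs = ["the 49ers scored big", "the weather was mild", "the game ran long"]
scores = tfidf_scores(docs)
# "the" occurs in every document, so its idf (and thus its tf-idf) is zero,
# while the distinctive terms "49ers" and "scored" contribute positive weight.
```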
  • In some cases, capitalization properties of terms may be used to identify nouns or noun phrases for focus of attention candidates. Capitalization properties may be used both to rank the importance of certain terms and as further scoring filters when final rankings are calculated. Other implementations may use dictionary-based techniques to additionally identify a large dictionary of known, named entities, such as the titles of albums, songs, movies, and TV shows. In some cases, a natural language analyzer can be used to identify the part of speech of words, term boundaries, and constituents (noun phrases, verb phrases, etc.). It should be noted that the techniques described for predicting the focus of attention are examples and are not intended to be limiting.
  • Scoring data from the various described techniques, and others, may be used to produce candidate foci of attention from a user-indicated focus of attention. The scores may be assembled by the smart selection component 131, and scores assigned by one or more of these techniques may be compiled, averaged, and weighted. The scores may be further modified by the capitalization and stop-word properties of the words in the candidate focus of attention (stop-words are semantically irrelevant words, such as the articles “a,” “an,” and “the”). A final score and ranking for each candidate focus of attention may be calculated which may be used to find the top candidate focus (or foci) of attention for a given user selection.
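A compilation step along these lines might be sketched as follows. The per-technique weights, the 1.25 capitalization boost, and the three-word stop-word list are illustrative assumptions, not values given by the description.

```python
STOP_WORDS = {"a", "an", "the"}  # illustrative stop-word list

def combined_score(candidate, technique_scores, weights):
    """Weighted average of per-technique scores for a candidate focus of
    attention, modified by stop-word and capitalization properties."""
    base = sum(weights[t] * s for t, s in technique_scores.items())
    base /= sum(weights[t] for t in technique_scores)
    words = candidate.split()
    if all(w.lower() in STOP_WORDS for w in words):
        return 0.0        # a candidate made only of stop-words is discarded
    if all(w[:1].isupper() for w in words):
        base *= 1.25      # boost consistently capitalized candidates
    return base

print(combined_score("San Francisco",
                     {"ngram": 0.8, "tfidf": 0.6},
                     {"ngram": 0.5, "tfidf": 0.5}))
```

Ranking the candidates by this combined score then yields the top focus (or foci) of attention for the selection.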
  • Accordingly, the initial text selection provided with the request may be referred to as a “user-indicated” focus of attention. In addition to an indication of the user-indicated focus of attention, the request can include a range of text or content before and/or after the user-indicated focus of attention. The user-indicated focus of attention may then be analyzed for expansion, contraction, or other manipulation to find the intended focus of attention, based on the rankings that emerge as the various predictive techniques are applied. One or more foci of attention may be chosen that differ from the user-indicated focus of attention.
  • Once one or more foci of attention are determined, context analysis may be performed to determine query terms for formulating a query (203). As part of the determination of query terms, query items including operators such as OR, NOT, and BOOST, as well as meta-information (e.g., derived from user metadata) such as the user's location (if available through privacy permissions), time of day, client device and the like may also be determined so as to facilitate the generation of the queries. Context analysis may identify representative terms in the context that can be used to query the search engine in conjunction with the focus of attention. Context analysis may be performed, for example, by a context analysis component 132 such as described with respect to FIG. 1A.
  • Here, context analysis is a technique by which a query to a search service (e.g., one or more of search services 120) may be refined to become more relevant to a particular user. Various forms of context may be analyzed, including, for example: the content of the article, document, e-book, or other electronic content a user is reading or manipulating (including techniques by which to analyze content for its contextual relationship to the focus of attention); application and device properties; and metadata associated with the client device user's identity, locality, environment, language, privacy settings, search history, interests, or access to computing resources. The use of these various forms of context with respect to query refinement will now be discussed.
  • The content of the article, document, e-book, or other electronic content with which a user is interacting is one possible aspect of the “context” that may refine a search query. For example, a user who selected “Russian Federation” as a focus of attention may be interested in different information about Russia when reading an article about the Syrian civil war than when reading an article about the Olympics. If context analysis of the article content were performed in this example, the query terms might be modified from “Russian Federation” (the user-indicated focus) to “Russian Federation involvement in Syrian civil war” or “Russian Federation 2014 Sochi Olympics,” respectively.
  • The electronic content surrounding the focus of attention may undergo context analysis to determine query terms in one or more of a variety of ways. In some cases, the entire document, article, or e-book may be analyzed for context to determine query terms. In some cases, the electronic content undergoing context analysis may be less than the entire document, article, or e-book. The amount and type of surrounding content analyzed for candidate context terms may vary according to application, content type, and other factors.
  • For example, the contextually analyzed content may be defined by a range of words, pages, or paragraphs surrounding the focus of attention. In an e-book, for example, the content for contextual analysis may be limited to only that portion of the e-book that the user has actually read, rather than the unread pages or chapters. In some cases, the content for contextual analysis may include the title, author, publication date, index, table of contents, bibliography, or other metadata about the electronic content. In some implementations, the contextual insights component 105 at the client may be used to determine and/or apply the rules for the amount of contextual content provided in a request to a contextual insights service.
  • Context analysis of an appropriate range of content surrounding the focus of attention may be conducted in some implementations by selecting candidate context terms from the surrounding content and analyzing them in relation to a focus of attention term. For example, a technique that scores candidate context terms independently of each other but in relation to a focus of attention term may be used. The technique may determine a score for each pair of focus-candidate context terms and then rank the scores.
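A pairwise scoring pass of this kind might be sketched as follows. The co-occurrence table is a hypothetical stand-in for relationship strengths that, as described below, a real system might obtain from query logs, web crawls, or an n-gram service; the 0.1 cutoff is likewise an assumption.

```python
# Hypothetical focus/context co-occurrence strengths (stand-in for query-log
# or anchor-text statistics).
CO_OCCURRENCE = {
    ("russian federation", "syria"): 0.72,
    ("russian federation", "olympics"): 0.64,
    ("russian federation", "yesterday"): 0.05,
}

def rank_context_terms(focus, candidates):
    """Score each candidate context term independently, but in relation to
    the focus-of-attention term, then rank and filter by strength."""
    scored = [(CO_OCCURRENCE.get((focus, c), 0.0), c) for c in candidates]
    return [c for score, c in sorted(scored, reverse=True) if score > 0.1]

print(rank_context_terms("russian federation",
                         ["yesterday", "olympics", "syria"]))
# ['syria', 'olympics']
```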
  • In some implementations, the relevance of the relationship between the candidate term from the surrounding content and the focus of attention may be analyzed with reference to the query logs of search engines. The query logs may indicate, using heuristics gathered from prior searches run by a multiplicity of users, that certain relationships between foci of attention terms and candidate terms from the surrounding content are stronger than others. In some implementations, a context analysis component 132 may be trained on the terms by culling term relationships from web content crawls. In some cases, the strength of a relationship between terms may be available as part of a cloud-based service, such as the “Microsoft Web N-gram Services” system discussed above, from which relative term strengths may be obtained, for example via API call or other communication mechanism.
  • Another technique that may be used for determining the relevance of candidate context terms, used either alone or in concert with other techniques, is determining whether a candidate context term is a named entity. For example, a candidate context term may be part of a dictionary of known, named entities such as the titles of albums, songs, movies, and TV shows; if the candidate context term is a named entity, the relevance of the candidate term may be adjusted.
  • Distance between the candidate context term and the focus of attention may also be considered in context analysis. Distance may be determined by the number of words or terms intervening between the candidate context term and a focus of attention.
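A simple distance factor of this kind might be folded into the candidate-term scoring as follows; the reciprocal form and the decay constant are illustrative assumptions.

```python
def proximity_weight(focus_index, term_index, decay=0.1):
    """Down-weight a candidate context term by the number of words
    separating it from the focus of attention; nearer terms score higher."""
    distance = abs(term_index - focus_index)
    return 1.0 / (1.0 + decay * distance)

# A term adjacent to the focus outweighs one ten words away.
print(proximity_weight(4, 5), proximity_weight(4, 14))
```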
  • In some implementations, the relevance of a candidate context term with reference to focus of attention terms may be determined with respect to anchor text available from an online knowledge-base. Statistical measurements of the occurrence frequencies of terms in anchor texts may indicate whether candidate terms and focus of attention terms are likely to be related, or whether the juxtaposition of the terms is random. For example, highly entropic relationship values between the candidate context term and the focus of attention term(s) in anchor text may signify that the candidate context term is a poor choice for a query term.
  • Some techniques of context analysis may use metadata associated with the application, device, or user in addition to (or in lieu of) the gathering and analysis of terms from the content surrounding a focus of attention. These techniques may be used by context analysis component 132 to refine, expand, or reduce the query terms selected for the search query.
  • In some implementations, the type of application 102 or device 100 may be a factor in context analysis. For example, if a user is writing a paper in a content authoring application such as a word processor, then context analysis for query terms may be different than the analysis would be for a reader application. In this example of the authoring application, a context analysis may determine via the application type that a narrower focus to find query terms may be appropriate, perhaps limiting query terms to definitions and scholarly materials. In the case of the reader, more interest-based and informal materials may be appropriate, so candidate query terms are more wide-ranging.
  • Factors derived from user device metadata may also be considered in certain implementations. Sometimes, the type of user device may be a factor in the query terms determined from context analysis. For example, if the user device is a phone-sized mobile device, then candidate context terms may be selected from a different classification than those selected if the user device were a desktop computer. In the case of the small mobile device, a user's interests may be more casual, and the screen may have less space, so candidate terms which produce more summarized information may be selected. Further, context analysis may consider device mobility by selecting candidate terms that may be related to nearby attractions. In contrast, if the user device is a desktop device, then the user may be at work and want more detailed and informative results; query terms might be added which obtain results from additional sources of information.
  • In some implementations, factors derived from user metadata may be used as part of context analysis to define query terms. Sometimes, a factor may be the type of user—e.g., whether the user's current role is as a corporate employee or consumer. The type of user may be determined, for example, by the internet protocol (IP) address from which the user is accessing a communications network. In the former case, work-oriented query terms may be preferentially selected by the context analysis component; in the latter case, more home or consumer-related terms may be preferred. In some implementations, the user type may determine the availability of computing resources such as a company knowledge management system accessible by a company intranet. Availability of company-related resources might enable a context analysis component to select query terms targeted toward such specialized systems.
  • In some implementations, a factor in context analysis may be the user's history of prior searches or interests. In some cases, the historical record of previous foci of attention selected by the user may be analyzed to generate or predict candidate query terms. Those candidate terms might be refined or ranked with respect to the user's current foci of attention using techniques similar to those described with respect to candidate terms for surrounding content; e.g., by using n-gram services or anchor text analysis.
  • Candidate terms may be selected by the context analysis engine on the basis of prior user internet searches. The historical record of these searches may generate or predict candidate query terms. Similarly, internet browser cookies or browser history of websites visited may be used to discern user interests which may predict or refine candidate terms. Candidate terms generated may be ranked or refined using similar techniques to those described above with respect to historical foci of attention terms.
  • Other factors which may be analyzed during the context analysis component's determination of query terms might be the time of day that the user is requesting contextual insights and the current geographic locality of the client device. User profile and demographic information, such as age, gender, ethnicity, religion, profession, and preferred language may also be used as factors in query term determination. It should be noted that, in some implementations, privacy settings of the user may impact whether user profile metadata is available for context analysis and to what extent profile metadata may be used.
  • Continuing with the process illustrated in FIG. 2, a query may be formulated using one or more of the query terms (204). Query formulation may include a pre-processing determination in which a mode of operation is decided with reference to user preferences; the mode of operation may inform which context-related terms are used to formulate the query. Query formulation may include the assembly of the actual queries that may be sent to one or more search services. Query formulation may be performed, for example, by a query formulation component 133 described with respect to FIG. 1A.
  • In some embodiments, query formulation component 133 may engage in a pre-processing determination in which a mode of operation is decided with reference to user preferences. The mode of operation may determine one or more classes of appropriate or desirable search results. For example, two modes of operation may be “lookup” and “exploration.” A “lookup” mode may give targeted results directed narrowly toward a focus of attention (e.g., a dictionary lookup). An “exploration” mode may give more general search results, and, for example, present several options to the user with respect to which search results or topics to further explore. Naturally, other modes of operation representing different classes of search result are possible, as are scenarios in which multiple modes of operation are provided.
  • Thus, an operation of the query formulation component may be to determine to what extent the query terms from the contextual analysis phase may supersede the user's indicated/determined foci of attention (or explicit search query, if the user provided one). A mode of operation may be selected by the user actively, such as by affirmative selection of the mode, or passively, such as based on some factor determined from user, device, or application metadata.
  • In some cases, a mode of operation may be determined by the query formulation component 133 based on outcomes from context analysis or other factors. For example, the query formulation component 133 may determine which mode of operation to use based on ambiguity of a focus of attention. If, during or after context analysis, the contextual insights service determines that, because of ambiguity in the focus of attention, terms or results may not be acceptably narrowed for a lookup mode, an exploration mode may be chosen.
  • Sometimes, query formulation component 133 may determine that certain additional context terms may return search results that inappropriately overwhelm the focus of attention. In some cases, query formulation component 133 may modify query terms that may be likely to return adult or offensive content; user profile metadata (e.g., age of the user) may be a factor in such a modification of query terms. Contextual insights service 110 may make this determination, for example, by formulating and sending one or more probative queries to search services. Probative queries may enable the contextual insights service 110 to preview search results for several trial formulations of query terms so that terms added by context analysis may be adjusted or modified.
  • Query formulation may include the assembly of actual queries that may be sent to one or more search services. In some cases, a query formulation component may assemble and send a single query consisting of one or more query terms joined conjunctively to a single search service.
  • In some cases, however, context analysis could reveal that the context covers multiple aspects about the focus of attention that can lead the user to desire to explore different context terms differently. The query formulation component may, based on a determined need for different classes of search results, formulate disjunctive queries, formulate separate queries with differing terms, split queries into multiple execution phases, and/or send different queries to different search services. In some cases, query terms may be ordered in a particular sequence to obtain particular search results.
  • For example, query formulation component 133 may determine that a particular focus of attention and context analysis reveals query terms that may be best presented to the user in segmented fashion. In such a case, the query formulation component 133 may construct a disjunctive query of the form “focus-term AND (context-term1 OR context-term2 OR . . . )”. Moreover, the query formulation component 133 may sometimes construct multiple queries—a query that targets the focus of attention more narrowly, and one or more queries that target exploratory search results on a handful of related themes. In some cases, a query may be targeted toward a particular search service in order to retrieve results from a given class.
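Assembling a disjunctive query of that form can be sketched as follows. The phrase-quoting convention is an assumption for illustration; actual query syntax varies by search service.

```python
def build_disjunctive_query(focus_term, context_terms):
    """Assemble a query of the form
    focus-term AND (context-term1 OR context-term2 OR ...)."""
    if not context_terms:
        return f'"{focus_term}"'
    disjunct = " OR ".join(f'"{t}"' for t in context_terms)
    return f'"{focus_term}" AND ({disjunct})'

print(build_disjunctive_query("Russian Federation",
                              ["Syrian civil war", "Sochi Olympics"]))
# "Russian Federation" AND ("Syrian civil war" OR "Sochi Olympics")
```

A variant of the same builder could emit one narrow query for the focus term alone and separate exploratory queries for each context theme, matching the multi-query strategy described above.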
  • Query formulation can be carried out based on intended search services so that the contextual insights service initiates a search by sending a query to one or more search services (205). The search may occur when contextual insights service 110, or some component thereof (e.g., query formulation component 133) issues or sends the query to one or more search services 120 as described in FIG. 1A. Once sent, results of the search may be received (206). In many cases, the search query will be issued, and search results will be returned, via an API call to the search service, as noted in FIG. 1A. In situations where multiple queries have been sent, either to segment results or to target specific search services, multiple sets of search results may be received.
  • After receipt of the results, the results may be organized and filtered according to at least some of the context (207). The contextual insights service 110, or some component thereof (e.g., result post-processing component 134 described in FIG. 1A) may receive the results and/or perform organizing and filtering operations. Organization and filtering of the results may include, for example: ranking of results according to various criteria; assembly, sorting, and grouping of result sets, including those from multiple queries and/or multiple search services; and removal of spurious or less relevant results.
  • In some implementations, organization and filtering of the results may include ranking of results according to various criteria; some of the criteria may be determined from context. The result post-processing component may assess aspects of search results received from the search service according to varied techniques. Assessments of rank emerging from one or more techniques may be used in concert with one another; some of the techniques may be weighted according to their aptitude for producing relevant answers in a given context.
  • In some cases, search results may be received from a search service with a ranking position; such a ranking may constitute a natural starting point for determining relevance. Another technique may include a linguistic assessment of how closely the title or URL of a search result matches the query; e.g., if words in the title are an almost exact match to terms in the query, the result may be more relevant.
  • Factors determined from context analysis may also be applied in the result post-processing phase to help ensure that the search results are congruent with the context. For example, results may be assessed to ensure that they are congruous with user profile metadata. Results that are age-inappropriate, for example, might be removed entirely; in other cases, results that may be more appropriate to a user's location may be ranked higher by the result post-processing component 134 than other results.
  • Factors such as whether a search result corresponds to a disambiguation page (e.g., on Wikipedia), the length of the query, the length of the context, and other query performance indicators may also be used in an assessment of search result relevance.
  • However, at times, techniques other than ranking may be relevant to organizing and filtering the results for contextual insights. For example, when multiple result sets from several queries or search services have been received, the results may be grouped or re-sorted. Furthermore, when there is disagreement or lack of congruity between different search services, determinations may be needed as to which result sets to prioritize.
  • In some cases, multiple queries may have been issued by the query formulation component 133, and perhaps to multiple search services. In those cases, the queries themselves may naturally reflect intended groupings of result sets. For example, if a focus of attention relates to a geographic location and the query formulation component 133 directed a query specifically at a search service with travel information, search results returned from that service may be grouped together under a “Travel” category by the result post-processing component 134. Similarly, if the query formulation component 133 had issued a separate query to a search service focusing on the history of the geographic location, those results may also be grouped together. Naturally, search results may also be ranked and refined within the group or category. In some cases, result sets returned from different queries may be reconsolidated by the result post-processing component 134. Moreover, sometimes results received from the search service as a single result set may be segmented and grouped more logically, for example according to topic, domain of the website having the result, type of result (e.g., text, photos, multimedia), content rating, or other criteria.
  • Some implementations may include the detection of result thresholds by the result post-processing component 134. Result “thresholds” are juncture points in one or more results or sets of results that indicate that particular groups of results may be related to one another, such as by relevance or by category/topic. These thresholds may be used to group, refine, remove, or re-sort results.
  • For example, in a given search, if the first three search results are ranked at the top because they have a high ranking score, but the next seven search results form a group having a low ranking score, the first group of three results may be highly relevant to the focus of attention and context. Here, a result threshold may exist beyond which results may either be truncated or presented differently to the user, for example in an interface displaying a different mode of operation. In another example scenario, perhaps all ten results have a ranking score that is similar, and the relevance of the results would be difficult to distinguish from one another; in this example, there is no result threshold with respect to relevance, and different presentation or grouping options may be used. Result thresholds may sometimes be used to determine how many insights 135 to return from a given contextual insights request 130. In some cases, characteristics of a given result threshold may be adapted to user, application, or device metadata (e.g., the size of the device screen).
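The threshold detection in the example above might be sketched as a gap search over the ranking scores. The minimum-gap parameter is an illustrative assumption; a real implementation might adapt it to user, application, or device metadata as the passage notes.

```python
def find_result_threshold(scores, min_gap=0.3):
    """Return the index of the largest gap between consecutive ranking
    scores, or None if no gap exceeds min_gap (illustrative heuristic)."""
    if len(scores) < 2:
        return None
    gaps = [(scores[i] - scores[i + 1], i + 1) for i in range(len(scores) - 1)]
    best_gap, cut = max(gaps)
    return cut if best_gap >= min_gap else None

# Three strong results followed by a weak group: threshold after index 3.
print(find_result_threshold([0.95, 0.92, 0.90, 0.40, 0.38, 0.35]))  # 3
# Uniformly similar scores: no threshold with respect to relevance.
print(find_result_threshold([0.55, 0.54, 0.52, 0.51]))              # None
```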
  • Sometimes, result thresholds may be recognized from patterns that allow detection of content groups. Examples of patterns include when several results show similarities (or dissimilarities) in their title, site origin, or ranking scores. When the result post-processing component 134 receives results that can be determined to match a particular pattern, the result post-processing component 134 may group those results together as a single result or segmented category of results. For example, multiple postings of a similar news release to various websites may be determined to have very similar titles or brief descriptions; the result post-processing component 134 may recognize a pattern and either group or truncate the news releases into a single insight.
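Grouping near-duplicate results such as reposted news releases might be sketched with a title-similarity pass. The similarity measure and the 0.8 threshold are illustrative assumptions; a production system might also compare site origin or brief descriptions.

```python
import difflib

def collapse_similar_results(titles, threshold=0.8):
    """Collapse results whose titles are near-duplicates (e.g., the same
    news release reposted on several sites) into a single insight each."""
    groups = []
    for title in titles:
        for group in groups:
            ratio = difflib.SequenceMatcher(
                None, title.lower(), group[0].lower()).ratio()
            if ratio >= threshold:
                group.append(title)   # matches an existing pattern group
                break
        else:
            groups.append([title])    # starts a new group
    return [g[0] for g in groups]     # keep one representative per group

titles = [
    "49ers Win Big on Monday Night",
    "49ers win big on Monday night!",
    "Weather forecast for San Francisco",
]
print(collapse_similar_results(titles))
```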
  • Result thresholds may be detected from patterns of disagreement between sources. For instance, a level of entropy—the degree to which there is or is not overlap between results returned by different sources—may indicate a result threshold. If, for example, results from one source have a low overlap with the results from another source, this pattern may indicate that the results may be separated into multiple groups having different contextual insights.
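One way to quantify overlap between sources is a Jaccard measure over result URLs, as in the sketch below; the example URLs are hypothetical, and a real system might compare normalized documents or entities rather than raw URLs.

```python
def source_overlap(results_a, results_b):
    """Jaccard overlap between two sources' result URLs; a low value may
    indicate a result threshold separating distinct contextual insights."""
    a, b = set(results_a), set(results_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

movie = ["imdb.com/john-woo", "wikipedia.org/John_Woo", "rottentomatoes.com/woo"]
un = ["un.org/john-woo", "wikipedia.org/John_Woo"]
print(source_overlap(movie, un))  # 0.25 -> low overlap suggests two groups
```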
  • In some cases, as for example when a threshold is detected from an identifiable pattern of disagreement between sources, certain functions of the contextual insights service 110 may be executed repeatedly to determine appropriate contextual insights (for example, an adjustment may be made at operation 202 and the processes repeated). For example, as described with respect to FIG. 1A, result post-processing component 134 may determine that another iteration of the processing stages of smart selection, context analysis, and/or query formulation might produce improved insights. As a result of additional iterations of processing, the focus of attention, context terms from content and metadata, and query terms may be modified.
  • A pattern of disagreement between sources might occur, for instance, when the formulated query terms were ambiguous with respect to one or more of the sources or search services. If, for example, a request is made for contextual insights about “John Woo,” and John Woo is both a prominent movie director and a high-ranking statesman at the United Nations, at least two distinct patterns of results would be returned. A further iteration of processing using additional context or a modified focus of attention may be used to determine the most relevant insights. Or, consider the homograph “row” (a homograph is each of two or more words spelled the same but not necessarily pronounced the same and having different meanings and origins). British people frequently use the word “row” to mean an argument or quarrel, but Americans seldom do; Americans tend to use the word in its verb form, e.g., “to paddle a boat, as with an oar”. If a threshold is determined that hinges upon the two meanings in a given query, a further context analysis might identify that, for example, the user is British (and hence means “argument”), or that the word “row” is being used in its verb form in the context of the content being consumed.
  • When the organizing and filtering of the results (207) has completed, contextual insights 135 may be returned (208) to the calling component 105 or client device 100 by the contextual insights service 110.
  • FIG. 3 shows an example interface displaying contextual insights. The example interface is provided to illustrate one way that contextual insights 135 may be displayed on the user device 100. An example interface such as the one shown in FIG. 3 may be generated by the application 102, or rendered by a contextual insights component 105 in cooperation with the application 102 or device 100. The example is for illustrative purposes only and is not intended to be limiting of the ways and varieties that contextual insights may be organized and filtered by the contextual insights service 110.
  • Referring to FIG. 3, a contextual insights preview 300 can be displayed non-obtrusively atop the existing application surface 301, only partly obscuring the content displayed in the application surface 301. In the example preview 300, a quick summary can be provided that may include a title 302 (as provided by the identified text 303), an image (still or moving) 304 (if available) and summary text 305 (if available).
  • Also included in the example contextual insights preview 300 may be a preview of various modes of operation that may form groupings in the contextual insights, or various other groupings 320 determined by the contextual insights service 110. To enable a user to navigate the contextual insights, the relevant results 310 can be grouped into modules 320 that may be indicative of modes of operation or other groupings.
  • In the example illustrated in FIG. 3, the results 310 are grouped by source. “Source,” in this context, may mean a network location, website, type of application, type of result (such as an image) or other logical method of grouping results. Some examples of sources might be the Wikipedia online encyclopedia; a local network source, such as an internal web server and/or social graph, privately available to the users in a company; a particular news website; image files from a photo-sharing website; structured data from a database; or private files on the user's drives or personal cloud storage.
  • It should be noted that the modular groupings may be displayed differently based on contextual information about the user. For example, a user at home may receive consumer or entertainment-oriented information sources. The same user might receive different groupings (and, as noted above, different results) when at work. Many such forms of groupings are possible. In some cases, as noted, the groupings or modules may be formed by the strength of the relationship between focus of attention terms or concepts and context terms. These aspects were discussed with respect to FIG. 2.
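The modular grouping described in the preceding paragraphs can be sketched in Python. This is a non-limiting illustration only: the result dictionaries, the source names, and the context-dependent module orderings are assumptions made for the sketch, not the disclosed implementation.

```python
from collections import defaultdict

def group_results_by_source(results, user_context="work"):
    """Group search results into modules keyed by source, ordering the
    modules differently depending on the user's context."""
    modules = defaultdict(list)
    for result in results:
        modules[result["source"]].append(result)

    # A user at home might see entertainment-oriented modules first, while
    # the same user at work sees enterprise sources first (hypothetical
    # orderings chosen only for this sketch).
    preferred = {
        "home": ["news", "images", "encyclopedia", "intranet"],
        "work": ["intranet", "encyclopedia", "news", "images"],
    }[user_context]
    order = {source: rank for rank, source in enumerate(preferred)}
    return sorted(modules.items(), key=lambda kv: order.get(kv[0], len(order)))
```

A caller would pass the post-processed results 310 and a context signal, then render each returned (source, results) pair as a module 320.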
  • FIG. 4 shows a block diagram illustrating components of a computing device or system used in some implementations of the described contextual insights service. For example, any computing device operative to run a contextual insights service 110 or intermediate devices facilitating interaction between other devices in the environment may each be implemented as described with respect to system 400, which can itself include one or more computing devices. The system 400 can include one or more blade server devices, standalone server devices, personal computers, routers, hubs, switches, bridges, firewall devices, intrusion detection devices, mainframe computers, network-attached storage devices, and other types of computing devices. The hardware can be configured according to any suitable computer architectures such as a Symmetric Multi-Processing (SMP) architecture or a Non-Uniform Memory Access (NUMA) architecture.
  • The system 400 can include a processing system 401, which may include a processing device such as a central processing unit (CPU) or microprocessor and other circuitry that retrieves and executes software 402 from storage system 403. Processing system 401 may be implemented within a single processing device but may also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions.
  • Examples of processing system 401 include general purpose central processing units, application specific processors, and logic devices, as well as any other type of processing device, combinations, or variations thereof. The one or more processing devices may include multiprocessors or multi-core processors and may operate according to one or more suitable instruction sets including, but not limited to, a Reduced Instruction Set Computing (RISC) instruction set, a Complex Instruction Set Computing (CISC) instruction set, or a combination thereof. In certain embodiments, one or more digital signal processors (DSPs) may be included as part of the computer hardware of the system in place of or in addition to a general purpose CPU.
  • Storage system 403 may comprise any computer readable storage media readable by processing system 401 and capable of storing software 402 including contextual insights components 404 (such as smart selection 131, context analysis 132, query formulation 133, and result post processing 134). Storage system 403 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Examples of storage media include random access memory (RAM), read only memory (ROM), magnetic disks, optical disks, CDs, DVDs, flash memory, solid state memory, phase change memory, or any other suitable storage media. Certain implementations may involve either or both virtual memory and non-virtual memory. In no case do storage media consist of a propagated signal. In addition to storage media, in some implementations, storage system 403 may also include communication media over which software 402 may be communicated internally or externally.
  • Storage system 403 may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. Storage system 403 may include additional elements, such as a controller, capable of communicating with processing system 401.
  • Software 402 may be implemented in program instructions and, among other functions, may, when executed by system 400 in general or processing system 401 in particular, direct system 400 or processing system 401 to operate as described herein for enabling contextual insights. Software 402 may provide program instructions 404 that implement a contextual insights service. Software 402 may implement on system 400 components, programs, agents, or layers that implement, in machine-readable processing instructions 404, the methods described herein as performed by the contextual insights service.
  • Software 402 may also include additional processes, programs, or components, such as operating system software or other application software. Software 402 may also include firmware or some other form of machine-readable processing instructions executable by processing system 401.
  • In general, software 402 may, when loaded into processing system 401 and executed, transform system 400 overall from a general-purpose computing system into a special-purpose computing system customized to facilitate contextual insights. Indeed, encoding software 402 on storage system 403 may transform the physical structure of storage system 403. The specific transformation of the physical structure may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the storage media of storage system 403 and whether the computer-storage media are characterized as primary or secondary storage.
  • System 400 may represent any computing system on which software 402 may be staged and from where software 402 may be distributed, transported, downloaded, or otherwise provided to yet another computing system for deployment and execution, or yet additional distribution.
  • In embodiments where the system 400 includes multiple computing devices, one or more communications networks may be used to facilitate communication among the computing devices. For example, the one or more communications networks can include a local, wide area, or ad hoc network that facilitates communication among the computing devices. One or more direct communication links can be included between the computing devices. In addition, in some cases, the computing devices can be installed at geographically distributed locations. In other cases, the multiple computing devices can be installed at a single geographic location, such as a server farm or an office.
  • A communication interface 405 may be included, providing communication connections and devices that allow for communication between system 400 and other computing systems (not shown) over a communication network or collection of networks (not shown) or the air. Examples of connections and devices that together allow for inter-system communication may include network interface cards, antennas, power amplifiers, RF circuitry, transceivers, and other communication circuitry. The connections and devices may communicate over communication media, such as metal, glass, air, or any other suitable communication media, to exchange communications with other computing systems or networks of systems. The aforementioned communication media, network, connections, and devices are well known and need not be discussed at length here.
  • It should be noted that many elements of system 400 may be included in a system-on-a-chip (SoC) device. These elements may include, but are not limited to, the processing system 401, a communications interface 405, and even elements of the storage system 403 and software 402.
  • FIG. 5 illustrates an example system architecture in which an implementation of techniques for contextual insights may be carried out. In the example illustrated in FIG. 5, an application 501 for interacting with textual content can be implemented on a client device 500, which may be or include computing systems such as a laptop, desktop, tablet, reader, mobile phone, and the like. Contextual insights component 502 can be integrated with application 501 to facilitate communication with contextual insights service 511.
  • Contextual insights service 511 may be implemented as software or hardware (or a combination thereof) on server 510, which may be an instantiation of system 400. The features and functions of a contextual insights service 511 may be callable by device 500, application 501, or contextual insights component 502 via an API.
  • The contextual insights service 511 may initiate and send search queries to search service 521. Search service 521 may be implemented on server 520, which may itself be an instantiation of a system similar to that described with respect to system 400 or aspects thereof. Many search services may be available for querying in a given environment.
  • Communications and interchanges of data between components in the environment may take place over network 550. The network 550 can include, but is not limited to, a cellular network (e.g., wireless phone), a point-to-point dial up connection, a satellite network, the Internet, a local area network (LAN), a wide area network (WAN), a Wi-Fi network, an ad hoc network, an intranet, an extranet, or a combination thereof. The network may include one or more connected networks (e.g., a multi-network environment) including public networks, such as the Internet, and/or private networks such as a secure enterprise private network.
  • Alternatively, or in addition, the functionality, methods and processes described herein can be implemented, at least in part, by one or more hardware modules (or logic components). For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field programmable gate arrays (FPGAs), system-on-a-chip (SoC) systems, complex programmable logic devices (CPLDs) and other programmable logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the functionality, methods and processes included within the hardware modules.
  • Certain aspects of the invention provide the following non-limiting embodiments:
  • Example 1
  • A method for facilitating contextual insights comprising: receiving a request for contextual insights with respect to at least some text; determining from information provided with the request a focus of attention for the contextual insights; performing context analysis from context provided with the request to determine query terms; formulating at least one query using one or more of the query terms; initiating a search by sending the at least one query to at least one search service; receiving results of the search; and organizing and filtering the results according to at least some of the context.
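The steps recited in Example 1 can be illustrated, purely as a non-limiting sketch, in one Python function. The request fields, the title-case heuristic for picking context terms, and the `search_service` callable are all assumptions made for this example, not the claimed method.

```python
def contextual_insights(request, search_service):
    """Sketch of the Example 1 pipeline over a hypothetical request shape."""
    # Steps 1-2: determine the focus of attention from request information.
    focus = request["selection"].strip()

    # Step 3: context analysis -- here, naively treat capitalized words in
    # the surrounding context as candidate query terms.
    context = request.get("context", "")
    query_terms = [w for w in context.split() if w.istitle() and w != focus]

    # Step 4: formulate a query from the focus plus the strongest terms.
    query = " ".join([focus] + query_terms[:2])

    # Steps 5-6: initiate the search and receive results.
    results = search_service(query)

    # Step 7: organize and filter results according to the context -- keep
    # only results whose snippet mentions at least one context term.
    kept = [r for r in results
            if not query_terms or any(t in r["snippet"] for t in query_terms)]
    return query, kept
```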
  • Example 2
  • The method of example 1, wherein query items, including operators (such as OR, NOT, and boost) and metadata (such as the user's location, time of day, and client device), are also determined from the information provided with the request, the formulating of the at least one query further using one or more of the query items.
  • Example 3
  • The method of any of examples 1-2, wherein the information provided with the request for contextual insights comprises an indication of a selection of text.
  • Example 4
  • The method of any of examples 1-2, wherein the information provided with the request for contextual insights comprises an indication of a selection of a region.
  • Example 5
  • The method of any of examples 1-4, wherein determining from the indication the focus of attention comprises predicting the focus of attention by: modifying an initially indicated text section from the information provided with the request with additional text selected from the context provided with the request to form one or more candidate foci of attention; determining a probability or score for each of the one or more candidate foci of attention; and selecting at least one of the candidate foci of attention having the highest probability or score.
  • Example 6
  • The method of any of examples 1-5, wherein the context comprises one or more of content surrounding the indication, device metadata, application metadata, and user metadata.
  • Example 7
  • The method of any of examples 1-6, wherein formulating the at least one query further comprises: determining a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and modifying the query in response to the mode of operation.
  • Example 8
  • The method of any of examples 1-7, wherein formulating the at least one query further comprises modifying the query in response to user metadata.
  • Example 9
  • The method of any of examples 1-8, wherein organizing and filtering the results further comprises: detecting a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and using the pattern to group, re-sort, or remove results.
  • Example 10
  • The method of any of examples 1-9, wherein determining the query terms comprises: performing context analysis of one or more of: content of a file being consumed or created in an application that is a source of the request; application properties of the application; device properties of a device on which the application is executed; or metadata associated with a user's identity, locality, environment, language, privacy settings, search history, interests and/or access to computing resources.
  • Example 11
  • The method of example 10, wherein performing context analysis of the content of the file performs context analysis of all content of the file being consumed or created in the application or performs context analysis of a particular amount of content of the file.
  • Example 12
  • The method of example 10 or 11, wherein performing context analysis of the content further comprises selecting candidate context terms from content surrounding a focus-of-attention term and analyzing the candidate context terms in relation to the focus-of-attention term.
  • Example 13
  • The method of example 12, wherein determining the query terms further comprises scoring the candidate context terms independently of each other but in relation to the focus-of-attention term; and ranking the scores for each pair of candidate context term and focus-of-attention term.
  • Example 14
  • The method of any of examples 12-13, wherein determining the query terms further comprises using query logs of search engines to analyze a relevance of a candidate context term to the focus-of-attention term.
  • Example 15
  • The method of any of examples 12-14, comprising requesting a strength relationship value for the candidate context terms from an n-gram service.
  • Example 16
  • The method of any of examples 12-15, wherein determining the query terms further comprises determining whether a candidate context term is a named entity and adjusting a relevance of the candidate context term according to whether or not the candidate context term is the named entity.
  • Example 17
  • The method of any of examples 12-16, wherein determining the query terms further comprises determining a distance value of a number of words or terms between the candidate context term and the focus-of-attention term.
  • Example 18
  • The method of any of examples 12-17, wherein the relevance of a candidate context term to the focus-of-attention term is determined using anchor text available from an online knowledge-base.
  • Example 19
  • A computer-readable storage medium having instructions stored thereon to perform the method of any of examples 1-18.
  • Example 20
  • A service comprising: one or more computer readable storage media; program instructions stored on at least one of the one or more computer readable storage media that, when executed by a processing system, direct the processing system to: in response to receiving a request for contextual insights with respect to at least some text: determine a focus of attention from the information provided with the request; perform context analysis from context provided with the request to determine one or more context terms; formulate at least one query using one or more of the focus of attention and the context terms; send the at least one query to at least one search service to initiate a search; and in response to receiving one or more results from the at least one search service, organize and filter the results according to at least some of the context.
  • Example 21
  • The service of example 20, wherein the program instructions that direct the processing system to determine the focus of attention from the indication direct the processing system to: modify an initially indicated text section from the information provided with the request with an additional text selected from the context provided with the request to form one or more candidate foci of attention; determine a probability or score for each of the one or more candidate foci of attention; and select at least one of the candidate foci of attention having the highest probability or score.
  • Example 22
  • The service of any of examples 20-21, wherein the context comprises one or more of the content surrounding the indication, device metadata, application metadata, and user metadata.
  • Example 23
  • The service of any of examples 20-22, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to: determine a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and modify the query in response to the mode of operation.
  • Example 24
  • The service of any of examples 20-23, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to modify the query in response to user metadata.
  • Example 25
  • The service of any of examples 20-24, wherein the program instructions that direct the processing system to organize and filter the results direct the processing system to: detect a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and use the pattern to group, re-sort, or remove results.
  • Example 26
  • The service of any of examples 20-25, wherein the program instructions direct the processing system to perform any of the steps of the methods in examples 1-19.
  • Example 27
  • A system comprising: a processing system; one or more computer readable storage media; program instructions stored on at least one of the one or more storage media that, when executed by the processing system, direct the processing system to: determine, from information provided with a request for contextual insights with respect to at least some text, a focus of attention for the contextual insights; perform context analysis of a context provided with the request to determine query terms; formulate at least one query using one or more of the query terms; send the at least one query to at least one search service; organize and filter results received from the at least one search service according to at least some of the context; and provide the organized and filtered results to a source of the request.
  • Example 28
  • The system of example 27, wherein the program instructions that direct the processing system to determine the focus of attention from the indication direct the processing system to: modify an initially indicated text section from the information provided with the request with an additional text selected from the context provided with the request to form one or more candidate foci of attention; determine a probability or score for each of the one or more candidate foci of attention; and select at least one of the candidate foci of attention having the highest probability or score.
  • Example 29
  • The system of any of examples 27-28, wherein the request context comprises one or more of the content surrounding the indication, device metadata, application metadata, and user metadata.
  • Example 30
  • The system of any of examples 27-29, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to: determine a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and modify the query in response to the mode of operation.
  • Example 31
  • The system of any of examples 27-30, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to modify the query in response to user metadata.
  • Example 32
  • The system of any of examples 27-31, wherein the program instructions that direct the processing system to organize and filter results received from the at least one search service according to at least some of the context direct the processing system to: detect a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and use the pattern to group, re-sort, or remove results.
  • Example 33
  • The system of any of examples 27-32, wherein the program instructions direct the processing system to perform any of the steps of the methods in examples 1-19.
  • Example 34
  • A system comprising: a means for receiving a request for contextual insights with respect to at least some text; a means for determining from information provided with the request a focus of attention for the contextual insights; a means for performing context analysis from context provided with the request to determine query terms; a means for formulating at least one query using one or more of the query terms; a means for initiating a search by sending the at least one query to at least one search service; a means for receiving results of the search; and a means for organizing and filtering the results according to at least some of the context.
  • It should be understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application.
  • Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.

Claims (20)

What is claimed is:
1. A method for facilitating contextual insights comprising:
receiving a request for contextual insights with respect to at least some text;
determining from information provided with the request a focus of attention for the contextual insights;
performing context analysis from context provided with the request to determine query terms;
formulating at least one query using one or more of the query terms;
initiating a search by sending the at least one query to at least one search service;
receiving results of the search; and
organizing and filtering the results according to at least some of the context.
2. The method of claim 1, wherein the information provided with the request for contextual insights comprises an indication of a selection of text.
3. The method of claim 1, wherein the information provided with the request for contextual insights comprises an indication of a selection of a region.
4. The method of claim 1, wherein determining the focus of attention from the information provided with the request comprises predicting the focus of attention by:
modifying an initially indicated text section from the information provided with the request with additional text selected from the context provided with the request to form one or more candidate foci of attention;
determining a probability or score for each of the one or more candidate foci of attention; and
selecting at least one of the candidate foci of attention having the highest probability or score.
5. The method of claim 1, wherein the context comprises one or more of content surrounding the indication, device metadata, application metadata, and user metadata.
6. The method of claim 1, wherein formulating the at least one query further comprises:
determining a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and
modifying the query in response to the mode of operation.
7. The method of claim 1, wherein formulating the at least one query further comprises modifying the query in response to user metadata.
8. The method of claim 1, wherein organizing and filtering the results further comprises:
detecting a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and
using the pattern to group, re-sort, or remove results.
9. A service comprising:
one or more computer readable storage media;
program instructions stored on at least one of the one or more computer readable storage media that, when executed by a processing system, direct the processing system to:
in response to receiving a request for contextual insights with respect to at least some text:
determine a focus of attention from the information provided with the request;
perform context analysis from context provided with the request to determine one or more context terms;
formulate at least one query using one or more of the focus of attention and the context terms;
send the at least one query to at least one search service to initiate a search; and
in response to receiving one or more results from the at least one search service, organize and filter the results according to at least some of the context.
10. The service of claim 9, wherein the program instructions that direct the processing system to determine the focus of attention direct the processing system to:
modify an initially indicated text section from the information provided with the request with an additional text selected from the context provided with the request to form one or more candidate foci of attention;
determine a probability or score for each of the one or more candidate foci of attention; and
select at least one of the candidate foci of attention having the highest probability or score.
11. The service of claim 9, wherein the context comprises one or more of the content surrounding the indication, device metadata, application metadata, and user metadata.
12. The service of claim 9, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to:
determine a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and
modify the query in response to the mode of operation.
13. The service of claim 9, wherein the program instructions that direct the processing system to perform the context analysis from the context provided with the request to determine the one or more context terms direct the processing system to perform context analysis of one or more of:
content of a file being consumed or created in an application that is a source of the request;
application properties of the application;
device properties of a device on which the application is executed; or
metadata associated with a user's identity, locality, environment, language, privacy settings, search history, interests and/or access to computing resources.
14. The service of claim 9, wherein the program instructions that direct the processing system to organize and filter the results direct the processing system to:
detect a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and
use the pattern to group, re-sort, or remove results.
15. A system comprising:
a processing system;
one or more computer readable storage media;
program instructions stored on at least one of the one or more storage media that, when executed by the processing system, direct the processing system to:
determine, from information provided with a request for contextual insights with respect to at least some text, a focus of attention for the contextual insights;
perform context analysis of a context provided with the request to determine query terms;
formulate at least one query using one or more of the query terms;
send the at least one query to at least one search service;
organize and filter results received from the at least one search service according to at least some of the context; and
provide the organized and filtered results to a source of the request.
16. The system of claim 15, wherein the program instructions that direct the processing system to determine the focus of attention direct the processing system to:
modify an initially indicated text section from the information provided with the request with an additional text selected from the context provided with the request to form one or more candidate foci of attention;
determine a probability or score for each of the one or more candidate foci of attention; and
select at least one of the candidate foci of attention having the highest probability or score.
17. The system of claim 15, wherein the request context comprises one or more of the content surrounding the indication, device metadata, application metadata, and user metadata.
18. The system of claim 15, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to:
determine a mode of operation from one or more of a level of ambiguity in the focus of attention and a user preference; and
modify the query in response to the mode of operation.
19. The system of claim 15, wherein the program instructions that direct the processing system to formulate the at least one query direct the processing system to modify the query in response to user metadata.
20. The system of claim 15, wherein the program instructions that direct the processing system to organize and filter results received from the at least one search service according to at least some of the context direct the processing system to:
detect a pattern in the results, wherein the pattern is based on a level of similarity of one or more of rankings of the results, content of the results, and origin of the results; and
use the pattern to group, re-sort, or remove results.
US14/508,431 2013-10-07 2014-10-07 Contextual insights and exploration Abandoned US20150100562A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201361887954P 2013-10-07 2013-10-07
US14/508,431 US20150100562A1 (en) 2013-10-07 2014-10-07 Contextual insights and exploration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/508,431 US20150100562A1 (en) 2013-10-07 2014-10-07 Contextual insights and exploration

Publications (1)

Publication Number Publication Date
US20150100562A1 true US20150100562A1 (en) 2015-04-09

Family

ID=51790877

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/245,646 Active 2034-12-05 US9436918B2 (en) 2013-10-07 2014-04-04 Smart selection of text spans
US14/508,431 Abandoned US20150100562A1 (en) 2013-10-07 2014-10-07 Contextual insights and exploration

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/245,646 Active 2034-12-05 US9436918B2 (en) 2013-10-07 2014-04-04 Smart selection of text spans

Country Status (6)

Country Link
US (2) US9436918B2 (en)
EP (2) EP3055789A1 (en)
KR (1) KR20160067202A (en)
CN (2) CN105637507B (en)
TW (1) TW201519075A (en)
WO (2) WO2015053993A1 (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150134653A1 (en) * 2013-11-13 2015-05-14 Google Inc. Methods, systems, and media for presenting recommended media content items
US20150205451A1 (en) * 2014-01-23 2015-07-23 Lg Electronics Inc. Mobile terminal and control method for the same
US20160048326A1 (en) * 2014-08-18 2016-02-18 Lg Electronics Inc. Mobile terminal and method of controlling the same
US9485543B2 (en) 2013-11-12 2016-11-01 Google Inc. Methods, systems, and media for presenting suggestions of media content
WO2016196697A1 (en) * 2015-06-03 2016-12-08 Microsoft Technology Licensing, Llc Graph-driven authoring in productivity tools
US20170140055A1 (en) * 2015-11-17 2017-05-18 Dassault Systemes Thematic web corpus
US20170357696A1 (en) * 2016-06-10 2017-12-14 Apple Inc. System and method of generating a key list from multiple search domains
US10193990B2 (en) * 2005-10-26 2019-01-29 Cortica Ltd. System and method for creating user profiles based on multimedia content
US10210146B2 (en) 2014-09-28 2019-02-19 Microsoft Technology Licensing, Llc Productivity tools for content authoring
US10303771B1 (en) 2018-02-14 2019-05-28 Capital One Services, Llc Utilizing machine learning models to identify insights in a document
US10331737B2 (en) 2005-10-26 2019-06-25 Cortica Ltd. System for generation of a large-scale database of hetrogeneous speech
US10372746B2 (en) 2005-10-26 2019-08-06 Cortica, Ltd. System and method for searching applications using multimedia content elements
US10387914B2 (en) 2005-10-26 2019-08-20 Cortica, Ltd. Method for identification of multimedia content elements and adding advertising content respective thereof
US10402061B2 (en) 2014-09-28 2019-09-03 Microsoft Technology Licensing, Llc Productivity tools for content authoring
US10402410B2 (en) * 2015-05-15 2019-09-03 Google Llc Contextualizing knowledge panels
US10528597B2 (en) 2015-06-03 2020-01-07 Microsoft Technology Licensing, Llc Graph-driven authoring in productivity tools

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
JP2014099052A (en) * 2012-11-14 2014-05-29 International Business Machines Corporation Apparatus for editing text, data processing method and program
WO2015184186A1 (en) 2014-05-30 2015-12-03 Apple Inc. Multi-command single utterance input method
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9916328B1 (en) 2014-07-11 2018-03-13 Google Llc Providing user assistance from interaction understanding
US9965559B2 (en) 2014-08-21 2018-05-08 Google Llc Providing automatic actions for mobile onscreen content
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9703541B2 (en) 2015-04-28 2017-07-11 Google Inc. Entity action suggestion on a mobile device
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US9971940B1 (en) * 2015-08-10 2018-05-15 Google Llc Automatic learning of a video matching system
US10178527B2 (en) 2015-10-22 2019-01-08 Google Llc Personalized entity repository
US10055390B2 (en) * 2015-11-18 2018-08-21 Google Llc Simulated hyperlinks on a mobile device based on user intent and a centered selection of text
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10446143B2 (en) * 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
CN105975540A (en) * 2016-04-29 2016-09-28 北京小米移动软件有限公司 Information display method and device
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
WO2018053735A1 (en) * 2016-09-21 2018-03-29 朱小军 Search method and system
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
KR101881439B1 (en) * 2016-09-30 2018-07-25 주식회사 솔트룩스 System and method for recommending knowledge actively to write document
TWI603320B (en) * 2016-12-29 2017-10-21 大仁科技大學 Global spoken dialogue system
US20180189355A1 (en) * 2016-12-30 2018-07-05 Microsoft Technology Licensing, Llc Contextual insight system
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US20190294727A1 (en) * 2018-03-20 2019-09-26 Microsoft Technology Licensing, Llc Author-created digital agents
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5832528A (en) * 1994-08-29 1998-11-03 Microsoft Corporation Method and system for selecting text with a mouse input device in a computer system
US6385602B1 (en) * 1998-11-03 2002-05-07 E-Centives, Inc. Presentation of search results using dynamic categorization
US20060074883A1 (en) * 2004-10-05 2006-04-06 Microsoft Corporation Systems, methods, and interfaces for providing personalized search and information access
US20070136251A1 (en) * 2003-08-21 2007-06-14 Idilia Inc. System and Method for Processing a Query
US20090228842A1 (en) * 2008-03-04 2009-09-10 Apple Inc. Selecting of text using gestures
US20140081993A1 (en) * 2012-09-20 2014-03-20 Intelliresponse Systems Inc. Disambiguation framework for information searching
US8706748B2 (en) * 2007-12-12 2014-04-22 Decho Corporation Methods for enhancing digital search query techniques based on task-oriented user activity

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7000197B1 (en) * 2000-06-01 2006-02-14 Autodesk, Inc. Method and apparatus for inferred selection of objects
US7536382B2 (en) * 2004-03-31 2009-05-19 Google Inc. Query rewriting with entity detection
GB0407816D0 (en) * 2004-04-06 2004-05-12 British Telecomm Information retrieval
US7603349B1 (en) * 2004-07-29 2009-10-13 Yahoo! Inc. User interfaces for search systems using in-line contextual queries
US8838562B1 (en) * 2004-10-22 2014-09-16 Google Inc. Methods and apparatus for providing query parameters to a search engine
US7856441B1 (en) * 2005-01-10 2010-12-21 Yahoo! Inc. Search systems and methods using enhanced contextual queries
US20100241663A1 (en) * 2008-02-07 2010-09-23 Microsoft Corporation Providing content items selected based on context
US8786556B2 (en) * 2009-03-12 2014-07-22 Nokia Corporation Method and apparatus for selecting text information
US20100289757A1 (en) * 2009-05-14 2010-11-18 Budelli Joey G Scanner with gesture-based text selection capability
US9262063B2 (en) 2009-09-02 2016-02-16 Amazon Technologies, Inc. Touch-screen user interface
US8489390B2 (en) * 2009-09-30 2013-07-16 Cisco Technology, Inc. System and method for generating vocabulary from network data
EP2488963A1 (en) * 2009-10-15 2012-08-22 Rogers Communications Inc. System and method for phrase identification
EP2524325A2 (en) 2010-01-11 2012-11-21 Apple Inc. Electronic text manipulation and display
US8704783B2 (en) * 2010-03-24 2014-04-22 Microsoft Corporation Easy word selection and selection ahead of finger
US9069416B2 (en) * 2010-03-25 2015-06-30 Google Inc. Method and system for selecting content using a touchscreen
US8719246B2 (en) * 2010-06-28 2014-05-06 Microsoft Corporation Generating and presenting a suggested search query
US9002701B2 (en) * 2010-09-29 2015-04-07 Rhonda Enterprises, Llc Method, system, and computer readable medium for graphically displaying related text in an electronic document
US8818981B2 (en) * 2010-10-15 2014-08-26 Microsoft Corporation Providing information to users based on context
US20120102401A1 (en) * 2010-10-25 2012-04-26 Nokia Corporation Method and apparatus for providing text selection
JP5087129B2 (en) * 2010-12-07 2012-11-28 株式会社東芝 Information processing apparatus and information processing method
US9645986B2 (en) 2011-02-24 2017-05-09 Google Inc. Method, medium, and system for creating an electronic book with an umbrella policy
KR20120102262A (en) * 2011-03-08 2012-09-18 삼성전자주식회사 The method for selecting a desired contents from text in portable terminal and device thererof
CN105955617B (en) * 2011-06-03 2019-07-12 谷歌有限责任公司 For selecting the gesture of text
US8612584B2 (en) 2011-08-29 2013-12-17 Google Inc. Using eBook reading data to generate time-based information
US9612670B2 (en) * 2011-09-12 2017-04-04 Microsoft Technology Licensing, Llc Explicit touch selection and cursor placement
US9128581B1 (en) * 2011-09-23 2015-09-08 Amazon Technologies, Inc. Providing supplemental information for a digital work in a user interface
US20150205490A1 (en) * 2011-10-05 2015-07-23 Google Inc. Content selection mechanisms
US8626545B2 (en) 2011-10-17 2014-01-07 CrowdFlower, Inc. Predicting future performance of multiple workers on crowdsourcing tasks and selecting repeated crowdsourcing workers
US9691381B2 (en) * 2012-02-21 2017-06-27 Mediatek Inc. Voice command recognition method and related electronic device and computer-readable medium
CN103294706A (en) * 2012-02-28 2013-09-11 腾讯科技(深圳)有限公司 Text searching method and device in touch type terminals
US9292192B2 (en) * 2012-04-30 2016-03-22 Blackberry Limited Method and apparatus for text selection
US9916396B2 (en) * 2012-05-11 2018-03-13 Google Llc Methods and systems for content-based search
EP2867756A4 (en) * 2012-06-29 2015-06-17 Microsoft Technology Licensing Llc Input method editor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Eric J. Glover, Architecture of a metasearch engine that supports user information needs, in CIKM '99: Proceedings of the eighth international conference on Information and knowledge management, January 1999, ACM, pp. 210-216 *
Lev Finkelstein, Placing search in context: the concept revisited, in ACM Transactions on Information Systems (TOIS), January 2002, ACM, pp. 116-126 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10331737B2 (en) 2005-10-26 2019-06-25 Cortica Ltd. System for generation of a large-scale database of hetrogeneous speech
US10387914B2 (en) 2005-10-26 2019-08-20 Cortica, Ltd. Method for identification of multimedia content elements and adding advertising content respective thereof
US10372746B2 (en) 2005-10-26 2019-08-06 Cortica, Ltd. System and method for searching applications using multimedia content elements
US10193990B2 (en) * 2005-10-26 2019-01-29 Cortica Ltd. System and method for creating user profiles based on multimedia content
US9485543B2 (en) 2013-11-12 2016-11-01 Google Inc. Methods, systems, and media for presenting suggestions of media content
US10341741B2 (en) 2013-11-12 2019-07-02 Google Llc Methods, systems, and media for presenting suggestions of media content
US9794636B2 (en) 2013-11-12 2017-10-17 Google Inc. Methods, systems, and media for presenting suggestions of media content
US9552395B2 (en) * 2013-11-13 2017-01-24 Google Inc. Methods, systems, and media for presenting recommended media content items
US20150134653A1 (en) * 2013-11-13 2015-05-14 Google Inc. Methods, systems, and media for presenting recommended media content items
US9733787B2 (en) * 2014-01-23 2017-08-15 Lg Electronics Inc. Mobile terminal and control method for the same
US20150205451A1 (en) * 2014-01-23 2015-07-23 Lg Electronics Inc. Mobile terminal and control method for the same
US20160048326A1 (en) * 2014-08-18 2016-02-18 Lg Electronics Inc. Mobile terminal and method of controlling the same
US10210146B2 (en) 2014-09-28 2019-02-19 Microsoft Technology Licensing, Llc Productivity tools for content authoring
US10402061B2 (en) 2014-09-28 2019-09-03 Microsoft Technology Licensing, Llc Productivity tools for content authoring
US10402410B2 (en) * 2015-05-15 2019-09-03 Google Llc Contextualizing knowledge panels
WO2016196697A1 (en) * 2015-06-03 2016-12-08 Microsoft Technology Licensing, Llc Graph-driven authoring in productivity tools
US10528597B2 (en) 2015-06-03 2020-01-07 Microsoft Technology Licensing, Llc Graph-driven authoring in productivity tools
US20170140055A1 (en) * 2015-11-17 2017-05-18 Dassault Systemes Thematic web corpus
US20170357696A1 (en) * 2016-06-10 2017-12-14 Apple Inc. System and method of generating a key list from multiple search domains
US10303771B1 (en) 2018-02-14 2019-05-28 Capital One Services, Llc Utilizing machine learning models to identify insights in a document
US10489512B2 (en) 2018-02-14 2019-11-26 Capital One Services, Llc Utilizing machine learning models to identify insights in a document

Also Published As

Publication number Publication date
EP3055787A1 (en) 2016-08-17
US20150100524A1 (en) 2015-04-09
WO2015054218A1 (en) 2015-04-16
US9436918B2 (en) 2016-09-06
TW201519075A (en) 2015-05-16
EP3055789A1 (en) 2016-08-17
CN105612517A (en) 2016-05-25
CN105637507A (en) 2016-06-01
WO2015053993A1 (en) 2015-04-16
KR20160067202A (en) 2016-06-13
CN105637507B (en) 2019-03-08

Similar Documents

Publication Publication Date Title
US9020924B2 (en) Suggesting and refining user input based on original user input
JP4806178B2 (en) Annotation management in pen-based computing systems
CA2635783C (en) Dynamic search box for web browser
US7962477B2 (en) Blending mobile search results
Liu et al. Opinion target extraction using word-based translation model
US8010537B2 (en) System and method for assisting search requests with vertical suggestions
KR20100135862A (en) Techniques for input recognition and completion
Deveaud et al. Accurate and effective latent concept modeling for ad hoc information retrieval
US20110106807A1 (en) Systems and methods for information integration through context-based entity disambiguation
US10175860B2 (en) Search intent preview, disambiguation, and refinement
US20070174257A1 (en) Systems and methods for providing sorted search results
Kang et al. Review-based measurement of customer satisfaction in mobile service: Sentiment analysis and VIKOR approach
US9442928B2 (en) System, method and computer program product for automatic topic identification using a hypertext corpus
US8719246B2 (en) Generating and presenting a suggested search query
US20150100524A1 (en) Smart selection of text spans
US20120297294A1 (en) Network search for writing assistance
JP2008520037A (en) Auto-completion method and system for languages with ideograms and phonetic characters
JP5497022B2 (en) Proposal of resource locator from input string
US8073877B2 (en) Scalable semi-structured named entity detection
US20110184960A1 (en) Methods and systems for content recommendation based on electronic document annotation
US20140006012A1 (en) Learning-Based Processing of Natural Language Questions
US8666994B2 (en) Document analysis and association system and method
JP5556050B2 (en) Input support method, computer program, and server
US8332748B1 (en) Multi-directional auto-complete menu
US8868590B1 (en) Method and system utilizing a personalized user model to develop a search request

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KOHLMEIER, BERNHARD S.J.;CHILAKAMARRI, PRADEEP;SAAD, KRISTEN M.;AND OTHERS;SIGNING DATES FROM 20141006 TO 20150528;REEL/FRAME:035915/0387

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:036100/0048

Effective date: 20150702

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION