EP2643775A1 - Decomposable ranking for efficient precomputation - Google Patents

Decomposable ranking for efficient precomputation

Info

Publication number
EP2643775A1
Authority
EP
European Patent Office
Prior art keywords
ranking
preliminary
features
documents
final
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP11842627.9A
Other languages
German (de)
English (en)
Other versions
EP2643775A4 (fr)
Inventor
Knut Magne Risvik
Michael Hopcroft
John G. Bennett
Karthik Kalyanaraman
Trishul Chilimbi
Vishesh Parikh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Publication of EP2643775A1
Publication of EP2643775A4


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2457 Query processing with adaptation to user needs
    • G06F16/24578 Query processing with adaptation to user needs using ranking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/951 Indexing; Web crawling techniques

Definitions

  • search engines have been developed to facilitate searching for electronic documents.
  • users may search for information and documents by entering search queries comprising one or more terms that may be of interest to the user.
  • After receiving a search query from a user, a search engine identifies documents and/or web pages that are relevant based on the search query. Because of its utility, web searching, that is, the process of finding relevant web pages and documents for user-issued search queries, has become one of the most popular services on the Internet today.
  • Search engines operate by crawling documents and indexing information regarding the documents in a search index.
  • the search engine employs the search index to identify documents relevant to the search query.
  • a ranking function may be employed to determine the most relevant documents to present to a user based on a search query.
  • Ranking functions have become increasingly complex such that hundreds of features are used to rank documents. Complex ranking functions, when used alone, are ineffective because of cost and time constraints.
  • Embodiments of the present invention relate to the generation of algorithms used in conjunction with a preliminary ranking stage of an overall ranking process.
  • the overall ranking process may include a matching stage, a preliminary ranking stage, and a final ranking stage.
  • the final ranking function is generally more expensive and time-consuming than the preliminary ranking function.
  • the matching stage and the preliminary ranking stage function to limit the number of candidate documents that the final ranking function has to rank.
  • the preliminary ranking function used in the preliminary ranking stage is a simplified version of the final ranking function used in the final ranking stage.
  • the final ranking function is analyzed to identify ranking features that can be precomputed (e.g., document ranking features) or that are not easily computed in real-time after a query is received and the ranking features that are easily computed in real-time.
  • Ranking features not used in the final ranking function may also be used in the preliminary ranking function.
  • FIG. 1 is a block diagram of an exemplary computing environment suitable for use in implementing embodiments of the present invention
  • FIG. 2 is a block diagram of an exemplary system in which embodiments of the invention may be employed
  • FIG. 3 is a flow diagram showing a method for generating an algorithm used to provide preliminary rankings to a plurality of documents, in accordance with embodiments of the present invention
  • FIG. 4 is a flow diagram showing a method for calculating a preliminary ranking for documents, in accordance with embodiments of the present invention.
  • FIG. 5 is a flow diagram showing a method for utilizing ranking features from a final ranking stage in a preliminary ranking stage to determine preliminary rankings for documents, in accordance with embodiments of the present invention.
  • embodiments of the present invention provide for generating algorithms used in a preliminary ranking stage of an overall ranking process. Embodiments also provide for using the algorithm to calculate preliminary rankings for documents such that the number of documents sent to the final ranking component is greatly reduced.
  • the preliminary ranking function is generally a fast and low cost computation that is a useful estimate of the final ranking function.
  • the preliminary ranking function can be trusted to identify a reduced set of relevant documents that are worthy of the more costly final ranking stage.
  • ranking features that can be precomputed (e.g., document ranking features) or that are not easily computed in real-time after a query is received, such as static features and dynamic atom-isolated components that may be used by the final ranking function, are identified as potential ranking features for use by the preliminary ranking function.
  • These identified ranking features include a combination of those that are easy to compute at query match time, those that are not easy to compute at query match time but that can be precomputed, those that are useful as measured by a metric of fidelity in estimating the final ranking, and those that remain useful even when the preliminary ranking function is modified.
  • Precomputed scores for atom/document pairs are stored in a search index and are extracted during computation of preliminary rankings. The documents that are found to be most relevant are sent to the final ranking stage. Fidelity measurements are utilized to ensure that the final ranking function and the preliminary ranking function are similarly ranking documents to ensure fidelity and low error rates between the two ranking stages.
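The staged process described above can be sketched in miniature as follows. This is an illustration only: the function names (`l0_match`, `l1_preliminary_rank`, `l2_final_rank`), the data layout, and both scoring functions are hypothetical toy assumptions, not the patent's implementation.

```python
# Illustrative three-stage pipeline: L0 matching, L1 preliminary ranking
# over precomputed atom/document scores, L2 final (expensive) ranking.
# Data layout, function names, and scoring are hypothetical sketches.

def l0_match(index, query_atoms):
    """L0: ids of documents containing at least one query atom."""
    matched = set()
    for atom in query_atoms:
        matched.update(index.get(atom, {}))
    return matched

def l1_preliminary_rank(index, query_atoms, doc_ids, keep):
    """L1: cheap sum of precomputed atom/document scores; keep top N."""
    scores = {d: sum(index.get(a, {}).get(d, 0.0) for a in query_atoms)
              for d in doc_ids}
    return sorted(scores, key=scores.get, reverse=True)[:keep]

def l2_final_rank(docs, query_atoms, candidates):
    """L2: a costlier score, run only on the small candidate set."""
    def expensive_score(doc_id):
        words = docs[doc_id].split()
        return sum(words.count(a) for a in query_atoms) / (1 + len(words))
    return sorted(candidates, key=expensive_score, reverse=True)

docs = {1: "rain rain coat", 2: "rain forecast today", 3: "sunny day"}
index = {"rain": {1: 2.0, 2: 1.0}, "coat": {1: 1.0}, "sunny": {3: 1.0}}
atoms = ["rain", "coat"]
candidates = l1_preliminary_rank(index, atoms, l0_match(index, atoms), keep=2)
final = l2_final_rank(docs, atoms, candidates)
```

The key property the sketch preserves is that the expensive L2 scorer only ever sees the small candidate set surviving L0 and L1, never the full collection.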
  • an embodiment of the present invention is directed to a method for generating an algorithm used to provide preliminary rankings to a plurality of documents.
  • the method includes analyzing a final ranking function used to calculate final rankings for a plurality of documents. From the final ranking function, the method further includes identifying potential preliminary ranking features that include one or more static ranking features that are query independent and one or more dynamic atom- isolated components that are related to a single atom. Additionally, the method includes selecting from the potential preliminary ranking features one or more preliminary ranking features to use for a preliminary ranking function and using at least the one or more preliminary ranking features to generate an algorithm that is used to provide a preliminary ranking for the plurality of documents.
  • an aspect of the invention is directed to a method for calculating a preliminary ranking for documents.
  • the method includes identifying static ranking features that are query independent, and identifying dynamic atom-isolated components that are related to a single atom. Further, the method includes selecting a set of preliminary ranking features comprising one or more of the static ranking features and one or more of the dynamic atom-isolated components. For a first document, the method extracts data corresponding to the set of preliminary ranking features from a search index. Based on a search query, the method utilizes the extracted data to calculate a preliminary ranking of the first document.
  • a further embodiment of the invention is directed to one or more computer storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform a method for utilizing ranking features from a final ranking stage in a preliminary ranking stage to determine preliminary rankings for documents.
  • the method includes analyzing a final ranking function to identify a first subset of ranking features that includes query-independent ranking features and single atom ranking features and selecting a second subset of ranking features not used in the final ranking function. Further, the method includes, from the first subset and the second subset of ranking features, selecting one or more preliminary ranking features for use in calculating a preliminary ranking of a plurality of documents using a preliminary ranking function that limits a quantity of documents that are ranked using the final ranking function.
  • the method algorithmically identifies, using the preliminary ranking function, a subset of the plurality of documents.
  • the method additionally includes communicating document identifications corresponding to the subset of the plurality of documents to a final ranking stage that uses the final ranking function to calculate final rankings of each document in the subset of the plurality of documents.
  • Referring to FIG. 1, an exemplary operating environment for implementing embodiments of the present invention is shown and designated generally as computing device 100.
  • Computing device 100 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing device 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated.
  • the invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device.
  • program modules, including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types.
  • the invention may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, more specialty computing devices, etc.
  • the invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
  • computing device 100 includes a bus 110 that directly or indirectly couples the following devices: memory 112, one or more processors 114, one or more presentation components 116, input/output (I/O) ports 118, input/output components 120, and an illustrative power supply 122.
  • Bus 110 represents what may be one or more busses (such as an address bus, data bus, or combination thereof).
  • FIG. 1 is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the present invention. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “handheld device,” etc., as all are contemplated within the scope of FIG. 1 and reference to "computing device.”
  • Computing device 100 typically includes a variety of computer-readable media.
  • Computer-readable media can be any available media that can be accessed by computing device 100 and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer-readable media may comprise computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer- readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium, which can be used to store the desired information and which can be accessed by computing device 100.
  • Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
  • Memory 112 includes computer-storage media in the form of volatile and/or nonvolatile memory.
  • the memory may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc.
  • Computing device 100 includes one or more processors that read data from various entities such as memory 112 or I/O components 120.
  • Presentation component(s) 116 present data indications to a user or other device.
  • Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc.
  • I/O ports 118 allow computing device 100 to be logically coupled to other devices including I/O components 120, some of which may be built in.
  • I/O components 120 include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
  • Referring to FIG. 2, a block diagram is provided illustrating an exemplary system 200 in which embodiments of the present invention may be employed. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory.
  • Among other components not shown, the system 200 includes a user device 202, a data store 204, a ranking server 206, and a search index 220.
  • Each of the components shown in FIG. 2 may be any type of computing device, such as computing device 100 described with reference to FIG. 1, for example.
  • the components may communicate with each other via a network 208, which may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs).
  • LANs local area networks
  • WANs wide area networks
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
  • any number of user devices, ranking servers, ranking generators, data stores, and search indexes may be employed within the system 200 within the scope of the present invention.
  • Each may comprise a single device or multiple devices cooperating in a distributed environment.
  • the ranking server 206 may comprise multiple devices arranged in a distributed environment that collectively provide the functionality of the ranking server 206 described herein. Additionally, other components not shown may also be included within the system 200, while components shown in FIG. 2 may be omitted in some embodiments.
  • the search index 220 employed by embodiments of the present invention indexes higher order primitives or "atoms" from documents, as opposed to simply indexing single terms.
  • an "atom" may refer to a variety of units of a query or a document. These units may include, for example, a term, an n-gram, an n-tuple, a k-near n-tuple, etc.
  • a term maps down to a single symbol or word as defined by the particular tokenizer technology being used.
  • a term in one embodiment is a single character. In another embodiment, a term is a single word or grouping of words.
  • An n-gram is a sequence of "n" consecutive or almost consecutive terms that may be extracted from a document.
  • An n-gram is said to be "tight” if it corresponds to a run of consecutive terms and is “loose” if it contains terms in the order they appear in the document, but the terms are not necessarily consecutive. Loose n-grams are typically used to represent a class of equivalent phrases that differ by insignificant words (e.g., "if it rains I'll get wet” and "if it rains then I'll get wet”).
  • An n-tuple is a set of "n” terms that co-occur (order independent or dependent) in a document.
  • a k-near n-tuple refers to a set of "n" terms that co-occur within a window of "k" terms in a document.
  • an atom is generally defined as a generalization of all of the above. Implementations of embodiments of the present invention may use different varieties of atoms, but as used herein, "atom" generally describes each of the above-described varieties.
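The atom varieties defined above lend themselves to a short sketch. The helper names below are illustrative assumptions, and only two varieties are shown: tight n-grams (consecutive runs) and k-near n-tuples (order-independent co-occurrence within a window).

```python
from itertools import combinations

def tight_ngrams(terms, n):
    """Runs of n consecutive terms ("tight" n-grams)."""
    return [" ".join(terms[i:i + n]) for i in range(len(terms) - n + 1)]

def k_near_ntuples(terms, n, k):
    """Unordered sets of n terms co-occurring within a window of k terms."""
    found = set()
    for start in range(len(terms)):
        window = sorted(set(terms[start:start + k]))
        found.update(combinations(window, n))
    return found

terms = "if it rains then i will get wet".split()
bigrams = tight_ngrams(terms, 2)        # consecutive pairs, in order
pairs = k_near_ntuples(terms, 2, 3)     # pairs within a 3-term window
```

A loose n-gram, as described above, would sit between these two: in document order but not necessarily consecutive.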
  • the user device 202 may be any type of computing device owned and/or operated by an end user that can access network 208.
  • the user device 202 may be a desktop computer, a laptop computer, a tablet computer, a mobile device, or any other device having network access.
  • an end user may employ the user device 202 to, among other things, access electronic documents maintained by the system, such as the ranking server 206 or the like.
  • the end user may employ a web browser on the user device 202 to access and view electronic documents from the ranking server 206.
  • documents are not stored on the ranking server 206, but may be stored in the data store 204.
  • the ranking server 206 is generally responsible for selecting ranking features to use for a preliminary ranking stage of an overall ranking process.
  • the overall ranking process comprises two or more ranking stages, such as a preliminary ranking stage and a final ranking stage.
  • the preliminary ranking stage utilizes one or more of the ranking features used in the final ranking stage, such as those ranking features that do not have atom interdependencies.
  • the second ranking stage is termed the "final ranking stage" or the "final ranking process"
  • the preliminary ranking stage may be a first ranking stage and the final ranking stage may be a second ranking stage.
  • a third ranking stage that is specialized may be employed in certain embodiments, and is contemplated to be within the scope of the present invention.
  • the overall ranking process is employed when a search query is received to pare the quantity of matching documents down to a manageable size.
  • the search engine may employ a staged process to select search results for a search query.
  • the search query is analyzed to identify atoms.
  • the atoms are then used during the various stages of the overall ranking process. The first of these stages, the L0 or matching stage, queries the search index to identify an initial set of matching documents that contain the atoms, or at least some of the atoms, from the search query.
  • This initial process may reduce the number of candidate documents from all documents indexed in the search index to those documents matching the atoms from the search query. For instance, a search engine may search through millions or even trillions of documents to determine those that are most relevant to a particular search query. Once the L0 matching stage is complete, the number of candidate documents is greatly reduced.
  • Subsequent ranking stages may also be employed, including a preliminary ranking stage and a final ranking stage.
  • the preliminary ranking stage often identifies more candidate documents than is cost-efficient to analyze in depth using the final ranking stage.
  • each earlier stage may utilize a subset of features used in the later stage, and may also use features not used in the later stage.
  • each earlier stage is essentially an approximation of the ranking provided by the later stage, but one that is less expensive and perhaps simplified.
  • the preliminary ranking stage, also termed the L1 stage, employs a simplified scoring function used to compute a preliminary score or ranking for candidate documents retained from the L0 matching stage described above.
  • the preliminary ranking component 210 is responsible for providing preliminary rankings for each of the candidate documents retained from the L0 matching stage.
  • the preliminary ranking stage is simplified when compared to the final ranking stage as it employs only a subset of the ranking features used by the final ranking stage. For instance, one or more, but likely not all, of the ranking features used in the final ranking stage are employed by the preliminary ranking stage. Additionally, features not employed by the final ranking stage may be employed by the preliminary ranking stage.
  • the ranking features used by the preliminary ranking stage do not have atom-interdependencies, such as term closeness and term co-occurrence.
  • the ranking features used in the preliminary ranking stage may include, for exemplary purposes only, static features and dynamic atom-isolated components.
  • Static features, generally, are those components that only look at features that are query-independent. Examples of static features include page rank, spam ratings of a particular web page, etc.
  • Dynamic atom-isolated components are components that only look at features that are related to single atoms at a time. Examples may include, for instance, BM25f, frequency of a certain atom in a document, location (context) of the atom in the document (e.g., title, URL, anchor, header, body, traffic, class, attributes), etc.
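A dynamic atom-isolated score of the kind described above can be precomputed per atom/document pair, since it depends on a single atom and on index-time statistics only. The patent names BM25f, which additionally weights document fields (title, anchor, body, etc.); the sketch below uses plain BM25 for brevity, with illustrative parameter values, and is not the patent's formula.

```python
import math

def bm25_atom_score(tf, doc_len, avg_len, df, n_docs, k1=1.2, b=0.75):
    """Simplified BM25-style score for one atom in one document.

    Every input (term frequency, document length, document frequency,
    collection size) is known at index time, so the result can be
    precomputed per atom/document pair and stored in the search index.
    """
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return idf * norm

# A rarer atom (lower document frequency) scores higher, all else equal.
rare = bm25_atom_score(tf=5, doc_len=100, avg_len=120, df=10, n_docs=1000)
common = bm25_atom_score(tf=5, doc_len=100, avg_len=120, df=500, n_docs=1000)
```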
  • the final ranking stage, also termed the L2 stage, ranks the candidate documents provided to it by the preliminary ranking stage.
  • the algorithm used in conjunction with the final ranking stage is a more expensive operation with a larger number of ranking features when compared to the ranking features used in the preliminary ranking stage.
  • the final ranking algorithm is applied to a much smaller number of candidate documents.
  • the final ranking algorithm provides a set of ranked documents, and search results are provided in response to the original search query based on the set of ranked documents.
  • the ranking server 206 comprises various components, each of which provides functionality to the process of calculating preliminary rankings to candidate documents and selecting only those documents, through both ranking and pruning, that are relevant to a search query to pass on to a final ranking stage.
  • these components include a preliminary ranking component 210, a final ranking component 212, a feature selection component 214, an algorithm generating component 216, and a data extraction component 218.
  • Components not illustrated in FIG. 2 that may be used to provide preliminary rankings to documents and prune the remaining quantity of documents to a manageable size are also contemplated to be within the scope of the present invention. Further, not all of the components shown in relation to the ranking server 206 may be used, or in some embodiments, may be combined with other components.
  • the preliminary ranking component 210 is responsible for ranking a set of candidate documents and therefore decreasing the quantity of candidate documents that are passed on to the final ranking stage, which utilizes the final ranking component 212 to rank the smaller set of candidate documents. For instance, hundreds of millions or even a trillion documents are searched in the L0 matching stage. The number of relevant documents may be pruned to thousands of documents after the preliminary ranking stage, and further pruned to tens of documents after the final ranking stage. These documents may then be presented to the user on a search results page.
  • the simplified scoring function may serve as an approximation of the final ranking algorithm that will ultimately be used to rank documents. However, the simplified scoring function provides a less expensive operation than the final ranking algorithm allowing for a larger number of candidate documents to be processed quickly.
  • Candidate documents are pruned based on the preliminary score. For instance, only the top N documents having the highest preliminary scores may be retained.
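The pruning step just described, retaining only the top N documents by preliminary score, can be sketched as follows; the function name and data are illustrative assumptions.

```python
import heapq

def prune_top_n(preliminary_scores, n):
    """Retain only the document ids with the n highest preliminary scores."""
    top = heapq.nlargest(n, preliminary_scores.items(), key=lambda kv: kv[1])
    return [doc_id for doc_id, _ in top]

scores = {"a": 0.9, "b": 0.2, "c": 0.7, "d": 0.4}
retained = prune_top_n(scores, 2)   # only "a" and "c" reach the final stage
```

Using a heap-based selection rather than a full sort keeps the pruning cost low even when the preliminary stage scores a large candidate set.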
  • To calculate rankings of documents, the preliminary ranking component 210 utilizes preliminary ranking features, some of which are also used in the final ranking stage.
  • the preliminary scoring operates on, among other things, precomputed scores stored in the search index for document/atom pairs.
  • static features and isolated features, such as dynamic atom-isolated components, may be used by the preliminary ranking component 210.
  • the preliminary ranking component 210 accesses, by way of the data extraction component 218, a search index 220 or other data, such as data stored in the data store 204 to extract data associated with the preliminary features used by the preliminary ranking component 210. In some instances, this data may be stored in the form of precomputed scores.
  • a particular atom for instance, may have one or more precomputed scores associated therewith in relation to various attributes corresponding to a particular document.
  • a first atom may be repeated 10 times in a particular document, and a second atom may be repeated 55 times in that same document.
  • the second atom may have a higher score than the first atom, as it is found more times in that document.
  • the system may be set up such that an atom found in a title is given a higher score than an atom found only in the URL.
  • Various rules may be incorporated into the preliminary scoring function. As such, precomputed scores are stored in a search index or other data store and this data can be extracted and used in the preliminary scoring function.
  • the L1 or preliminary ranking function may not be query independent.
  • the preliminary ranking function depends on how many atoms there are in a particular query, on whether there are alternate interpretations or spellings, on how certain we are of those factors, on what language and country the query seems to be from, etc. So, while the preliminary ranking function draws heavily on precalculated query-independent features delivered as a summary rank for each atom, the preliminary ranking function may also combine them in a query-dependent manner.
  • a precomputed score may take into account a frequency of a particular atom in a document, how close the various instances of the atom are to one another, the context of the atom, such as where it is located in the document, etc.
  • an atom has more than one precomputed score associated with a particular document. For instance, an atom may have a precomputed score that takes into account the frequency of that atom in a document, and another precomputed score for the portions of the document in which the atom is found.
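The idea of storing several precomputed attribute scores per atom/document pair, and combining them at query time, can be sketched as follows. The index layout, field names, and location weights are illustrative assumptions; the frequencies and the title-over-URL weighting mirror the examples in the text.

```python
# Hypothetical index layout: each atom maps to per-document precomputed
# attributes (frequency, locations), combined into one score at query time.
LOCATION_WEIGHTS = {"title": 3.0, "url": 1.5, "body": 1.0}

index = {
    "rain coat": {                      # atom (here a tight 2-gram)
        "doc-1": {"frequency": 10, "locations": {"title", "body"}},
        "doc-2": {"frequency": 55, "locations": {"body"}},
    },
}

def preliminary_atom_score(entry):
    """Combine stored attribute scores: frequency weighted by best location."""
    best_location = max(LOCATION_WEIGHTS[loc] for loc in entry["locations"])
    return entry["frequency"] * best_location

scores = {doc: preliminary_atom_score(entry)
          for doc, entry in index["rain coat"].items()}
```

In practice the combination rule itself may be query-dependent, as the text notes, even though the stored inputs are computed once at index time.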
  • Prior to the preliminary ranking function calculating preliminary scores for the documents, preliminary ranking features are determined. Preliminary ranking features, as previously mentioned, may come from various sources. In one instance, ranking features used by the final ranking component 212 are analyzed. The ranking features used by the final ranking component 212 may generally be divided into three main categories. These categories include, at least, static features, dynamic atom-isolated components, and dynamic atom-correlation components, or those that have atom-interdependencies. The feature selection component 214, in one embodiment, performs the function of dividing the features into these categories. The feature selection component 214 may select those features that are static features or dynamic atom-isolated components as potential preliminary ranking features.
  • these features are even further analyzed, as not all of these features may be selected to be used in the preliminary ranking function.
  • Those features that are ultimately selected may be easy to compute (e.g., easy to use in the preliminary ranking function), useful as determined by a fidelity measurement between the preliminary ranking and the final ranking, and adaptive in terms of how the ranking feature performs when the preliminary ranking function is modified, etc.
  • the selected features may be computed at a low cost compared to other ranking features. While these features may be easy to compute, some may be difficult to compute in real-time when a query is received, and therefore may be used in the preliminary ranking function so that they can be precomputed and stored in a search index as a precomputed score.
  • one or more of the preliminary features used in the preliminary ranking function are manually selected. As such, this selection process requires at least some user interaction. Alternatively or in conjunction with the previous embodiment, at least some of the preliminary features are selected automatically, such as by way of a machine-learning tool.
  • the machine-learning tool may be incorporated into the feature selection component 214. The manual selection and the machine-learning tool may be used in conjunction with one another to select the preliminary features. Or, features may be manually selected and the machine-learning tool may then determine whether those features are helpful or not in calculating document rankings. This machine-learning environment allows the usefulness of each feature in the preliminary ranking function to be evaluated. If a particular feature is found to not be particularly useful, it may be removed from the preliminary ranking function.
  • the algorithm generating component 216 generates an algorithm that calculates rankings for each document.
  • the algorithm is generated using identified preliminary ranking features, such as those selected as being easy to calculate and useful from the final ranking function. In one embodiment, features that are not necessarily used in the final ranking function but that have proven to be useful in the preliminary ranking function are also used.
  • fidelity measurement may compare the final ranking and the preliminary ranking associated with a particular document to determine how close the rankings are.
  • the first, or preliminary, ranking stage operates as an estimate of the second, or final, ranking stage. In the optimal situation, the preliminary ranking would always match the final ranking. This, however, is typically not the case. Fidelity can be measured in many ways, not all of which are described herein, but all of which are contemplated to be within the scope of the present invention.
  • fidelity may be defined as the extent to which, for some useful top number (e.g., 10, 100), the preliminary ranking function suggests the same elements or documents that would have been found if the final ranking function had been used to rank all of the same documents ranked by the preliminary function.
  • fidelity may be measured by taking the top ten ranked documents, as ranked by the final ranking function, and determining how many of those documents were ranked in the top ten ranked documents as ranked by the preliminary ranking function. Therefore, if the preliminary ranking function ranks eight of the final ranking function's top ten documents in its top ten, the fidelity may be calculated to be 80% (8/10).
  • the eight of the top ten results produced by the preliminary ranking function may not be in the same order as the top ten results produced by the final ranking function. In some instances, this is taken into consideration in the fidelity ranking, but in other instances it is not.
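The overlap-based fidelity measure described above can be sketched in a few lines of Python. This is only an illustration of the idea; the function and document names are hypothetical and not taken from the patent:

```python
def topk_fidelity(final_top, prelim_top):
    """Fraction of the final ranker's top-k documents that the
    preliminary ranker also placed in its own top-k (order ignored)."""
    overlap = set(final_top) & set(prelim_top)
    return len(overlap) / len(final_top)

# The final ranker's top ten vs. the preliminary ranker's top ten.
final_top10 = ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10"]
prelim_top10 = ["d2", "d1", "d4", "d3", "d6", "d5", "d8", "d7", "d11", "d12"]

print(topk_fidelity(final_top10, prelim_top10))  # 8 of 10 shared -> 0.8
```

As noted above, this set-based form deliberately ignores the order of the shared documents; a rank-aware variant would weight each overlap by its position.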
  • a fidelity measurement may determine how many candidate documents need to be returned from the preliminary ranking calculation so that a sufficient number of results (e.g., candidate documents) are returned as a result of the final ranking calculation.
  • a fidelity measurement may be used to determine the number of candidate documents that the preliminary ranking function must return to be sure that all of the final ranking function's top ten results are included among those candidates.
  • a threshold such as 99%. Therefore, for example, a goal can be that 99% of the time, the top ten results returned by the final ranking function are in the top 50 documents returned by the preliminary ranking function.
  • these numbers can vary and are given for illustrative purposes only.
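One way to pick such a cutoff is to sweep over candidate-set sizes on a sample of queries until the target is met. The sketch below is a hypothetical calibration routine (names and the synthetic sample are illustrative, not from the patent):

```python
def calibrate_cutoff(queries, target=0.99, top_k=10, max_n=1000):
    """Find the smallest preliminary cutoff N such that, for at least
    `target` of the sample queries, every one of the final ranker's
    top_k documents appears in the preliminary ranker's top N.

    `queries` is a list of (prelim_ranking, final_top_k) pairs,
    each a list of document ids, best first."""
    for n in range(top_k, max_n + 1):
        hits = 0
        for prelim, final in queries:
            if set(final[:top_k]) <= set(prelim[:n]):
                hits += 1
        if hits / len(queries) >= target:
            return n
    return None  # target not reachable within max_n

# Two synthetic queries: in the second, the final ranker's document 50
# sits deep in the preliminary ranking and forces a larger cutoff.
queries = [
    (list(range(100)), list(range(10))),
    (list(range(100)), [50] + list(range(9))),
]
print(calibrate_cutoff(queries, target=1.0))  # 51
```

In practice the sweep would run over a large held-out query log, and the returned N becomes the pruning threshold for the preliminary stage.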
  • Embodiments described herein enable agility of the final ranking function without requiring a full rebuild of the precomputed data used for the preliminary ranking stage.
  • the new pruning thresholds for the preliminary ranking stage may be determined for the desired error range. For instance, a threshold may be chosen so that, 99% of the time, the preliminary ranking stage places the final ranking stage's top ten documents within its top 50.
  • a new ranking feature may be determined to increase the accuracy of the final ranking function. Any disagreements between the results of the new final ranking function and the preliminary ranking function may be fine-tuned without recomputing the precomputed scores. As long as the preliminary ranking stage has done a good job by the old standard, it is likely to do a good job by the new standard, or the new/updated final ranking function.
  • a flow diagram is shown of a method 300 for generating an algorithm used to provide preliminary rankings to a plurality of documents.
  • a final ranking function is analyzed at step 310.
  • the final ranking function as described above, is used to calculate final rankings for a plurality of documents.
  • the final ranking function is expensive to perform and thus is employed for a limited number of candidate documents, such as those documents returned from the preliminary ranking function.
  • although the final ranking function is referred to as "final," one or more ranking stages may be employed subsequent to the final ranking stage. It is termed "final" because it is the last stage referred to herein.
  • potential preliminary ranking features are identified from the final ranking function.
  • These identified features may include static ranking features that are query independent and dynamic atom-isolated components that are related to only a single atom.
  • Those ranking features that are not identified as potential preliminary ranking features may be those ranking features that are dynamic atom-correlation components that have atom-interdependencies, such as term closeness, term co-occurrence, etc.
  • Static ranking features (e.g., page rank, spam ratings) are query-independent.
  • Dynamic atom-isolated components only take into account those features that are related to single atoms at a time (e.g., frequency, context).
  • the preliminary ranking features are used in the preliminary ranking function to calculate rankings of candidate documents.
  • the preliminary ranking features may include some of the ranking features identified in step 314, as well as some features that are not used in the final ranking function but that have proven to be useful and accurate in the preliminary ranking function.
  • the preliminary ranking features may be manually identified, thus requiring user interaction (e.g., human engineering).
  • the preliminary ranking features may be selected with the assistance of a machine-learning tool that evaluates the ease of calculation, usefulness, adaptiveness, etc., of a particular feature and then determines whether that feature should be used in the preliminary ranking function.
  • a combination of a manual selection and a machine-learning tool are utilized to select preliminary ranking features.
  • preliminary ranking features are selected based on many factors. These factors may include, for exemplary purposes only, an ease of use of the ranking features in the preliminary ranking function, a usefulness of the ranking features as determined by a fidelity measurement between the preliminary ranking and the final ranking of the documents, an adaptiveness of the ranking features when the preliminary ranking function is modified, a cost of computing the feature, or the like.
  • While preliminary ranking features may be easy to compute, they may also be difficult to compute in real-time once a query is received, and thus may be used for the preliminary rankings because they can be precomputed, eliminating the need for their computation in real-time. A combination of these factors may be considered.
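The feature-selection process described above, whether manual or machine-assisted, amounts to scoring each candidate feature by how much it improves fidelity. A minimal sketch, assuming a greedy forward-selection strategy (the strategy, function names, and toy fidelity model are all hypothetical, not prescribed by the patent):

```python
def select_preliminary_features(candidates, fidelity_of, budget=3):
    """Greedy forward selection: repeatedly add the candidate feature
    whose inclusion most improves the fidelity of the preliminary
    ranking against the final ranking. `fidelity_of` is a callback
    that scores a feature set (e.g., via a held-out query sample)."""
    chosen = []
    for _ in range(budget):
        best, best_gain = None, 0.0
        base = fidelity_of(chosen)
        for f in candidates:
            if f in chosen:
                continue
            gain = fidelity_of(chosen + [f]) - base
            if gain > best_gain:
                best, best_gain = f, gain
        if best is None:  # no remaining feature helps
            break
        chosen.append(best)
    return chosen

# Toy fidelity model: "pagerank" helps a lot, "freq" a little,
# "noise" not at all, so only the first two are selected.
def toy_fidelity(features):
    return 0.5 + 0.2 * ("pagerank" in features) + 0.1 * ("freq" in features)

print(select_preliminary_features(["pagerank", "freq", "noise"], toy_fidelity))
```

A feature whose gain is zero (or negative) is simply never chosen, which mirrors the passage above: features found not to be useful are removed from (or never enter) the preliminary ranking function.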
  • an algorithm is generated from the preliminary ranking features to calculate preliminary rankings of documents.
  • the top-ranked documents (e.g., top 100, top 1000, top 2000) are sent to the final ranking function for final rankings.
  • the top-ranked documents from the final ranking function are those that may be presented to the user in response to the user's search query.
  • a search query is received from a user.
  • the algorithm generated at step 316 for the preliminary ranking function is used to algorithmically identify a subset of the documents that are most relevant to the search query. These candidate documents are communicated (e.g., by way of document identification) to a final ranking stage so that a final ranking function can assign a final ranking to the candidate documents and determine those that are most relevant to the search query. These results are presented to the user.
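The two-stage flow just described can be sketched end to end. This is a deliberately tiny illustration (the index contents, scores, and scorer are hypothetical), in which the preliminary stage merely sums precomputed per-atom scores and only the surviving candidates reach the expensive final scorer:

```python
def two_stage_rank(query_atoms, index, final_score, top_n=3):
    """Two-stage ranking sketch: the preliminary stage sums precomputed
    per-atom scores from the index, keeps the top_n candidates, and only
    those are re-ranked with the (expensive) final scoring function."""
    prelim = {}
    for atom in query_atoms:
        for doc, score in index.get(atom, {}).items():
            prelim[doc] = prelim.get(doc, 0.0) + score
    # Only the top_n preliminary candidates reach the final stage.
    candidates = sorted(prelim, key=prelim.get, reverse=True)[:top_n]
    return sorted(candidates, key=final_score, reverse=True)

# Precomputed per-atom scores (hypothetical values) and a toy final scorer.
index = {"cat": {"d1": 2.0, "d2": 1.0, "d3": 0.5, "d4": 0.1}}
final = {"d1": 1.0, "d2": 3.0, "d3": 2.0, "d4": 9.0}.get

print(two_stage_rank(["cat"], index, final))  # ['d2', 'd3', 'd1']
```

Note that d4, despite having the highest final score, is pruned by the preliminary stage and never reaches the final ranker; this is exactly the loss that the fidelity measurements discussed above are designed to quantify and bound.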
  • FIG. 4 is a flow diagram showing a method 400 for calculating a preliminary ranking for documents.
  • static ranking features are those that are query-independent and, in some instances, may not be related to a search query at all. For instance, static features may include page rank, spam ratings, language of the page, etc.
  • dynamic atom-isolated components are identified. Dynamic atom-isolated components are ranking features that are related to single atoms at a time and how the atoms appear in the context of a particular document such that the precomputed score can be assigned to atom/document pairs ahead of receiving a search query and can be stored in, for instance, a search index.
  • the static features and dynamic atom-isolated components are at least partially identified from a final ranking function such that the preliminary ranking function is basically a simplified form of the final ranking function. Features not used by the final ranking function may also be used for the preliminary ranking function.
  • a set of preliminary ranking features are selected at step 414. These preliminary ranking features may be static ranking features and/or dynamic atom-isolated components.
  • data corresponding to the set of preliminary ranking features is extracted at step 416.
  • the data may be extracted from a search index, for example.
  • extracted data may include precomputed scores for the set of preliminary ranking features in association with a plurality of documents. Precomputed scores may be for specific atom/document pairs such that the precomputed score takes into account various features, or precomputed scores may be for an atom/document pair but for a particular feature, such as how many times that atom appears in a particular document.
  • the precomputed scores may be stored in a search index.
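At build time, the precomputed atom/document scores described above can be materialized into a reverse index keyed by atom. The sketch below is a minimal illustration assuming the simplest possible per-feature score, a raw term frequency (the score choice and names are assumptions, not the patent's scoring scheme):

```python
from collections import Counter, defaultdict

def build_precomputed_index(docs):
    """Build-time sketch: precompute a score for every atom/document
    pair (here just the atom's term frequency) and store it in a
    reverse index keyed by atom, so that no per-atom computation is
    needed when a query arrives."""
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        for atom, freq in Counter(text.lower().split()).items():
            index[atom][doc_id] = float(freq)  # precomputed score
    return index

index = build_precomputed_index({"d1": "cat sat on the cat mat",
                                 "d2": "the dog sat"})
print(index["cat"]["d1"])  # 2.0
```

A real index would combine several per-feature scores (or store one composite score per atom/document pair, as the passage above notes) and would be serialized to disk rather than kept as an in-memory dict.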
  • the extracted data is utilized at step 418 to calculate a preliminary ranking of the first document. For instance, as previously described, a preliminary ranking function may utilize an algorithm used to calculate preliminary rankings of documents.
  • the top N highest-ranked documents can be identified and sent on to the final ranking stage, wherein N can be any number and may vary. For instance, it may be determined whether a relevance, as determined by a preliminary ranking, of a first document exceeds a threshold. Based on the relevance of the first document exceeding the threshold, a document identification of the first document may be sent to a final ranking stage that assigns the first document with a final ranking.
  • the final ranking stage utilizes, in addition to the static features and dynamic atom-isolated components, dynamic atom-correlation components that are query-dependent in determining the final ranking of documents.
  • Dynamic atom-isolated components may be, for example, a frequency of a particular atom in a document or a contextual location of a particular atom in a document.
  • Contextual locations include, for instance, a title, anchor, header, body, traffic class, attributes, and uniform resource locator (URL) of a document.
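Scoring by contextual location typically means weighting an atom's occurrences differently per location. A minimal sketch, with entirely hypothetical weights (the patent does not prescribe any particular values):

```python
# Hypothetical per-location weights: an atom match in the title counts
# for more than one in the body.
LOCATION_WEIGHTS = {"title": 3.0, "anchor": 2.0, "header": 1.5,
                    "body": 1.0, "url": 2.5}

def location_score(occurrences):
    """`occurrences` maps a contextual location to the number of times
    the atom appears there in the document; unknown locations fall
    back to the body weight of 1.0."""
    return sum(LOCATION_WEIGHTS.get(loc, 1.0) * n
               for loc, n in occurrences.items())

print(location_score({"title": 1, "body": 4}))  # 3.0 + 4*1.0 = 7.0
```

Because such a score depends only on a single atom and a single document, it can be precomputed and stored per atom/document pair just like the other preliminary features discussed above.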
  • a flow diagram illustrates a method 500 for utilizing ranking features from a final ranking stage in a preliminary ranking stage to determine preliminary rankings for documents.
  • a final ranking function is analyzed.
  • a first subset of ranking features is identified, and includes query-independent ranking features and single atom ranking features.
  • a second subset of ranking features is selected. These ranking features are not used in the final ranking function.
  • preliminary ranking features are selected from the first and second subsets of ranking features. These selected preliminary ranking features are used in calculating a preliminary ranking of documents using a preliminary ranking function that limits a quantity of documents that are ultimately ranked using the final ranking function.
  • a subset of documents is algorithmically identified at step 516 based on the preliminary ranking function.
  • the preliminary ranking function utilizes data associated with the first and second subsets of ranking features, such as precomputed scores of atom/document pairs, including scores related to a query-independent ranking feature (e.g., static feature) in association with a particular document.
  • the data may be extracted from a search index, such as a forward index (e.g., indexed by document identification) or a reverse index (e.g., indexed by atom).
  • Document identifications corresponding to the subset of documents resulting from the preliminary ranking stage are communicated at step 518 to a final ranking stage that calculates final rankings for the subset of documents such that the top- ranked documents from the final ranking stage are presented to a user based on the user's search query.
  • fidelity metrics are calculated between the preliminary rankings and final rankings for a group of documents to determine the accuracy of the preliminary ranking stage, which is generally a simplified version of the final ranking stage. Fidelity measurements are described in more detail above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Methods and computer storage media are provided for generating an algorithm used to provide preliminary rankings for candidate documents. A final ranking function that provides the final rankings for the documents is analyzed to identify potential preliminary ranking features, such as static ranking features that are query-independent and dynamic atom-isolated components that are related to a single atom. Preliminary ranking features are selected from the potential preliminary ranking features based on many factors. Using these selected features, an algorithm is generated to provide a preliminary ranking of the candidate documents before the most relevant documents move on to the final ranking stage.
EP11842627.9A 2010-11-22 2011-11-07 Decomposable ranking for efficient precomputing Ceased EP2643775A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/951,659 US8478704B2 (en) 2010-11-22 2010-11-22 Decomposable ranking for efficient precomputing that selects preliminary ranking features comprising static ranking features and dynamic atom-isolated components
PCT/US2011/059650 WO2012071165A1 (fr) 2010-11-22 2011-11-07 Classement décomposable pour précalcul efficace

Publications (2)

Publication Number Publication Date
EP2643775A1 true EP2643775A1 (fr) 2013-10-02
EP2643775A4 EP2643775A4 (fr) 2018-01-24

Family

ID=46065287

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11842627.9A Ceased EP2643775A4 (fr) 2010-11-22 2011-11-07 Classement décomposable pour précalcul efficace

Country Status (4)

Country Link
US (2) US8478704B2 (fr)
EP (1) EP2643775A4 (fr)
CN (1) CN102521270B (fr)
WO (1) WO2012071165A1 (fr)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8478704B2 (en) * 2010-11-22 2013-07-02 Microsoft Corporation Decomposable ranking for efficient precomputing that selects preliminary ranking features comprising static ranking features and dynamic atom-isolated components
US9424351B2 (en) 2010-11-22 2016-08-23 Microsoft Technology Licensing, Llc Hybrid-distribution model for search engine indexes
US9342582B2 (en) 2010-11-22 2016-05-17 Microsoft Technology Licensing, Llc Selection of atoms for search engine retrieval
US8713024B2 (en) 2010-11-22 2014-04-29 Microsoft Corporation Efficient forward ranking in a search engine
US9529908B2 (en) 2010-11-22 2016-12-27 Microsoft Technology Licensing, Llc Tiering of posting lists in search engine index
US9195745B2 (en) 2010-11-22 2015-11-24 Microsoft Technology Licensing, Llc Dynamic query master agent for query execution
CN103279498A (zh) * 2013-05-08 2013-09-04 Jiaxing Electric Power Bureau Infrared spectrum rapid query method based on combined conditions
RU2580432C1 (ru) 2014-10-31 2016-04-10 Общество С Ограниченной Ответственностью "Яндекс" Способ для обработки запроса от потенциального несанкционированного пользователя на доступ к ресурсу и серверу, используемый в нем
RU2610280C2 (ru) 2014-10-31 2017-02-08 Общество С Ограниченной Ответственностью "Яндекс" Способ авторизации пользователя в сети и сервер, используемый в нем
CN107004026B (zh) * 2014-11-03 2020-09-22 艾玛迪斯简易股份公司 管理预先计算的搜索结果
US10546030B2 (en) * 2016-02-01 2020-01-28 Microsoft Technology Licensing, Llc Low latency pre-web classification
US9842041B1 (en) * 2016-11-29 2017-12-12 Toyota Jidosha Kabushiki Kaisha Approximation of datastore storing indexed data entries
US9983976B1 (en) 2016-11-29 2018-05-29 Toyota Jidosha Kabushiki Kaisha Falsification of software program with datastore(s)

Family Cites Families (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4769772A (en) 1985-02-28 1988-09-06 Honeywell Bull, Inc. Automated query optimization method using both global and parallel local optimizations for materialization access planning for distributed databases
US5193180A (en) 1991-06-21 1993-03-09 Pure Software Inc. System for modifying relocatable object code files to monitor accesses to dynamically allocated memory
US5467425A (en) 1993-02-26 1995-11-14 International Business Machines Corporation Building scalable N-gram language models using maximum likelihood maximum entropy N-gram models
US6173298B1 (en) 1996-09-17 2001-01-09 Asap, Ltd. Method and apparatus for implementing a dynamic collocation dictionary
US5983216A (en) 1997-09-12 1999-11-09 Infoseek Corporation Performing automated document collection and selection by providing a meta-index with meta-index values indentifying corresponding document collections
US6571251B1 (en) 1997-12-30 2003-05-27 International Business Machines Corporation Case-based reasoning system and method with a search engine that compares the input tokens with view tokens for matching cases within view
BE1012981A3 (nl) 1998-04-22 2001-07-03 Het Babbage Inst Voor Kennis E Method and system for retrieving documents via an electronic data file.
NO992269D0 (no) 1999-05-10 1999-05-10 Fast Search & Transfer Asa Search engine with two-dimensionally scalable, parallel architecture
US6507829B1 (en) 1999-06-18 2003-01-14 Ppd Development, Lp Textual data classification method and apparatus
US6704729B1 (en) 2000-05-19 2004-03-09 Microsoft Corporation Retrieval of relevant information categories
US20030217052A1 (en) * 2000-08-24 2003-11-20 Celebros Ltd. Search engine method and apparatus
NO313399B1 (no) 2000-09-14 2002-09-23 Fast Search & Transfer Asa Method for searching and analyzing information in computer networks
AUPR082400A0 (en) 2000-10-17 2000-11-09 Telstra R & D Management Pty Ltd An information retrieval system
US20020091671A1 (en) 2000-11-23 2002-07-11 Andreas Prokoph Method and system for data retrieval in large collections of data
US6766316B2 (en) 2001-01-18 2004-07-20 Science Applications International Corporation Method and system of ranking and clustering for document indexing and retrieval
JP4342753B2 (ja) 2001-08-10 2009-10-14 Ricoh Co., Ltd. Document retrieval device, document retrieval method, program, and computer-readable storage medium
US6901411B2 (en) 2002-02-11 2005-05-31 Microsoft Corporation Statistical bigram correlation model for image retrieval
US7039631B1 (en) 2002-05-24 2006-05-02 Microsoft Corporation System and method for providing search results with configurable scoring formula
US7111000B2 (en) * 2003-01-06 2006-09-19 Microsoft Corporation Retrieval of structured documents
US7382358B2 (en) 2003-01-16 2008-06-03 Forword Input, Inc. System and method for continuous stroke word-based text input
US7421418B2 (en) 2003-02-19 2008-09-02 Nahava Inc. Method and apparatus for fundamental operations on token sequences: computing similarity, extracting term values, and searching efficiently
US20040243632A1 (en) 2003-05-30 2004-12-02 International Business Machines Corporation Adaptive evaluation of text search queries with blackbox scoring functions
US7433893B2 (en) 2004-03-08 2008-10-07 Marpex Inc. Method and system for compression indexing and efficient proximity search of text data
US7254774B2 (en) 2004-03-16 2007-08-07 Microsoft Corporation Systems and methods for improved spell checking
US7580921B2 (en) 2004-07-26 2009-08-25 Google Inc. Phrase identification in an information retrieval system
US7584175B2 (en) 2004-07-26 2009-09-01 Google Inc. Phrase-based generation of document descriptions
US7305385B1 (en) 2004-09-10 2007-12-04 Aol Llc N-gram based text searching
US7461064B2 (en) 2004-09-24 2008-12-02 International Buiness Machines Corporation Method for searching documents for ranges of numeric values
US7805446B2 (en) 2004-10-12 2010-09-28 Ut-Battelle Llc Agent-based method for distributed clustering of textual information
US7689615B2 (en) 2005-02-25 2010-03-30 Microsoft Corporation Ranking results using multiple nested ranking
US20060248066A1 (en) 2005-04-28 2006-11-02 Microsoft Corporation System and method for optimizing search results through equivalent results collapsing
US20070250501A1 (en) 2005-09-27 2007-10-25 Grubb Michael L Search result delivery engine
US20070078653A1 (en) 2005-10-03 2007-04-05 Nokia Corporation Language model compression
US7596745B2 (en) 2005-11-14 2009-09-29 Sun Microsystems, Inc. Programmable hardware finite state machine for facilitating tokenization of an XML document
US7624118B2 (en) 2006-07-26 2009-11-24 Microsoft Corporation Data processing over very large databases
US7593934B2 (en) 2006-07-28 2009-09-22 Microsoft Corporation Learning a document ranking using a loss function with a rank pair or a query parameter
US7620634B2 (en) 2006-07-31 2009-11-17 Microsoft Corporation Ranking functions using an incrementally-updatable, modified naïve bayesian query classifier
US7805438B2 (en) 2006-07-31 2010-09-28 Microsoft Corporation Learning a document ranking function using fidelity-based error measurements
US7765215B2 (en) 2006-08-22 2010-07-27 International Business Machines Corporation System and method for providing a trustworthy inverted index to enable searching of records
US20080059489A1 (en) 2006-08-30 2008-03-06 International Business Machines Corporation Method for parallel query processing with non-dedicated, heterogeneous computers that is resilient to load bursts and node failures
US8401841B2 (en) 2006-08-31 2013-03-19 Orcatec Llc Retrieval of documents using language models
US7895210B2 (en) 2006-09-29 2011-02-22 Battelle Memorial Institute Methods and apparatuses for information analysis on shared and distributed computing systems
US7761407B1 (en) 2006-10-10 2010-07-20 Medallia, Inc. Use of primary and secondary indexes to facilitate aggregation of records of an OLAP data cube
US20080114750A1 (en) 2006-11-14 2008-05-15 Microsoft Corporation Retrieval and ranking of items utilizing similarity
US7783644B1 (en) 2006-12-13 2010-08-24 Google Inc. Query-independent entity importance in books
US7930290B2 (en) * 2007-01-12 2011-04-19 Microsoft Corporation Providing virtual really simple syndication (RSS) feeds
US20080208836A1 (en) 2007-02-23 2008-08-28 Yahoo! Inc. Regression framework for learning ranking functions using relative preferences
US7693813B1 (en) 2007-03-30 2010-04-06 Google Inc. Index server architecture using tiered and sharded phrase posting lists
US7702614B1 (en) 2007-03-30 2010-04-20 Google Inc. Index updating using segment swapping
US8583419B2 (en) 2007-04-02 2013-11-12 Syed Yasin Latent metonymical analysis and indexing (LMAI)
US7792846B1 (en) 2007-07-27 2010-09-07 Sonicwall, Inc. Training procedure for N-gram-based statistical content classification
WO2009039392A1 (fr) 2007-09-21 2009-03-26 The Board Of Trustees Of The University Of Illinois Système pour une recherche d'entité et procédé pour noter une entité dans une base de données de documents reliés
US8332411B2 (en) 2007-10-19 2012-12-11 Microsoft Corporation Boosting a ranker for improved ranking accuracy
US20090112843A1 (en) 2007-10-29 2009-04-30 International Business Machines Corporation System and method for providing differentiated service levels for search index
US20090132515A1 (en) 2007-11-19 2009-05-21 Yumao Lu Method and Apparatus for Performing Multi-Phase Ranking of Web Search Results by Re-Ranking Results Using Feature and Label Calibration
US7917503B2 (en) 2008-01-17 2011-03-29 Microsoft Corporation Specifying relevance ranking preferences utilizing search scopes
US7853599B2 (en) 2008-01-21 2010-12-14 Microsoft Corporation Feature selection for ranking
US8924374B2 (en) 2008-02-22 2014-12-30 Tigerlogic Corporation Systems and methods of semantically annotating documents of different structures
US8229921B2 (en) 2008-02-25 2012-07-24 Mitsubishi Electric Research Laboratories, Inc. Method for indexing for retrieving documents using particles
US8010482B2 (en) * 2008-03-03 2011-08-30 Microsoft Corporation Locally computable spam detection features and robust pagerank
US20090248669A1 (en) 2008-04-01 2009-10-01 Nitin Mangesh Shetti Method and system for organizing information
US20090254523A1 (en) 2008-04-04 2009-10-08 Yahoo! Inc. Hybrid term and document-based indexing for search query resolution
US8161036B2 (en) * 2008-06-27 2012-04-17 Microsoft Corporation Index optimization for ranking using a linear model
US8171031B2 (en) 2008-06-27 2012-05-01 Microsoft Corporation Index optimization for ranking using a linear model
US8458170B2 (en) 2008-06-30 2013-06-04 Yahoo! Inc. Prefetching data for document ranking
US8255391B2 (en) 2008-09-02 2012-08-28 Conductor, Inc. System and method for generating an approximation of a search engine ranking algorithm
US20100082617A1 (en) 2008-09-24 2010-04-01 Microsoft Corporation Pair-wise ranking model for information retrieval
JP4633162B2 (ja) 2008-12-01 2011-02-16 NTT Docomo, Inc. Index generation system, information retrieval system, and index generation method
US8341095B2 (en) 2009-01-12 2012-12-25 Nec Laboratories America, Inc. Supervised semantic indexing and its extensions
US8676827B2 (en) 2009-02-04 2014-03-18 Yahoo! Inc. Rare query expansion by web feature matching
US8620900B2 (en) 2009-02-09 2013-12-31 The Hong Kong Polytechnic University Method for using dual indices to support query expansion, relevance/non-relevance models, blind/relevance feedback and an intelligent search interface
US8527523B1 (en) * 2009-04-22 2013-09-03 Equivio Ltd. System for enhancing expert-based computerized analysis of a set of digital documents and methods useful in conjunction therewith
US8271499B2 (en) 2009-06-10 2012-09-18 At&T Intellectual Property I, L.P. Incremental maintenance of inverted indexes for approximate string matching
US8886641B2 (en) * 2009-10-15 2014-11-11 Yahoo! Inc. Incorporating recency in network search using machine learning
US9110971B2 (en) * 2010-02-03 2015-08-18 Thomson Reuters Global Resources Method and system for ranking intellectual property documents using claim analysis
US8478704B2 (en) * 2010-11-22 2013-07-02 Microsoft Corporation Decomposable ranking for efficient precomputing that selects preliminary ranking features comprising static ranking features and dynamic atom-isolated components

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2012071165A1 *

Also Published As

Publication number Publication date
CN102521270B (zh) 2015-04-01
WO2012071165A1 (fr) 2012-05-31
US20120130925A1 (en) 2012-05-24
EP2643775A4 (fr) 2018-01-24
CN102521270A (zh) 2012-06-27
US8805755B2 (en) 2014-08-12
US20130297621A1 (en) 2013-11-07
US8478704B2 (en) 2013-07-02

Similar Documents

Publication Publication Date Title
US8805755B2 (en) Decomposable ranking for efficient precomputing
US11803596B2 (en) Efficient forward ranking in a search engine
US8713024B2 (en) Efficient forward ranking in a search engine
US8260664B2 (en) Semantic advertising selection from lateral concepts and topics
US8204874B2 (en) Abbreviation handling in web search
US9342582B2 (en) Selection of atoms for search engine retrieval
US9043197B1 (en) Extracting information from unstructured text using generalized extraction patterns
US8909652B2 (en) Determining entity popularity using search queries
US8620907B2 (en) Matching funnel for large document index
CN105045781B (zh) Query term similarity calculation method and device, and query term search method and device
US20100185623A1 (en) Topical ranking in information retrieval
WO2011130008A2 (fr) Automatic generation of query suggestions using sub-queries
WO2011097053A2 (fr) Generation and presentation of lateral concepts
JP2009525520A (ja) 検索結果リストにおける電子文書を関連性に基づきランク付けおよびソートする評価方法、およびデータベース検索エンジン
US20080065620A1 (en) Recommending advertising key phrases
US8364672B2 (en) Concept disambiguation via search engine search results
JP2014532240A (ja) 情報の検索
JP5250009B2 (ja) Suggestion query extraction device and method, and program
US20110099066A1 (en) Utilizing user profile data for advertisement selection
KR20120038418A (ko) Searching method and device
CN114391142A (zh) Parsing queries using structured and unstructured data
US8161065B2 (en) Facilitating advertisement selection using advertisable units
Linnusmäki Increasing e-commerce conversion rate with relevant search results using Elasticsearch
JP3333186B2 (ja) Document retrieval system
JP2005031949A (ja) Information retrieval method, information retrieval device, and program

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20130422

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1183953

Country of ref document: HK

DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC

RA4 Supplementary search report drawn up and despatched (corrected)

Effective date: 20180103

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 9/44 20180101ALI20171220BHEP

Ipc: G06F 17/30 20060101AFI20171220BHEP

17Q First examination report despatched

Effective date: 20181116

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20200307

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1183953

Country of ref document: HK