BACKGROUND

[0001]
Term mismatch can be a challenge when performing a search. For instance, a query and its relevant documents are often composed using different vocabularies and language styles, which can cause term mismatch. Conventional algorithms utilized by search engines to match documents to queries may be detrimentally impacted by term mismatch, and thus, query expansion (QE) is oftentimes employed to address such a challenge. Query expansion can expand a query issued by a user with additional relevant terms, called expansion terms, so that more relevant documents can be retrieved.

[0002]
Various conventional QE techniques have been implemented for information retrieval (IR). Some traditional QE techniques based on automatic relevance feedback (e.g., explicit feedback and pseudo-relevance feedback (PRF)) can enhance performance of IR. Yet, such techniques may not be directly applicable to a commercial web search engine because relevant documents may be unavailable. Moreover, generation of pseudo-relevant documents can employ multi-phase retrieval, which may be expensive and time-consuming to perform in real time.

[0003]
QE techniques, developed recently, utilize search logs (e.g., click-through data). These techniques, called log-based QE, can also derive expansion terms for a query from a (pseudo-)relevant document set. However, different from techniques based on automatic relevance feedback, the relevant set can be identified in log-based QE techniques from user clicks recorded in search logs. For example, the set of (pseudo-)relevant documents of an input query can be formed by including the documents that have been previously clicked for the query. Many conventional log-based QE techniques use a global model that is precomputed from search logs. The model can capture the correlation between query terms and document terms, and can be used to generate expansion terms for the input query on the fly.

[0004]
Despite the effectiveness of the log-based QE techniques, such approaches can suffer from various problems. For instance, data sparseness can impact effectiveness of log-based QE techniques. Consistent with Zipf's law, a significant portion of queries may have few or no clicks in the search logs. Moreover, ambiguity of search intent can detrimentally impact log-based QE techniques. For example, a term correlation model may fail to distinguish the search intent of the query term “book” in “school book” from that in “hotel booking”. Although the problem can be partially alleviated by using correlation models based on phrases and concepts, there may be scenarios where the search intent is unable to be correctly identified without use of global context. For instance, the query “why six bottles in one wrap” can be about a package, and the intent of the query “Acme baked bread” can concern looking for a bakery in California. In such cases, a (pseudo-)relevant document set of the input query, if available, can be more likely to preserve the original search intent than the global correlation model.
SUMMARY

[0005]
Described herein are various technologies that pertain to use of path-constrained random walks for query expansion and/or query-document matching. Click-through data from search logs can be represented as a computer-implemented labeled and directed graph. Path-constrained random walks (PCRW) can be executed over the computer-implemented labeled and directed graph for query expansion and/or document-query matching. The path-constrained random walks can be executed over the labeled and directed graph based upon an input query. The labeled and directed graph can include a first set of nodes that are representative of queries included in the click-through data from the search logs. Moreover, the labeled and directed graph can include a second set of nodes that are representative of documents included in the click-through data from the search logs. The labeled and directed graph can further include a third set of nodes that are representative of words from the queries and the documents. The labeled and directed graph can also include edges between nodes that are representative of relationships between the queries, the documents, and the words. The path-constrained random walks can include traversals over edges of the graph between nodes. Further, a score for a relationship between a target node and a source node representative of the input query can be computed based at least in part upon the path-constrained random walks.

[0006]
In accordance with various embodiments, query expansion techniques based on path-constrained random walks can be implemented. Accordingly, the target node of the path-constrained random walks can be representative of a candidate query expansion term (e.g., the third set of nodes that are representative of the words from the queries and the documents can include the target node). Thus, the score for the relationship between the target node representative of the candidate query expansion term and the source node representative of the input query can be computed. Such score can be computed as a learned combination of the path-constrained random walks on the labeled and directed graph between the target node representative of the candidate query expansion term and the source node representative of the input query. The score for the relationship can be a probability of picking the candidate query expansion term for the input query.

[0007]
In accordance with other embodiments, query-document matching techniques based upon path-constrained random walks over the labeled and directed graph can be implemented. Thus, the target node of the path-constrained random walks can be representative of a candidate document (e.g., the second set of nodes that are representative of the documents included in the click-through data from the search logs can include the target node). Accordingly, the score for the relationship between the target node representative of the candidate document and the source node representative of the input query can be computed. The score can be computed as a learned combination of the path-constrained random walks on the labeled and directed graph between the target node representative of the candidate document and the source node representative of the input query. Further, the score for the relationship can be a probability of the candidate document being relevant to the input query.

[0008]
Pursuant to various embodiments, the score for the relationship between the target node and the source node representative of the input query can be computed by determining respective values for the path-constrained random walks between the target node and the source node representative of the input query. For instance, the path-constrained random walks can traverse the edges of the graph between the nodes from the source node representative of the input query to the target node in accordance with differing path types. A path type can include a sequence of relations between the nodes in the graph for traversing as part of a corresponding path-constrained random walk. Thus, the path type can be a sequence of edge labels for edges included in the labeled and directed graph that can be followed during execution of the corresponding path-constrained random walk. Moreover, the respective values for the path-constrained random walks that traverse the edges of the graph between the nodes from the source node representative of the input query to the target node in accordance with the differing path types can be combined to compute the score for the relationship between the target node and the source node representative of the input query.

[0009]
The above summary presents a simplified summary in order to provide a basic understanding of some aspects of the systems and/or methods discussed herein. This summary is not an extensive overview of the systems and/or methods discussed herein. It is not intended to identify key/critical elements or to delineate the scope of such systems and/or methods. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
BRIEF DESCRIPTION OF THE DRAWINGS

[0010]
FIG. 1 illustrates a functional block diagram of an exemplary system that executes path-constrained random walks.

[0011]
FIG. 2 illustrates a functional block diagram of an exemplary system that executes path-constrained random walks as part of a search.

[0012]
FIG. 3 illustrates an exemplary labeled and directed graph.

[0013]
FIG. 4 illustrates a functional block diagram of an exemplary system that constructs the labeled and directed graph based upon click-through data from search logs.

[0014]
FIGS. 5-8 illustrate various exemplary path-constrained random walks between a source node that represents an input query Q and a target node that represents a candidate query expansion term w_{1}.

[0015]
FIG. 9 is a flow diagram that illustrates an exemplary methodology for using path-constrained random walks.

[0016]
FIG. 10 is a flow diagram that illustrates an exemplary methodology for performing query expansion or query-document matching using path-constrained random walks.

[0017]
FIG. 11 illustrates an exemplary computing device.
DETAILED DESCRIPTION

[0018]
Various technologies pertaining to use of path-constrained random walks for query expansion and/or query-document matching are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that such aspect(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more aspects. Further, it is to be understood that functionality that is described as being carried out by certain system components may be performed by multiple components. Similarly, for instance, a component may be configured to perform functionality that is described as being carried out by multiple components.

[0019]
Moreover, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from the context, the phrase “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, the phrase “X employs A or B” is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form.

[0020]
As set forth herein, query expansion and/or query-document matching based on path-constrained random walks can be implemented. Click-through data from search logs can be represented as a labeled and directed graph. For query expansion, a probability of picking a candidate query expansion term for an input query is computed by a learned combination of path-constrained random walks on the graph. Moreover, for query-document matching, a probability of a candidate document being relevant to an input query can be computed by a learned combination of path-constrained random walks on the graph.

[0021]
A principled framework that incorporates disparate models in a unified manner is provided herein. For instance, for query expansion, the framework can be generic by covering various QE models as special cases and flexible by enabling a variety of information to be combined in a unified manner. Moreover, the framework supports incorporating additional QE models (e.g., enabling QE model(s) to be later added or removed). Further, the path-constrained random walk-based techniques provided herein can effectively expand rare queries (e.g., low-frequency queries that are unseen in search logs) and provide enhanced performance as compared to conventional QE techniques.

[0022]
Referring now to the drawings, FIG. 1 illustrates a system 100 that executes path-constrained random walks. For example, the system 100 can implement query expansion based upon the path-constrained random walks. According to another example, the system 100 can implement query-document matching based upon the path-constrained random walks.

[0023]
The system 100 includes a data repository 102 that retains a labeled and directed graph 104. Search logs, which can include clicked query-document pairs, can be represented as the labeled and directed graph 104, which includes three types of nodes representing respectively queries, documents, and words (e.g., candidate expansion terms). Thus, the labeled and directed graph 104 includes a first set of nodes that are representative of queries included in click-through data from the search logs, a second set of nodes that are representative of documents included in the click-through data from the search logs, and a third set of nodes that are representative of words from the queries and the documents. Moreover, the labeled and directed graph 104 includes edges between nodes that are representative of relationships between the queries, the documents, and the words. The edges between the nodes included in the labeled and directed graph 104 are labeled by respective relations. The edges in the labeled and directed graph 104 can further be assigned respective edge scores based upon relation-specific probabilistic models for the respective relations.

[0024]
The system 100 further includes a random walk component 106 that can receive an input query 108. The random walk component 106 can execute path-constrained random walks over the labeled and directed graph 104 based upon the input query 108. The path-constrained random walks executed by the random walk component 106 can include traversals over edges of the graph 104 between nodes. The path-constrained random walks traverse the edges of the graph 104 between the nodes in accordance with predefined path types 110. Each of the predefined path types 110 can include a respective sequence of relations between the nodes in the graph 104 for traversing as part of a corresponding path-constrained random walk executed by the random walk component 106.

[0025]
The path-constrained random walks executed by the random walk component 106 over the labeled and directed graph 104 instantiate respective differing path types 110. The path-constrained random walks executed by the random walk component 106 can begin at a source node representative of the input query 108. Moreover, the path-constrained random walks can traverse edges of the graph 104 between nodes in accordance with the differing predefined path types 110. For instance, a given path-constrained random walk can traverse edges of the graph 104 between nodes in accordance with a corresponding one of the path types 110, a disparate path-constrained random walk can traverse edges of the graph 104 between nodes in accordance with a disparate corresponding one of the path types 110, and so forth. Further, the path-constrained random walks can end at a target node.

[0026]
The system 100 also includes a relation evaluation component 112 that computes a score 114 for a relationship between a target node and the source node representative of the input query 108 based at least in part upon the path-constrained random walks. For instance, the relation evaluation component 112 can determine respective values for the path-constrained random walks between the target node and the source node representative of the input query 108, where the path-constrained random walks traverse the edges of the graph 104 between the nodes from the source node representative of the input query 108 to the target node in accordance with the differing path types 110. Moreover, the relation evaluation component 112 can combine the respective values for the path-constrained random walks to compute the score 114 for the relationship between the target node and the source node representative of the input query 108. According to various embodiments, weights can be assigned to the differing path types 110. Thus, the relation evaluation component 112 can combine the respective values for the path-constrained random walks that traverse the edges of the graph 104 between the nodes from the source node representative of the input query 108 to the target node in accordance with the differing path types 110 as a function of the weights assigned to the differing path types 110.

[0027]
While much of the aforementioned discussion pertains to computing the score 114 for the relationship between the target node and the source node that represents the input query 108, it is to be appreciated that scores for relationships between substantially any number of target nodes and the source node that represents the input query 108 can similarly be computed based at least in part upon respective path-constrained random walks. Moreover, such scores for the relationships between the target nodes and the source node can be ranked. For instance, a ranked list (e.g., of the target nodes) can be output based upon the respective scores for the corresponding relationships between the target nodes and the source node that represents the input query 108.

[0028]
Again, pursuant to various examples, the system 100 can implement query expansion based upon the path-constrained random walks over the labeled and directed graph 104 executed by the random walk component 106. Accordingly, the third set of nodes of the labeled and directed graph 104 that are representative of the words from the queries and the documents can include the target node. Thus, the target node can be representative of a candidate query expansion term. Further, the score 114 for the relationship can be a probability of picking the candidate query expansion term for the input query 108.

[0029]
According to other examples, the system 100 can implement query-document matching based upon the path-constrained random walks over the labeled and directed graph 104 executed by the random walk component 106. Thus, the second set of nodes of the labeled and directed graph 104 that are representative of the documents included in the click-through data from the search logs can include the target node. Hence, the target node can be representative of a candidate document. Moreover, the score 114 for the relationship can be a probability of the candidate document being relevant to the input query 108.

[0030]
Now turning to FIG. 2, illustrated is a system 200 that executes path-constrained random walks as part of a search. The system 200 includes the data repository 102, which retains the labeled and directed graph 104, and a search component 202. Further, the search component 202 can include the random walk component 106 and the relation evaluation component 112; yet, according to other examples (not shown), it is contemplated that the random walk component 106 and/or the relation evaluation component 112 can be separate from the search component 202.

[0031]
The search component 202 can execute substantially any type of search (e.g., web searches, desktop searches, etc.). The search component 202, for example, can be a search engine. Thus, by way of illustration, the search component 202 can be a web search engine, a desktop search engine, or the like; however, it is to be appreciated that the claimed subject matter is not limited to the foregoing illustrations.

[0032]
The search component 202 can receive the input query 108 (e.g., the input query 108 can desirably be input to the search component 202). Further, the random walk component 106 can execute the path-constrained random walks over the labeled and directed graph 104 based upon the input query 108. The relation evaluation component 112 can compute a score for a relationship between a target node and a source node that represents the input query 108 based at least upon the path-constrained random walks.

[0033]
Moreover, the search component 202 can include a rank component 204. It is contemplated that path-constrained random walks can be executed over the labeled and directed graph 104 based upon the input query 108 for a plurality of target nodes. The relation evaluation component 112 can compute respective scores for the relationships between such target nodes and the source node that represents the input query 108 based upon the respective path-constrained random walks. Further, the rank component 204 can output a ranked list based upon the respective scores for the corresponding relationships between target nodes and the source node that represents the input query 108. Moreover, the search component 202 can perform a search based upon the ranked list.

[0034]
In accordance with an example, query expansion can be implemented based upon the path-constrained random walks over the labeled and directed graph 104 executed by the random walk component 106. Following this example, the rank component 204 can output a ranked list of candidate query expansion terms based upon respective scores for corresponding relationships between target nodes representative of the candidate query expansion terms and the source node representative of the input query 108.

[0035]
By way of another example, query-document matching can be implemented based upon the path-constrained random walks over the labeled and directed graph 104 executed by the random walk component 106. Accordingly, the rank component 204 can output a ranked list of candidate documents based upon respective scores for corresponding relationships between target nodes representative of the candidate documents and the source node representative of the input query 108.

[0036]
Reference is again made to the exemplary scenario where query expansion is implemented. Thus, the target node can represent a candidate query expansion term. The search component 202 can select the candidate query expansion term based at least in part upon the score for the relationship between the target node representative of the candidate query expansion term and the source node representative of the input query 108 (e.g., based upon a position of the candidate query expansion term in the ranked list output by the rank component 204). According to an example, responsive to selecting the candidate query expansion term, the search component 202 can execute a search over a plurality of documents based at least in part upon the candidate query expansion term. Pursuant to another example, responsive to selecting the candidate query expansion term, the search component 202 can cause the candidate query expansion term to be displayed as a suggested query (e.g., to a user on a display screen of a user device). Following this example, if the suggested query corresponding to the candidate query expansion term is chosen (e.g., based upon user input), the search component 202 can execute a search over a plurality of documents based at least in part upon the candidate query expansion term. By way of illustration, the search component 202 can cause a top K candidate query expansion terms in the ranked list output by the rank component 204 to be displayed as suggested queries, where K can be substantially any integer. Following this illustration, one or more of the suggested queries can be chosen (e.g., based upon user input); accordingly, the search component 202 can execute a search based at least in part upon the one or more suggested queries that are chosen.
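The top-K selection of suggested queries described above can be sketched as follows. This is illustrative only; the function name and the dictionary of term scores are hypothetical, not elements of the described system:

```python
def top_k_suggestions(term_scores, k):
    """Rank candidate expansion terms by score and keep the K highest.

    term_scores: {term: score}, e.g. the probabilities output for the
    candidate query expansion terms. Returns the top K terms in
    descending order of score.
    """
    ranked = sorted(term_scores.items(), key=lambda item: item[1], reverse=True)
    return [term for term, _ in ranked[:k]]
```

For example, with scores {"bakery": 0.9, "bread": 0.5, "oven": 0.1} and K=2, the terms "bakery" and "bread" would be kept as suggested queries.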

[0037]
Moreover, reference is again made to the exemplary scenario where query-document matching is implemented. Accordingly, the target node can represent a candidate document. The search component 202 can return the candidate document responsive to execution of a search over a plurality of documents. The candidate document, for instance, can be returned by the search component 202 based at least in part upon the score for the relationship between the target node representative of the candidate document and the source node representative of the input query 108.

[0038]
It is noted that many of the following examples set forth herein pertain to use of the path-constrained random walks over the labeled and directed graph 104 for query expansion. It is to be appreciated, however, that such examples can be extended to scenarios where the path-constrained random walks over the labeled and directed graph 104 are employed for query-document matching.

[0039]
With reference to FIG. 3, illustrated is an exemplary labeled and directed graph 300 (e.g., the labeled and directed graph 104). The graph 300 includes a node 302 that represents an input query Q (e.g., a source node), nodes 304 that represent queries Q′ included in the click-through data from the search logs, nodes 306 that represent documents D included in the click-through data from the search logs, and nodes 308 that represent words w (collectively referred to herein as nodes 302-308). Moreover, the graph 300 includes edges between the nodes 302-308.

[0040]
For each path in the graph 300 that links the input query Q to a candidate expansion term w (e.g., one of the nodes 308, a target node, etc.), there is a path type π (e.g., one of the path types 110), defined by a sequence of edge labels. Each path type can be viewed as a particular process of generating w from Q. Further, a generation probability P(w|Q,π) is computed by random walks along the paths that instantiate the path type π, referred to as path-constrained random walks.

[0041]
Various log-based QE models can be formulated in the framework of path-constrained random walks by defining particular path types. The path-constrained random walks provide a generic and flexible modeling framework. For instance, the path-constrained random walks can cover various log-based QE models as special cases, while allowing for incorporation of other QE models (e.g., later developed QE models). For example, a rich set of walk behaviors that support a variety of labeled edges can be defined, where different information can be used at different stages of the walk.

[0042]
Moreover, because different QE approaches often rely on different sources and are potentially complementary, it may be desirable to combine them to address data sparseness and help disambiguate search intent. For example, while automatic feedback techniques using (pseudo-)relevant documents may retain search intent but suffer from data sparseness especially for rare queries, techniques based on global term correlation models may be applicable to both common and rare queries but, due to the limited context information such models capture, may lead to an unexpected shift of search intent. The path-constrained random walks provide a flexible mathematical framework in which different QE features, specified by path types π, can be incorporated in a unified way. Formally, in the path-constrained random walk-based QE approach set forth herein, a probability of picking w for a given Q, P(w|Q), can be computed (e.g., by the relation evaluation component 112) by a learned combination of path-constrained random walks on the graph 300 (e.g., P(w|Q)=Σ_{π∈B}λ_{π}P(w|Q,π), where the λ_{π}'s are the combination weights learned on training data). Accordingly, the use of path-constrained random walks can enhance robustness of QE to data sparseness while helping disambiguate search intents.

[0043]
Consider the directed, edge-labeled graph G=(C,T) (e.g., the graph 300), where T⊂C×R×C is the set of labeled edges (also known as triples) (c,r,c′). Each triple represents an instance r(c,c′) of the relation r∈R. For QE, a separate probabilistic model θ_{r} can be used for each relation r. The probabilistic model is used to assign a score to each edge: the probability of reaching c′ from c with a one-step random walk over an edge of type r, P(c′|c,θ_{r}).

[0044]
A path type in G is a sequence π=&lt;r_{1}, . . . , r_{m}&gt;. An instance of the path type is a sequence of nodes c_{0}, . . . , c_{m}, such that r_{i}(c_{i−1},c_{i}). Each path type specifies a real-valued feature. For a given node pair (s,t), where s is a source node and t is a target node, the value of the feature is P(t|s,π) (e.g., the probability of reaching t from s by a random walk that instantiates the path type, also known as a path-constrained random walk). Specifically, suppose that the random walk has just reached c_{i} by traversing edges labeled r_{1}, . . . , r_{i}, with s=c_{0}. Then c_{i+1} is drawn at random, according to θ_{r_{i+1}}, from the nodes reachable by edges labeled r_{i+1}. A path type π is active for the pair (s,t) if P(t|s,π)&gt;0.
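The walk just described can be sketched as probability propagation along the relation sequence. The graph encoding below (a dictionary mapping each relation label to its one-step transition probabilities) and the relation names in the example are illustrative assumptions, not the described implementation:

```python
def path_constrained_walk(edges, source, path_type):
    """Compute P(t|source, path_type) for every node t reachable from
    source by a random walk that instantiates path_type.

    edges: {relation: {node: {next_node: P(next_node|node)}}}, i.e. the
    one-step transition probabilities for each edge label.
    path_type: sequence of relation labels <r_1, ..., r_m>.
    """
    # Start with all probability mass on the source node.
    dist = {source: 1.0}
    for relation in path_type:
        next_dist = {}
        for node, prob in dist.items():
            # Follow only edges carrying the current relation label.
            for target, p in edges.get(relation, {}).get(node, {}).items():
                next_dist[target] = next_dist.get(target, 0.0) + prob * p
        dist = next_dist
    return dist

# Tiny illustrative graph: a query clicks two documents, each of which
# contains candidate terms (relation names here are made up).
example_edges = {
    "click.Q2D": {"Q": {"D1": 0.7, "D2": 0.3}},
    "contain.D2w": {"D1": {"w1": 1.0}, "D2": {"w1": 0.5, "w2": 0.5}},
}
```

Walking the path type &lt;click.Q2D, contain.D2w&gt; from Q then yields P(w1|Q,π)=0.7·1.0+0.3·0.5=0.85 and P(w2|Q,π)=0.15.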

[0045]
Let B={⊥,π_{1}, . . . , π_{n}} be the set of path types of length no greater than l that occur in the graph 300, together with the dummy type ⊥, which represents the bias feature. For instance, P(t|s,⊥)=1 may be set for all nodes s,t. The score for whether the target node t is related to the source node s can be given by:

[0000]
P(t|s)=Σ_{π∈B}λ_{π}P(t|s,π)  (1)

[0000]
In the foregoing, λ_{π} is the weight of feature π. The model parameters to be learned are the vector λ=&lt;λ_{π}&gt;_{π∈B}. Moreover, the construction of B and the estimation of λ can be application specific. For QE, the source node is the input query Q to be expanded (e.g., the node 302) and the target node is a candidate expansion term w (e.g., one of the nodes 308). Thus, Equation (1) gives the probability of whether w is an appropriate expansion term of Q.
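Equation (1) can be sketched as a weighted sum over path types, assuming the per-path-type walk probabilities P(t|s,π) have already been computed. The weights below are made-up stand-ins for the learned λ values:

```python
def combined_score(walk_probs, weights, bias_weight=0.0):
    """Score a (source, target) pair per Equation (1).

    walk_probs: {path_type: P(t|s, path_type)} for the active path types.
    weights: {path_type: lambda}, the learned combination weights.
    bias_weight: lambda for the dummy type, which contributes
    bias_weight * 1 since P(t|s, dummy) = 1 for every pair.
    """
    score = bias_weight  # dummy path type: P(t|s, dummy) = 1
    for path_type, prob in walk_probs.items():
        score += weights.get(path_type, 0.0) * prob
    return score
```

For instance, with walk probabilities {π1: 0.5, π2: 0.2}, weights {π1: 2.0, π2: 1.0}, and a bias weight of 0.1, the score is 0.1 + 2.0·0.5 + 1.0·0.2 = 1.3.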

[0046]
With reference to FIG. 4, illustrated is a system 400 that constructs the labeled and directed graph 104 based upon click-through data 402 from search logs. The click-through data 402 can be retained in a data repository 404. It is contemplated that the data repository 404 can be the data repository 102 of FIG. 1; yet, the claimed subject matter is not so limited. The click-through data 402 can include query-document pairs.

[0047]
Moreover, the system 400 includes a builder component 406 that constructs the labeled and directed graph 104 from the click-through data 402. The builder component 406 can further include a graph generation component 408 and an edge label component 410. The graph generation component 408 can generate nodes for documents, queries, and words. Further, the graph generation component 408 can create edges linking the nodes.

[0048]
The edge label component 410 can assign labels to the edges. More particularly, the edge label component 410 can label each edge in the graph by a respective relation. Further, the edge label component 410 can assign each edge in the labeled and directed graph 104 a respective edge score. The edge score for a given edge can be generated by the edge label component 410 based upon a relation-specific probabilistic model for the relation of the edge.

[0049]
The click-through data 402 includes a list of query-document pairs. Each pair includes a query and a document which has one or more user clicks for the query. Thus, the graph generation component 408 can represent the search logs as a graph G=(C,T) (e.g., the labeled and directed graph 104, the graph 300 of FIG. 3). Again, the graph generation component 408 defines three types of nodes to represent respectively queries, documents, and words that occur in queries and documents. A query in the search logs, denoted by Q′, has clicked document(s). An input query to be expanded, denoted by Q, can be a new, low-frequency query without clicked documents. Such a query can be referred to as a rare query. However, it is also contemplated that the input query to be expanded can alternatively be a query Q′ in the search logs that has clicked document(s). Q and Q′ are treated as different nodes in G (as shown in FIG. 3).
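The construction of the click edges can be sketched as follows, assuming click-through records of the form (query, document, click count) and, for simplicity, plain maximum-likelihood probabilities as edge scores. The function and record layout are illustrative assumptions, not the described builder component:

```python
from collections import defaultdict

def build_click_graph(click_pairs):
    """Build the click edges of the labeled, directed graph.

    click_pairs: iterable of (query, document, click_count) records.
    Returns {relation: {source: {target: probability}}}, where
    click.Q2D edges carry P(D|Q) = click(Q,D) / sum_i click(Q,D_i)
    and click.D2Q edges carry P(Q|D) = click(Q,D) / sum_i click(Q_i,D).
    """
    q2d = defaultdict(lambda: defaultdict(float))
    d2q = defaultdict(lambda: defaultdict(float))
    for query, doc, clicks in click_pairs:
        q2d[query][doc] += clicks
        d2q[doc][query] += clicks

    edges = {"click.Q2D": {}, "click.D2Q": {}}
    # Normalize clicks over the documents clicked for each query.
    for query, docs in q2d.items():
        total = sum(docs.values())
        edges["click.Q2D"][query] = {d: c / total for d, c in docs.items()}
    # Normalize clicks over the queries that clicked each document.
    for doc, queries in d2q.items():
        total = sum(queries.values())
        edges["click.D2Q"][doc] = {q: c / total for q, c in queries.items()}
    return edges
```

For example, from the records (q1, d1, 3), (q1, d2, 1), and (q2, d1, 2), the click.Q2D edge q1→d1 would be scored 3/4 = 0.75, and the click.D2Q edge d1→q1 would be scored 3/5 = 0.6.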

[0050]
The edge label component 410 labels each edge in the graph 104 by a relation r. Further, the edge label component 410 scores each edge in the graph 104 using a relation-specific model θ_r. The edge score is the probability of reaching a target node t from a source node s with a one-step random walk with edge type r, P(t|s,θ_r). Examples of relations r and their corresponding scoring functions score(s→t;r) are shown below in Table 1.

[0000]
TABLE 1

ID  Relation r       Scoring function score(s → t; r)
1   similar.Q2Q′     Cosine similarity between the term vectors of Q and Q′, where term weights are assigned using the BM25 function.
2   translate.Q2Q′   $\log \prod_{q' \in Q'} \sum_{q \in Q} P_{tm}(q' \mid q)\,\frac{\mathrm{tf}(q;Q)}{|Q|}$
3   click.Q2D        $\log P(D \mid Q) = \log \frac{\mathrm{click}(Q,D)}{\sum_{D_i \in \mathbf{D}} \mathrm{click}(Q,D_i)}$
4   click.D2Q        $\log P(Q \mid D) = \log \frac{\mathrm{click}(Q,D)}{\sum_{Q_i \in \mathbf{Q}} \mathrm{click}(Q_i,D)}$
5   generate.Q2w     $\log\left((1-\alpha)\,\frac{\mathrm{tf}(w;Q)}{|Q|} + \alpha\,\frac{\mathrm{cf}(w)}{|C|}\right)$
6   translate.Q2w    $\log \sum_{q \in Q} P_{tm}(w \mid q)\,\frac{\mathrm{tf}(q;Q)}{|Q|}$
7   generate.Q′2w    $\log\left((1-\alpha)\,\frac{\mathrm{tf}(w;Q')}{|Q'|} + \alpha\,\frac{\mathrm{cf}(w)}{|C|}\right)$
8   translate.Q′2w   $\log \sum_{q' \in Q'} P_{tm}(w \mid q')\,\frac{\mathrm{tf}(q';Q')}{|Q'|}$
9   click.Q′2D       $\log P(D \mid Q') = \log \frac{\mathrm{click}(Q',D)}{\sum_{D_i \in \mathbf{D}} \mathrm{click}(Q',D_i)}$
10  generate.D2w     $\log\left((1-\beta)\,\frac{\mathrm{tf}(w;D)}{|D|} + \beta\,\frac{\mathrm{cf}(w)}{|C|}\right)$
11  translate.D2w    $\log \sum_{w_i \in D} P_{tm}(w \mid w_i)\,\frac{\mathrm{tf}(w_i;D)}{|D|}$
12  click.D2Q′       $\log P(Q' \mid D) = \log \frac{\mathrm{click}(Q',D)}{\sum_{Q'_i \in \mathbf{Q}} \mathrm{click}(Q'_i,D)}$
13  generate.w2D     $\log P(D \mid w) = \log \frac{P_{lm}(w \mid D)\,P(D)}{\sum_{D_i \in \mathbf{D}} P_{lm}(w \mid D_i)\,P(D_i)}$, where $P_{lm}(w \mid D) = (1-\beta)\,\frac{\mathrm{tf}(w;D)}{|D|} + \beta\,\frac{\mathrm{cf}(w)}{|C|}$ and $P(D) = \frac{\sum_{Q \in \mathbf{Q}} \mathrm{click}(Q,D)}{N}$
14  generate.w2Q′    $\log P(Q' \mid w) = \log \frac{P_{lm}(w \mid Q')\,P(Q')}{\sum_{Q'_i \in \mathbf{Q}} P_{lm}(w \mid Q'_i)\,P(Q'_i)}$, where $P_{lm}(w \mid Q') = (1-\alpha)\,\frac{\mathrm{tf}(w;Q')}{|Q'|} + \alpha\,\frac{\mathrm{cf}(w)}{|C|}$ and $P(Q') = \frac{\sum_{D \in \mathbf{D}} \mathrm{click}(Q',D)}{N}$

[0051]
As noted above, Table 1 sets forth examples of relations r and their corresponding scoring functions. As provided above, tf(q;Q) is the number of times term q occurs in query Q, and |Q| is the length of query Q. tf(w;D) is the number of times term w occurs in D, and |D| is the length of document D. The cf(w) and |C| values are analogously defined on the collection level, where the collection includes the set of documents in the search logs. P_tm(·) is a word translation probability assigned by a translation model trained on query-title pairs derived from the click-through data 402. P_tm(q′|q) in #2 is also assigned by the same query-title translation model based on the assumption that an appropriate expansion term q′ is likely to occur in the titles of the clicked documents. click(Q′,D) is the number of times document D is clicked for Q′ in the search logs. In #13 and #14, D is the full set of documents in the search logs, Q is the full set of queries in the search logs, and N is the total number of clicks in the search logs (e.g., N = Σ_{Q∈Q} Σ_{D∈D} click(Q,D)). Further, α and β are model hyperparameters that control smoothing for the query and document language models, respectively.

[0052]
When scoring each edge in the graph 104 using the relation-specific model θ_r, the edge label component 410 can compute the edge score as a probability, P(t|s,θ_r), via softmax as follows:

[0000]
$P(t \mid s, \theta_r) = \frac{\exp(\mathrm{score}(s \to t; r))}{\sum_{t_i} \exp(\mathrm{score}(s \to t_i; r))} \qquad (2)$

[0000]
It is noted that conventional path-constrained random walk models commonly lack θ_r; the edge score is thus traditionally computed as:

[0000]
$P(t \mid s, r) = \frac{I(r(s,t))}{\sum_{t'} I(r(s,t'))}$

[0000]
In the foregoing, I(r(s,t)) is an indicator function that takes value 1 if there exists an edge with type r that connects s to t. In contrast, introducing θ_r as set forth herein allows various models that have been developed for QE and document ranking to be incorporated.
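As an illustration of Equation (2), the following sketch (not part of the described system; the relation scores below are hypothetical) converts relation-specific edge scores into one-step transition probabilities via softmax:

```python
import math

def edge_probabilities(scores):
    """Convert edge scores score(s -> t; r) for a fixed source node s and
    relation r into transition probabilities P(t | s, theta_r) via softmax,
    per Equation (2)."""
    # Subtract the max score for numerical stability before exponentiating.
    m = max(scores.values())
    exp_scores = {t: math.exp(v - m) for t, v in scores.items()}
    z = sum(exp_scores.values())
    return {t: e / z for t, e in exp_scores.items()}

# Hypothetical log-scale scores for three target nodes reachable from s.
probs = edge_probabilities({"t1": -1.0, "t2": -2.0, "t3": -3.0})
```

Because the scores in Table 1 are log-scale, exponentiating them inside the softmax recovers probability-proportional weights.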

[0053]
The exemplary scoring functions in Table 1 fall generally into four categories. The first category includes functions for the similar.* relation (e.g., #1), and is based on the BM25 model. The second category, which includes functions for the generate.* relations (e.g., #5), uses unigram language models with Bayesian smoothing using Dirichlet priors. The third category, including functions for click.* (e.g., #3), uses a click model. The fourth category, including functions for translate.* (e.g., #6), uses translation models, where, if click-through data 402 is available for model training, the word translation probabilities P_tm are estimated on query-document pairs by assuming that a query is parallel to the documents clicked for that query.

[0054]
Again, reference is made to FIG. 3. Given the graph 300, any path type π that starts with the input query node Q (e.g., the node 302) and ends with a word node w (e.g., one of the nodes 308) defines a real-valued feature, which can be viewed as a QE model (or QE feature). The feature value is the probability of picking w as an expansion term, P(w|Q,π), by path-constrained random walks of type π. Table 2 provides examples of path types, which can be used as features in the path-constrained random walk model.

[0000]
TABLE 2

ID    Path type π (comments)
TM1   <translate.Q2w> (w is generated from Q using a click-through-based translation model)
TM2   <generate.Q2w, generate.w2D, generate.D2w> (variant of TM1 where the translation model is trained via 2-step random walks on the word-document graph)
TM3   <generate.Q2w, generate.w2D, generate.D2w, generate.w2D, generate.D2w> (variant of TM2 where 4-step random walks are used)
TM4   <generate.Q2w, generate.w2Q′, generate.Q′2w> (variant of TM2 where random walks are performed on the word-query graph)
TM5   <generate.Q2w, generate.w2Q′, generate.Q′2w, generate.w2Q′, generate.Q′2w> (variant of TM4 where 4-step random walks are used)
SQ1   <similar.Q2Q′, generate.Q′2w> (w is generated from similar queries Q′ of Q, where query similarity is based on BM25)
SQ2   <translate.Q2Q′, generate.Q′2w> (variant of SQ1 where query similarity is based on a click-through-based translation model)
SQ3   <similar.Q2Q′, click.Q′2D, click.D2Q′, generate.Q′2w> (variant of SQ1 where the similar-query set is enriched by 2-step random walks on the query-document graph)
SQ4   <similar.Q2Q′, click.Q′2D, click.D2Q′, click.Q′2D, click.D2Q′, generate.Q′2w> (variant of SQ3 where 4-step random walks are used)
SQ5   <translate.Q2Q′, click.Q′2D, click.D2Q′, generate.Q′2w> (variant of SQ2 where the similar-query set is enriched by 2-step random walks on the query-document graph)
SQ6   <translate.Q2Q′, click.Q′2D, click.D2Q′, click.Q′2D, click.D2Q′, generate.Q′2w> (variant of SQ5 where 4-step random walks are used)
RD1   <similar.Q2Q′, click.Q′2D, generate.D2w> (w is generated from pseudo-relevant documents D clicked for similar queries Q′ of Q)
RD2   <translate.Q2Q′, click.Q′2D, generate.D2w> (variant of RD1 where query similarity is computed via the translation model)
RD3   <similar.Q2Q′, click.Q′2D, translate.D2w> (variant of RD1 where w is generated from D using the translation model)
RD4   <similar.Q2Q′, click.Q′2D, click.D2Q′, click.Q′2D, generate.D2w> (variant of RD1 where the set of D is enriched by 2-step random walks on the query-document graph)
RD5   <similar.Q2Q′, click.Q′2D, click.D2Q′, click.Q′2D, click.D2Q′, click.Q′2D, generate.D2w> (variant of RD4 where 4-step random walks are used)
RD6   <translate.Q2Q′, click.Q′2D, click.D2Q′, click.Q′2D, generate.D2w> (variant of RD2 where the set of D is enriched by 2-step random walks on the query-document graph)
RD7   <translate.Q2Q′, click.Q′2D, click.D2Q′, click.Q′2D, click.D2Q′, click.Q′2D, generate.D2w> (variant of RD6 where 4-step random walks are used)
RD8   <click.Q2D, generate.D2w> (w is generated from pseudo-relevant documents D clicked for query Q)
RD9   <click.Q2D, click.D2Q, click.Q2D, generate.D2w> (variant of RD8 where the set of D is enriched by 2-step random walks on the query-document graph)
RD10  <click.Q2D, click.D2Q, click.Q2D, click.D2Q, click.Q2D, generate.D2w> (variant of RD9 where 4-step random walks are used)

[0055]
Table 2 provides three categories of QE features: (1) TM features, which perform QE using translation models (e.g., the corresponding path types are specified by IDs from TM1 to TM5 in Table 2), (2) SQ features, which perform QE using similar queries (e.g., SQ1 to SQ6), and (3) RD features, which perform QE using (pseudo)relevant documents (e.g., RD1 to RD10).

[0056]
Many log-based QE techniques can use click-through-based translation models where term correlations are precomputed using query-document pairs extracted from click-through data. In contrast to approaches based on thesauri, either compiled manually or derived from document collections, the log-based techniques that use the translation models can explicitly capture the correlation between query terms and document terms. An example of a log-based QE technique that uses a translation model is encoded by the path type TM1, <translate.Q2w>. In case there is not (enough) click-through data for model training, a technique using Markov chains can be employed, where the translation probability between two words is computed by random walks on a document-word graph; such a technique can be encoded by the path types TM2 and TM3 in Table 2.

[0057]
Rare queries oftentimes present a challenge for web search. The expansion of a rare query Q is often performed by adding terms from common queries Q′ which are similar to Q. The path-constrained random walk model achieves this by a random walk that instantiates the path type SQ1, <similar.Q2Q′, generate.Q′2w>. For instance, similar queries can be retrieved by performing random walks on a query-document click graph. Thus, rare query expansion can be enhanced using a larger set of similar queries identified by repeatedly applying random walks following the edges with types click.Q2D and click.D2Q. SQ3 and SQ4 in Table 2 are two examples of such models.

[0058]
A set of relevant documents D of an input query Q that is seen in the search logs can be formed by collecting the documents that have clicks for that query. Thus, the relevance feedback QE method can be represented as, e.g., RD8,

 <click.Q2D,generate.D2w>
[0060]
If the input query is a rare query, the set of pseudo-relevant documents can be formed through similar queries Q′ (e.g., queries that are similar to the input query) that are in the search logs, e.g., RD1,

 <similar.Q2Q′,click.Q′2D,generate.D2w>
To address the data sparseness problem, more pseudo-relevant documents can be retrieved by performing random walks on a query-document click graph, such as RD4 and RD5 in Table 2.

[0062]
FIGS. 5-8 illustrate various exemplary path-constrained random walks between a source node 502 that represents an input query Q (e.g., the node 302 of FIG. 3) and a target node 504 that represents a candidate query expansion term w_1 (e.g., one of the nodes 308 of FIG. 3). FIGS. 5-8 depict respective portions of the labeled and directed graph 300 of FIG. 3. The examples set forth in FIGS. 5-8 show four differing path types. Yet, it is to be appreciated that the claimed subject matter is not limited to the illustrated examples.

[0063]
FIG. 5 depicts a path-constrained random walk 500 that traverses edges of the labeled and directed graph from the source node 502 to the target node 504 in accordance with the path type TM1 from Table 2. The path-constrained random walk 500 is a one-step random walk. More particularly, the path-constrained random walk 500 follows an edge 506 labeled by the relation translate.Q2w from the source node 502 to the target node 504.

[0064]
FIG. 6 depicts a path-constrained random walk 600 that traverses edges of the labeled and directed graph from the source node 502 to the target node 504 in accordance with the path type SQ1 from Table 2. The path-constrained random walk 600 is a two-step random walk. In particular, the path-constrained random walk 600 begins at the source node 502, follows an edge 602 labeled by the relation similar.Q2Q′ from the source node 502 to a node 604 that represents a similar query Q′_A (e.g., one of the nodes 304 of FIG. 3), and then follows an edge 606 labeled by the relation generate.Q′2w from the node 604 that represents the similar query Q′_A to the target node 504.

[0065]
FIG. 7 depicts a path-constrained random walk 700 that traverses edges of the labeled and directed graph from the source node 502 to the target node 504 in accordance with the path type RD1 from Table 2. The path-constrained random walk 700 is a three-step random walk. In particular, the path-constrained random walk 700 begins at the source node 502, follows an edge 702 labeled by the relation similar.Q2Q′ from the source node 502 to a node 704 that represents a similar query Q′_B (e.g., one of the nodes 304 of FIG. 3), then follows an edge 706 labeled by the relation click.Q′2D from the node 704 that represents the similar query Q′_B to a node 708 that represents a document D_B (e.g., one of the nodes 306 of FIG. 3), and then follows an edge 710 labeled by the relation generate.D2w from the node 708 that represents the document D_B to the target node 504.

[0066]
FIG. 8 depicts a path-constrained random walk 800 that traverses edges of the labeled and directed graph from the source node 502 to the target node 504 in accordance with the path type TM4 from Table 2. The path-constrained random walk 800 is a three-step random walk. More particularly, the path-constrained random walk 800 begins at the source node 502, follows an edge 802 labeled by the relation generate.Q2w from the source node 502 to a node 804 that represents a word w_C (e.g., one of the nodes 308 of FIG. 3, representing a word other than the candidate query expansion term w_1), then follows an edge 806 labeled by the relation generate.w2Q′ from the node 804 that represents the word w_C to a node 808 that represents a similar query Q′_C (e.g., one of the nodes 304 of FIG. 3), and then follows an edge 810 labeled by the relation generate.Q′2w from the node 808 that represents the similar query Q′_C to the target node 504.

[0067]
Again, reference is made to FIG. 1. The random walk component 106 can implement the random walks as matrix multiplication. As an example, the task of retrieving similar queries can be executed by the random walk component 106 repeatedly applying random walks following click.Q2D and click.D2Q. Let N be the number of query nodes in G (e.g., the labeled and directed graph 104) and M be the number of document nodes. Let A be the N×M matrix with entries A_{Q,D} = P(D|Q), called the query-document transition matrix, where the probability is calculated from clicks as in #3 in Table 1. Also, let B be the M×N matrix with entries B_{D,Q} = P(Q|D), where the probability is calculated from clicks as in #4 in Table 1. A and B are called transition matrices. Thus, using C = AB, the probability of walking from an initial query Q_0 to any other query Q in 2k steps can be computed. Moreover, the corresponding probability, which is used to measure query-to-query similarity, is given by P(Q|Q_0) = (C^k)_{Q_0,Q}. Because the matrices A and B are sparse, the matrix product C = AB can be computed efficiently. As k increases, C^k becomes dense and the powers cannot be computed efficiently. Moreover, as k increases, the search intent shifts away from the initial query, as the probability spreads out over all queries. Thus, k can be set to 1 or 2, for example.
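The matrix formulation above can be sketched as follows, using a toy click matrix (the click counts are hypothetical; the described system would operate on sparse matrices derived from search logs):

```python
import numpy as np

# Toy click counts: rows are queries, columns are documents (hypothetical data).
clicks = np.array([[4.0, 1.0, 0.0],
                   [0.0, 2.0, 2.0]])

# Query-to-document transition matrix A: A[Q, D] = P(D | Q), as in #3 of Table 1.
A = clicks / clicks.sum(axis=1, keepdims=True)

# Document-to-query transition matrix B: B[D, Q] = P(Q | D), as in #4 of Table 1.
B = (clicks / clicks.sum(axis=0, keepdims=True)).T

# C = AB gives 2-step query-to-query probabilities; C^k covers 2k steps.
C = A @ B
k = 2
C_k = np.linalg.matrix_power(C, k)

# Row Q0 of C_k is the distribution P(Q | Q0) over all queries after 2k steps.
```

In practice A and B would be sparse, so C = AB remains cheap for small k, consistent with the recommendation of k = 1 or 2.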

[0068]
For QE, the path-constrained random walk model of Equation (1) evaluated by the relation evaluation component 112 can be rewritten as follows:

[0000]
$P(w \mid Q) = \sum_{\pi \in B} \lambda_{\pi}\, P(w \mid Q, \pi) \qquad (3)$

[0000]
The foregoing is a weighted linear combination of the path features π in B. Thus, the path-constrained random walk model performs QE by ranking a set of combined paths, one for each pair of Q and w (e.g., a candidate expansion term).
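A minimal sketch of Equation (3), with hypothetical path-type weights λ_π and feature values P(w|Q,π), shows how candidate expansion terms can be ranked by the weighted combination:

```python
def expansion_score(path_features, weights):
    """Equation (3): P(w | Q) = sum over path types pi of lambda_pi * P(w | Q, pi)."""
    return sum(weights[pi] * p for pi, p in path_features.items())

# Hypothetical learned weights for three of the path types in Table 2.
weights = {"TM1": 0.5, "SQ1": 0.3, "RD8": 0.2}

# Hypothetical feature values P(w | Q, pi) for two candidate expansion terms.
candidates = {
    "bakery": {"TM1": 0.20, "SQ1": 0.10, "RD8": 0.30},
    "book":   {"TM1": 0.05, "SQ1": 0.02, "RD8": 0.01},
}

# Rank candidates by their combined score.
ranked = sorted(candidates,
                key=lambda w: expansion_score(candidates[w], weights),
                reverse=True)
```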

[0069]
The following generally describes construction of B in Equation (3). Given the labeled and directed graph 300, the total number of path types in B can grow exponentially with an increase of path length. Accordingly, a maximum path length can be set to substantially any integer (e.g., the maximum length can be set to 7 or substantially any other integer). Moreover, a predefined set of relations that are selective, such as shown in Table 1, can be utilized. Given a path type π, due to the number of nodes in G, even with a length limit, the total number of paths that instantiate π can be significant. For example, since a word can translate to any other word based on a smoothed translation model, any node pair (Q′, Q) can have a nonzero-score relation translate.Q2Q′ (#2 in Table 1), thus making the transition matrix dense. For efficiency, multiplication of transition matrices can be kept sparse by retaining a subset of (partial) paths (e.g., the top-1000 (partial) paths) after each step of a random walk.
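The pruning step can be sketched as follows; the choice of k (top-1000 above) and the renormalization are assumptions of this illustration:

```python
import numpy as np

def keep_top_k(row_probs, k):
    """Retain only the k highest-probability entries of a transition row and
    renormalize, keeping matrix products sparse between random-walk steps."""
    if np.count_nonzero(row_probs) <= k:
        return row_probs
    idx = np.argsort(row_probs)[::-1][:k]   # indices of the k largest entries
    pruned = np.zeros_like(row_probs)
    pruned[idx] = row_probs[idx]
    return pruned / pruned.sum()            # renormalize to a distribution

row = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
pruned = keep_top_k(row, k=2)
```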

[0070]
Further, the parameters λ_π (e.g., weights assigned to the differing path types 110) can be estimated by generating training data and performing parameter estimation using the training data. Training data used for the estimation of the parameters λ_π in Equation (3) is denoted as D = {(x_i, y_i)}, where x_i is a vector of the path features for the pair (Q_i, w_i). That is, the jth component of x_i is P(w_i|Q_i, π_j), and y_i is a Boolean variable indicating whether w_i is an appropriate expansion term for Q_i.

[0071]
Assume a relevance judgment set is developed, for example. The set can include a set of queries. Each query is associated with a set of documents. Each query-document pair has a relevance label. The effectiveness of a document ranking model Score(D,Q) can be evaluated on the set. Whether a word w is an appropriate expansion for a query Q can be determined by examining whether expanding Q with w leads to an enhanced document ranking result. For instance, the following ranking model can be utilized:

[0000]
$\mathrm{Score}(D, Q) = \alpha \log P(w \mid \theta_D) + \sum_{q \in Q} P(q \mid \theta_Q) \log P(q \mid \theta_D) \qquad (4)$

[0000]
As set forth in Equation (4), w is the expansion term under consideration, α is its weight, q is a term in the original query Q, and θ_Q and θ_D are the query and document models, respectively. The query model P(q|θ_Q) is estimated via MLE (maximum likelihood estimation) without smoothing as:

[0000]
$P(q \mid \theta_Q) = \frac{\mathrm{tf}(q; Q)}{|Q|} \qquad (5)$

[0000]
In the foregoing, tf(q;Q) is the number of times q occurs in Q, and |Q| is the query length. The document model, e.g., P(q|θ_D), can be estimated via MLE with Dirichlet smoothing as:

[0000]
$P(q \mid \theta_D) = \frac{\mathrm{tf}(q; D) + \mu\, P(q \mid C)}{|D| + \mu} \qquad (6)$

[0000]
Accordingly, tf(q;D) is the number of times q occurs in D, |D| is the document length, μ is the Dirichlet prior (e.g., set to 2000), and P(q|C) is the probability of q on the collection C, which can be estimated via MLE without smoothing.
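Equations (4)-(6) can be sketched as a single scoring routine; the token lists and collection statistics below are hypothetical:

```python
import math
from collections import Counter

def score(doc_tokens, query_tokens, w, collection_probs, alpha=0.01, mu=2000):
    """Equation (4): Score(D, Q) = alpha * log P(w | theta_D)
    + sum over q in Q of P(q | theta_Q) * log P(q | theta_D).
    collection_probs maps a term to P(term | C); tokens are word lists."""
    d_tf = Counter(doc_tokens)
    q_tf = Counter(query_tokens)

    def p_doc(term):
        # Equation (6): Dirichlet-smoothed document model.
        return (d_tf[term] + mu * collection_probs[term]) / (len(doc_tokens) + mu)

    def p_query(term):
        # Equation (5): unsmoothed MLE query model.
        return q_tf[term] / len(query_tokens)

    # Sum over distinct query terms, each weighted by P(q | theta_Q).
    return alpha * math.log(p_doc(w)) + sum(
        p_query(q) * math.log(p_doc(q)) for q in q_tf)

# Hypothetical toy collection probabilities P(term | C).
coll = {"acme": 0.01, "baked": 0.02, "bread": 0.02, "bakery": 0.01}
s = score(["acme", "bakery", "bread"], ["acme", "baked", "bread"], "bakery", coll)
```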

[0072]
Equation (4) can be viewed as a simplified form of QE with a single term. It is used to label whether w is an appropriate expansion term for Q. To simplify the training data generation process, it can be assumed that w acts on the query independently from other expansion terms, and each expansion term is added into Q with equal weight, e.g., α=0.01 or α=−0.01.

[0073]
The training data can be generated as follows. For each query Q in the relevance judgment set, a set of candidate expansion terms {w_i} can be formed by collecting terms that occur in the documents that are paired with Q but do not occur in Q. Then, w_i can be labeled as an appropriate expansion term for Q if it enhances the effectiveness of document ranking when α = 0.01 and detrimentally impacts the effectiveness when α = −0.01. Conversely, w_i can be negatively labeled if it produces the opposite effect, or if it produces a similar effect whether α = 0.01 or α = −0.01.
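The labeling rule can be sketched as follows, where delta_pos and delta_neg denote the (hypothetical) changes in ranking effectiveness measured with α = 0.01 and α = −0.01, respectively; the epsilon threshold for a "similar effect" is an assumption of this sketch:

```python
def label_expansion_term(delta_pos, delta_neg, eps=1e-6):
    """Label a candidate expansion term from the change in ranking
    effectiveness when it is added with alpha = +0.01 (delta_pos) and
    alpha = -0.01 (delta_neg). Positive label: the term helps when added
    with positive weight and hurts when added with negative weight."""
    if delta_pos > eps and delta_neg < -eps:
        return 1   # appropriate expansion term
    return 0       # opposite or negligible effect

y = label_expansion_term(0.02, -0.015)
```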

[0074]
Moreover, the parameters λ_π can be estimated from the training data as follows. Given training data D, the model parameters λ = (λ_π)_{π∈B} can be optimized by maximizing the following objective:

[0000]
$\mathcal{F}(\lambda) = \sum_{(x,y) \in D} f(x, y; \lambda) - \alpha_1 \|\lambda\|_1 - \alpha_2 \|\lambda\|_2^2 \qquad (7)$

[0000]
In the above, α_1 and α_2 respectively control the strength of the L_1 regularization (which helps with structure selection) and the L_2 regularization (which helps mitigate overfitting). f(x,y;λ) is the log-likelihood of the training sample (x,y), and is defined as:

[0000]
$f(x, y; \lambda) = y \log P(x, \lambda) + (1 - y) \log(1 - P(x, \lambda)) \qquad (8)$

Moreover,

$P(x, \lambda) \equiv P(y = 1 \mid x, \lambda) = \frac{\exp(\lambda^T x)}{1 + \exp(\lambda^T x)} \qquad (9)$

[0000]
is the model-predicted probability. The maximization, for example, can be performed using the OWL-QN (Orthant-Wise Limited-memory Quasi-Newton) algorithm, which is a version of L-BFGS (the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm) designed to handle the non-differentiable L_1 norm.
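Equations (7)-(9) can be sketched as follows; this illustration only evaluates the regularized objective (in practice, OWL-QN would be used to maximize it), and the training pairs are hypothetical:

```python
import math

def log_likelihood(x, y, lam):
    """Equations (8)-(9): per-example log-likelihood of the logistic model."""
    z = sum(l * xi for l, xi in zip(lam, x))
    p = 1.0 / (1.0 + math.exp(-z))          # P(y = 1 | x, lambda)
    return y * math.log(p) + (1 - y) * math.log(1 - p)

def objective(data, lam, a1=0.01, a2=0.01):
    """Equation (7): F(lambda) = sum of log-likelihoods minus L1/L2 penalties."""
    ll = sum(log_likelihood(x, y, lam) for x, y in data)
    l1 = sum(abs(l) for l in lam)
    l2 = sum(l * l for l in lam)
    return ll - a1 * l1 - a2 * l2

# Hypothetical training pairs: (path-feature vector x_i, label y_i).
data = [([0.2, 0.1], 1), ([0.01, 0.0], 0)]
f = objective(data, [1.0, 0.5])
```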

[0075]
The path-constrained random walk-based model of Equation (3) can assign each path type a weight. Such a parameterization is called one-weight-per-path-type. An alternative way of parameterizing the model is one-weight-per-edge-label. For instance, the objective function and optimization procedure noted above can similarly be used for parameter estimation of a one-weight-per-edge-label model. Because the model can be seen as the combination of the path-constrained random walks with each path having its weight set to the product of the edge weights along the path, the gradient of the edge weights can be calculated by first calculating the gradient with respect to the paths, and then applying the chain rule of derivatives.

[0076]
In general, the techniques provided herein use search logs for QE for web search ranking. A QE technique based on path-constrained random walks is described, where the search logs are represented as a labeled and directed graph, and the probability of selecting an expansion term for an input query is computed by a learned combination of constrained random walks on the graph. Such a path-constrained random walk-based approach for QE is generic and flexible: various QE models can be incorporated as features, and additional (e.g., later developed) features can also be incorporated by defining path types with a rich set of walk behaviors. The path-constrained random walk model also provides a principled mathematical framework in which different QE models (e.g., defined as path types or features) can be incorporated in a unified way, thus mitigating susceptibility to sparseness of click-through data and to ambiguous search intent of user queries.

[0077]
Moreover, as noted herein, while many of the aforementioned examples pertain to utilization of the path-constrained random walks for query expansion, it is contemplated that the path-constrained random walk-based technique set forth herein can alternatively be utilized for query-document matching (e.g., used for web document ranking directly). For example, a relevance score of a query Q and a document D can be modeled as a probability, computed by a learned combination of path-constrained random walks from Q to D, where different document ranking models can be incorporated as path types. Following this example, in addition to click-through data, other data sources can be incorporated to construct G, such as link graphs and the category structure of web documents.

[0078]
FIGS. 9 and 10 illustrate exemplary methodologies relating to the use of path-constrained random walks. While the methodologies are shown and described as being a series of acts that are performed in a sequence, it is to be understood and appreciated that the methodologies are not limited by the order of the sequence. For example, some acts can occur in a different order than what is described herein. In addition, an act can occur concurrently with another act. Further, in some instances, not all acts may be required to implement a methodology described herein.

[0079]
Moreover, the acts described herein may be computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media. The computer-executable instructions can include a routine, a subroutine, a program, a thread of execution, and/or the like. Still further, results of acts of the methodologies can be stored in a computer-readable medium, displayed on a display device, and/or the like.

[0080]
FIG. 9 illustrates a methodology 900 for using path-constrained random walks. At 902, an input query can be received. At 904, path-constrained random walks can be executed over a computer-implemented labeled and directed graph based upon the input query. At 906, a score for a relationship between a target node and a source node representative of the input query can be computed based at least in part upon the path-constrained random walks.

[0081]
Now turning to FIG. 10, illustrated is a methodology 1000 for performing query expansion or query-document matching using path-constrained random walks. At 1002, path-constrained random walks can be executed over a computer-implemented labeled and directed graph based upon an input query. At 1004, respective values for the path-constrained random walks that traverse edges of the graph between nodes in accordance with differing predefined path types can be determined. At 1006, the respective values for the path-constrained random walks that traverse the edges of the graph between the nodes in accordance with the differing predefined path types can be combined to compute a score for a relationship between a target node and a source node representative of the input query.

[0082]
Referring now to FIG. 11, a high-level illustration of an exemplary computing device 1100 that can be used in accordance with the systems and methodologies disclosed herein is illustrated. For instance, the computing device 1100 may be used in a system that executes path-constrained random walks for query expansion and/or query-document matching. By way of another example, the computing device 1100 may be used in a system that constructs a labeled and directed graph based upon click-through data from search logs. The computing device 1100 includes at least one processor 1102 that executes instructions that are stored in a memory 1104. The instructions may be, for instance, instructions for implementing functionality described as being carried out by one or more components discussed above or instructions for implementing one or more of the methods described above. The processor 1102 may access the memory 1104 by way of a system bus 1106. In addition to storing executable instructions, the memory 1104 may also store a labeled and directed graph, scores for relationships, ranked lists, click-through data, and so forth.

[0083]
The computing device 1100 additionally includes a data store 1108 that is accessible by the processor 1102 by way of the system bus 1106. The data store 1108 may include executable instructions, a labeled and directed graph, scores for relationships, ranked lists, click-through data, etc. The computing device 1100 also includes an input interface 1110 that allows external devices to communicate with the computing device 1100. For instance, the input interface 1110 may be used to receive instructions from an external computer device, from a user, etc. The computing device 1100 also includes an output interface 1112 that interfaces the computing device 1100 with one or more external devices. For example, the computing device 1100 may display text, images, etc. by way of the output interface 1112.

[0084]
It is contemplated that the external devices that communicate with the computing device 1100 via the input interface 1110 and the output interface 1112 can be included in an environment that provides substantially any type of user interface with which a user can interact. Examples of user interface types include graphical user interfaces, natural user interfaces, and so forth. For instance, a graphical user interface may accept input from a user employing input device(s) such as a keyboard, mouse, remote control, or the like and provide output on an output device such as a display. Further, a natural user interface may enable a user to interact with the computing device 1100 in a manner free from constraints imposed by input devices such as keyboards, mice, remote controls, and the like. Rather, a natural user interface can rely on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, machine intelligence, and so forth.

[0085]
Additionally, while illustrated as a single system, it is to be understood that the computing device 1100 may be a distributed system. Thus, for instance, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device 1100.

[0086]
As used herein, the terms “component” and “system” are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor. The computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices.

[0087]
Further, as used herein, the term “exemplary” is intended to mean “serving as an illustration or example of something.”

[0088]
Various functions described herein can be implemented in hardware, software, or any combination thereof. If implemented in software, the functions can be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer-readable storage media. A computer-readable storage media can be any available storage media that can be accessed by a computer. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD), where disks usually reproduce data magnetically and discs usually reproduce data optically with lasers. Further, a propagated signal is not included within the scope of computer-readable storage media. Computer-readable media also includes communication media including any medium that facilitates transfer of a computer program from one place to another. A connection, for instance, can be a communication medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of communication medium. Combinations of the above should also be included within the scope of computer-readable media.

[0089]
Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.

[0090]
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable modification and alteration of the above devices or methodologies for purposes of describing the aforementioned aspects, but one of ordinary skill in the art can recognize that many further modifications and permutations of various aspects are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.