EP1924926A2 - Methods and systems for transductive data classification and data classification methods using machine learning techniques - Google Patents

Methods and systems for transductive data classification and data classification methods using machine learning techniques

Info

Publication number
EP1924926A2
EP1924926A2 (application EP07809394A)
Authority
EP
European Patent Office
Prior art keywords
documents
label
unlabeled
classifier
data points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP07809394A
Other languages
German (de)
French (fr)
Other versions
EP1924926A4 (en)
Inventor
Mauritius A.R. Schmidtler
Christopher K. Harris
Roland Borrey
Anthony Sarah
Nicola Caruso
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tungsten Automation Corp
Original Assignee
Kofax Image Products Inc
Kofax Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/752,634 external-priority patent/US7761391B2/en
Priority claimed from US11/752,691 external-priority patent/US20080086432A1/en
Priority claimed from US11/752,673 external-priority patent/US7958067B2/en
Priority claimed from US11/752,719 external-priority patent/US7937345B2/en
Application filed by Kofax Image Products Inc, Kofax Inc filed Critical Kofax Image Products Inc
Publication of EP1924926A2 publication Critical patent/EP1924926A2/en
Publication of EP1924926A4 publication Critical patent/EP1924926A4/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the present invention relates generally to methods and apparatus for data classification. More particularly, the present invention provides improved transductive machine learning methods. The present invention also relates to novel applications using machine learning techniques.
  • a method for classification of data includes receiving labeled data points, each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
  • a method for classification of data according to another embodiment of the present invention includes providing computer executable program code to be deployed to and executed on a computer system.
  • the program code comprises instructions for: accessing stored labeled data points in a memory of a computer, each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; accessing unlabeled data points from a memory of a computer; accessing at least one predetermined cost factor of the labeled data points and unlabeled data points from a memory of a computer; training a Maximum Entropy Discrimination (MED) transductive classifier through iterative calculation using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples, wherein for each iteration of the calculation the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point prior probability is adjusted according to an estimate of a data point class membership probability.
  • a data processing apparatus includes: at least one memory for storing: (i) labeled data points wherein each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; (ii) unlabeled data points; and (iii) at least one predetermined cost factor of the labeled data points and unlabeled data points; and a transductive classifier trainer to iteratively teach the transductive classifier using transductive Maximum Entropy Discrimination (MED) using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples wherein at each iteration of the MED calculation the cost factor of the unlabeled data point is adjusted as a function of an expected label value and a data point label prior probability is adjusted according to an estimate of a data point class membership probability; wherein a classifier trained by the transductive classifier trainer is used to classify
  • An article of manufacture comprises a program storage medium readable by a computer, the medium tangibly embodying one or more programs of instructions executable by a computer to perform a method of data classification comprising: receiving labeled data points, each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; receiving unlabeled data points; receiving at least one predetermined cost factor of the labeled data points and unlabeled data points; training a transductive classifier with iterative Maximum Entropy Discrimination (MED) calculation using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples, wherein at each iteration of the MED calculation the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point prior probability is adjusted according to an estimate of a data point class membership probability; and applying the trained classifier to classify at least one of the unlabeled data points.
  • a method for classification of unlabeled data includes receiving labeled data points, each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; receiving labeled and unlabeled data points; receiving prior label probability information of labeled data points and unlabeled data points; receiving at least one predetermined cost factor of the labeled data points and unlabeled data points; determining the expected labels for each labeled and unlabeled data point according to the label prior probability of the data point; and repeating the following substeps until substantial convergence of data values:
  • a classification of the input data points, or derivative thereof, is output to at least one of a user, another system, and another process.
  • a method for classifying documents includes receiving at least one labeled seed document having a known confidence level of label assignment; receiving unlabeled documents; receiving at least one predetermined cost factor; training a transductive classifier through iterative calculation using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value; after at least some of the iterations, storing confidence scores for the unlabeled documents; and outputting identifiers of the unlabeled documents having the highest confidence scores to at least one of a user, another system, and another process.
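The iterative scheme in this claim, adjusting the unlabeled cost factor as a function of an expected label value and then storing confidence scores, can be sketched as follows. This is a toy illustration under assumed update rules (tanh expected labels and a self-training-style score update), not the patented MED procedure; all names and numbers are hypothetical.

```python
import math

def expected_label(score):
    # Map a raw classification score to an expected label in (-1, 1).
    return math.tanh(score)

def train_transductive(unlabeled_scores, base_cost, iterations=3):
    # Each iteration: rescale the cost factor of every unlabeled point as a
    # function of its current expected label, then nudge the score toward
    # that expected label (a stand-in for retraining the classifier).
    scores = dict(unlabeled_scores)
    for _ in range(iterations):
        costs = {doc: base_cost * abs(expected_label(s))
                 for doc, s in scores.items()}
        # (In a full implementation, `costs` would feed the retraining step.)
        scores = {doc: 0.5 * (s + expected_label(s))
                  for doc, s in scores.items()}
    # Confidence scores stored after the iterations, as in the claim.
    return {doc: abs(expected_label(s)) for doc, s in scores.items()}

docs = {"doc_a": 2.0, "doc_b": 0.1, "doc_c": -1.5}
conf = train_transductive(docs, base_cost=1.0)
best = max(conf, key=conf.get)  # identifier with the highest confidence
```

The identifier with the highest stored confidence (`doc_a` here, since it starts with the strongest score) is what the claim outputs to a user, another system, or another process.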
  • a method for analyzing documents associated with legal discovery includes receiving documents associated with a legal matter; performing a document classification technique on the documents; and outputting identifiers of at least some of the documents based on the classification thereof.
  • a method for cleaning up data includes receiving a plurality of labeled data items; selecting subsets of the data items for each of a plurality of categories; setting an uncertainty for the data items in each subset to about zero; setting an uncertainty for the data items not in the subsets to a predefined value that is not about zero; training a transductive classifier through iterative calculation using the uncertainties, the data items in the subsets, and the data items not in the subsets as training examples; applying the trained classifier to each of the labeled data items to classify each of the data items; and outputting a classification of the input data items, or derivative thereof, to at least one of a user, another system, and another process.
  • a method for verifying an association of an invoice with an entity includes training a classifier based on an invoice format associated with a first entity; accessing a plurality of invoices labeled as being associated with at least one of the first entity and other entities; performing a document classification technique on the invoices using the classifier; and outputting an identifier of at least one of the invoices having a high probability of not being associated with the first entity.
  • a method for managing medical records includes training a classifier based on a medical diagnosis; accessing a plurality of medical records; performing a document classification technique on the medical records using the classifier; and outputting an identifier of at least one of the medical records having a low probability of being associated with the medical diagnosis.
  • a method for face recognition includes receiving at least one labeled seed image of a face, the seed image having a known confidence level; receiving unlabeled images; receiving at least one predetermined cost factor; training a transductive classifier through iterative calculation using the at least one predetermined cost factor, the at least one seed image, and the unlabeled images, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value; after at least some of the iterations, storing confidence scores for the unlabeled images; and outputting identifiers of the unlabeled images having the highest confidence scores to at least one of a user, another system, and another process.
  • a method for analyzing prior art documents includes training a classifier based on a search query; accessing a plurality of prior art documents; performing a document classification technique on at least some of the prior art documents using the classifier; and outputting identifiers of at least some of the prior art documents based on the classification thereof.
  • a method for adapting a patent classification to a shift in document content includes receiving at least one labeled seed document; receiving unlabeled documents; training a transductive classifier using the at least one seed document and the unlabeled documents; classifying the unlabeled documents having a confidence level above a predefined threshold into a plurality of existing categories using the classifier; classifying the unlabeled documents having a confidence level below the predefined threshold into at least one new category using the classifier; reclassifying at least some of the categorized documents into the existing categories and the at least one new category using the classifier; and outputting identifiers of the categorized documents to at least one of a user, another system, and another process.
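The threshold-based routing described above, where confident documents go to existing categories and low-confidence ones seed a new category, might be sketched as follows. The category codes, confidence numbers, and function names are invented for illustration.

```python
def route_by_confidence(doc_confidences, threshold, existing_categories):
    # Documents whose best-category confidence clears the threshold are
    # assigned to that existing category; the rest seed a new category.
    routed, new_category = {}, []
    for doc, scores in doc_confidences.items():
        cat, conf = max(scores.items(), key=lambda kv: kv[1])
        if conf >= threshold and cat in existing_categories:
            routed[doc] = cat
        else:
            new_category.append(doc)
    return routed, new_category

docs = {
    "p1": {"G06N": 0.9, "G06F": 0.1},   # confidently an existing category
    "p2": {"G06N": 0.4, "G06F": 0.35},  # below threshold: drifted content
}
routed, fresh = route_by_confidence(
    docs, threshold=0.6, existing_categories={"G06N", "G06F"})
```

In the claimed method the classifier would then be retrained and the documents reclassified across the existing categories plus the new one; this sketch covers only the single routing pass.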
  • a method for matching documents to claims includes training a classifier based on at least one claim of a patent or patent application; accessing a plurality of documents; performing a document classification technique on at least some of the documents using the classifier; and outputting identifiers of at least some of the documents based on the classification thereof.
  • a method for classifying a patent or patent application includes training a classifier based on a plurality of documents known to be in a particular patent classification; receiving at least a portion of a patent or patent application; performing a document classification technique on the at least the portion of the patent or patent application using the classifier; and outputting a classification of the patent or patent application, wherein the document classification technique is a yes/no classification technique.
  • a method for classifying a patent or patent application includes performing a document classification technique on at least the portion of a patent or patent application using a classifier that was trained based on at least one document associated with a particular patent classification, wherein the document classification technique is a yes/no classification technique; and outputting a classification of the patent or patent application.
  • a method for adapting to a shift in document content includes receiving at least one labeled seed document; receiving unlabeled documents; receiving at least one predetermined cost factor; training a transductive classifier using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents; classifying the unlabeled documents having a confidence level above a predefined threshold into a plurality of categories using the classifier; reclassifying at least some of the categorized documents into the categories using the classifier; and outputting identifiers of the categorized documents to at least one of a user, another system, and another process.
  • a method for separating documents includes receiving labeled data; receiving a sequence of unlabeled documents; adapting probabilistic classification rules using transduction based on the labeled data and the unlabeled documents; updating weights used for document separation according to the probabilistic classification rules; determining locations of separations in the sequence of documents; outputting indicators of the determined locations of the separations in the sequence to at least one of a user, another system, and another process; and flagging the documents with codes, the codes correlating to the indicators.
  • a method for document searching includes receiving a search query; retrieving documents based on the search query; outputting the documents; receiving user-entered labels for at least some of the documents, the labels being indicative of a relevance of the document to the search query; training a classifier based on the search query and the user-entered labels; performing a document classification technique on the documents using the classifier for reclassifying the documents; and outputting identifiers of at least some of the documents based on the classification thereof.
  • Fig. 1 is a depiction of a chart plotting the expected label as a function of the classification score as obtained by employing MED discriminative learning applied to label induction.
  • Fig. 2 is a depiction of a series of plots showing calculated iterations of the decision function obtained by transductive MED learning.
  • Fig. 3 is a depiction of a series of plots showing calculated iterations of the decision function obtained by the improved transductive MED learning of one embodiment of the present invention.
  • Fig. 4 illustrates a control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using a scaled cost factor.
  • Fig. 5 illustrates a control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using user defined prior probability information.
  • Fig. 6 illustrates a detailed control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using Maximum Entropy Discrimination with scaled cost factors and prior probability information.
  • Fig. 7 is a network diagram illustrating a network architecture in which the various embodiments described herein may be implemented.
  • Fig. 8 is a system diagram of a representative hardware environment associated with a user device.
  • Fig. 9 illustrates a block diagram representation of the apparatus of one embodiment of the present invention.
  • Fig. 10 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 11 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 12 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 13 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 14 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 15 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 16 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 17 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 18 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 19 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 20 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 21 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 22 illustrates a control flow diagram showing the method of one embodiment of the present invention applied to a first document separating system.
  • Fig. 23 illustrates a control flow diagram showing the method of one embodiment of the present invention applied to a second document separating system.
  • Fig. 24 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 25 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 26 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 27 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 28 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • Fig. 29 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
  • The interest in and need for classification of textual data has been particularly strong, and several methods of classification have been employed. A discussion of classification methods for textual data follows:
  • computers are called upon to classify (or recognize) objects to an ever increasing extent.
  • computers may use optical character recognition to classify handwritten or scanned numbers and letters, pattern recognition to classify an image, such as a face, a fingerprint, a fighter plane, etc., or speech recognition to classify a sound, a voice, etc.
  • Text classification may be used to organize textual information objects into a hierarchy of predetermined classes or categories for example. In this way, finding (or navigating to) textual information objects related to a particular subject matter is simplified. Text classification may be used to route appropriate textual information objects to appropriate people or locations. In this way, an information service can route textual information objects covering diverse subject matters (e.g., business, sports, the stock market, football, a particular company, a particular football team) to people having diverse interests.
  • Text classification may be used to filter textual information objects so that a person is not annoyed by unwanted textual content (such as unwanted and unsolicited e-mail, also referred to as junk e-mail, or "spam").
  • A rule-based system may be used to effect such types of classification. Basically, rule-based systems use production rules of the form: IF condition(s), THEN fact(s).
  • the conditions may include whether the textual information includes certain words or phrases, has a certain syntax, or has certain attributes. For example, if the textual content has the word “close”, the phrase “nasdaq” and a number, then it is classified as "stock market” text.
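A minimal sketch of such a production rule, using the "stock market" example above; the keyword checks are deliberately simplistic, as rule-based classifiers of this kind typically are.

```python
def classify_rule_based(text):
    # IF the text contains "close", "nasdaq", and a number,
    # THEN classify it as "stock market" text.
    words = text.lower()
    if "close" in words and "nasdaq" in words and any(c.isdigit() for c in words):
        return "stock market"
    return "unclassified"

label = classify_rule_based("Nasdaq close: 14,000")
```

The static, predefined logic shown here is exactly what the learning-based classifiers discussed next avoid.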
  • Over the last decade or so, other types of classifiers have been used increasingly. Although these classifiers do not use static, predefined logic, as do rule-based classifiers, they have outperformed rule-based classifiers in many applications. Such classifiers typically include a learning element and a performance element. Such classifiers may include neural networks, Bayesian networks, and support vector machines. Although each of these classifiers is known, each is briefly introduced below for the reader's convenience.
  • a neural network is basically a multilayered, hierarchical arrangement of identical processing elements, also referred to as neurons.
  • Each neuron can have one or more inputs but only one output.
  • Each neuron input is weighted by a coefficient.
  • the output of a neuron is typically a function of the sum of its weighted inputs and a bias value.
  • This function, also referred to as an activation function, is typically a sigmoid function. That is, the activation function may be S-shaped, monotonically increasing, and asymptotically approaching fixed values (e.g., +1, 0, -1) as its input(s) approach positive or negative infinity.
  • the sigmoid function and the individual neural weight and bias values determine the response or "excitability" of the neuron to input signals.
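The neuron described above, a sigmoid of the weighted sum of the inputs plus a bias, can be written directly; the weights and inputs below are invented for illustration.

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a sigmoid
    # activation, yielding an output in (0, 1).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron_output([1.0, 0.5], [0.4, -0.2], bias=0.1)
```

The weight and bias values determine the "excitability" of the neuron: larger weights make the sigmoid saturate toward 1 or 0 for smaller input changes.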
  • the output of a neuron in one layer may be distributed as an input to one or more neurons in a next layer.
  • a typical neural network may include three (3) distinct layers; namely, an input layer, an intermediate neuron layer, and an output neuron layer. Note that the nodes of the input layer are not neurons. Rather, the nodes of the input layer have only one input and basically provide the input, unprocessed, to the inputs of the next layer.
  • the input layer could have 300 nodes (i.e., one for each pixel of the input) and the output layer could have 10 neurons (i.e., one for each of the ten digits).
  • the use of neural networks generally involves two (2) successive steps. First, the neural network is initialized and trained on known inputs having known output values (or classifications). Once the neural network is trained, it can then be used to classify unknown inputs.
  • the neural network may be initialized by setting the weights and biases of the neurons to random values, typically generated from a Gaussian distribution.
  • the neural network is then trained using a succession of inputs having known outputs (or classes).
  • the values of the neural weights and biases are adjusted (e.g., in accordance with the known back-propagation technique) such that the output of the neural network of each individual training pattern approaches or matches the known output.
  • a gradient descent in weight space is used to minimize the output error. In this way, learning using successive training inputs converges towards a locally optimal solution for the weights and biases. That is, the weights and biases are adjusted to minimize an error.
  • the system is not typically trained to the point where it converges to an optimal solution. Otherwise, the system would be "over trained” such that it would be too specialized to the training data and might not be good at classifying inputs which differ, in some way, from those in the training set. Thus, at various times during its training, the system is tested on a set of validation data. Training is halted when the system's performance on the validation set no longer improves.
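The validation-based halting rule just described can be sketched as a generic training loop. Here `train_step` and `validate` are hypothetical callbacks, and the `patience` parameter is an assumption; the text only says that training halts when performance on the validation set no longer improves.

```python
def train_with_early_stopping(train_step, validate, max_epochs=100, patience=3):
    # Keep training while the validation score improves; halt after
    # `patience` consecutive epochs without improvement (early stopping).
    best_score, best_epoch, stale = float("-inf"), 0, 0
    for epoch in range(max_epochs):
        train_step()        # one pass of weight/bias adjustment
        score = validate()  # performance on the held-out validation set
        if score > best_score:
            best_score, best_epoch, stale = score, epoch, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_epoch, best_score

# Simulated validation curve: improves for three epochs, then degrades
# as the network starts to overtrain on the training data.
curve = iter([0.6, 0.7, 0.75, 0.74, 0.73, 0.72, 0.71])
epoch, score = train_with_early_stopping(lambda: None, lambda: next(curve))
```

Stopping at the best validation epoch (epoch 2 in this simulated curve) is what prevents the network from becoming too specialized to the training data.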
  • the neural network can be used to classify unknown inputs in accordance with the weights and biases determined during training. If the neural network can classify the unknown input with confidence, one of the outputs of the neurons in the output layer will be much higher than the others.
  • Bayesian networks use hypotheses as intermediaries between data (e.g., input feature vectors) and predictions (e.g., classifications).
  • the probability of each hypothesis, given the data ("P(hypothesis | data)"), is determined.
  • a prediction is made from the hypotheses using posterior probabilities of the hypotheses to weight the individual predictions of each of the hypotheses.
  • H_i is the i-th hypothesis.
  • a most probable hypothesis H_i that maximizes the probability of H_i given D is referred to as a maximum a posteriori hypothesis (or "H_MAP") and may be expressed as follows: H_MAP = argmax_i P(D | H_i) P(H_i) / P(D).
  • the first term of the numerator represents the probability that the data would have been observed given the hypothesis i.
  • the second term represents the prior probability assigned to the given hypothesis i.
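Since the denominator P(D) is the same for every hypothesis, H_MAP can be found by maximizing the product of the two terms just described. A toy example with invented numbers:

```python
def map_hypothesis(likelihoods, priors):
    # argmax over hypotheses of P(D | H_i) * P(H_i); the shared
    # denominator P(D) cannot change which hypothesis wins.
    return max(likelihoods, key=lambda h: likelihoods[h] * priors[h])

likelihoods = {"H1": 0.8, "H2": 0.5}  # P(D | H_i), invented numbers
priors = {"H1": 0.3, "H2": 0.7}       # P(H_i), invented numbers
h_map = map_hypothesis(likelihoods, priors)
```

Here H2 wins (0.5 × 0.7 = 0.35 beats 0.8 × 0.3 = 0.24) even though H1 explains the data better, because H2's prior probability is higher.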
  • a Bayesian network includes variables and directed edges between the variables, thereby defining a directed acyclic graph (or "DAG"). Each variable can assume any of a finite number of mutually exclusive states. For each variable A having parent variables B_1, ..., B_n, there is an attached probability table P(A | B_1, ..., B_n).
  • a variable “MML” may represent a "moisture of my lawn” and may have states “wet” and “dry”.
  • the MML variable may have "rain” and “my sprinkler on” parent variables each having "Yes” and “No” states.
  • Another variable, “MNL” may represent a "moisture of my neighbor's lawn” and may have states “wet” and “dry”.
  • the MNL variable may share the "rain” parent variable. In this example, a prediction may be whether my lawn is "wet” or "dry”.
  • This prediction may depend on the hypotheses (i) if it rains, my lawn will be wet with probability (x_1) and (ii) if my sprinkler was on, my lawn will be wet with probability (x_2).
  • the probability that it has rained or that my sprinkler was on may depend on other variables. For example, if my neighbor's lawn is wet and they don't have a sprinkler, it is more likely that it has rained.
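For the lawn example, marginalizing over the "rain" and "my sprinkler on" parent variables gives the probability that my lawn is wet. The conditional probability table entries below are invented for illustration.

```python
# Hypothetical prior probabilities for the parent variables.
p_rain, p_sprinkler = 0.2, 0.3

# Hypothetical CPT: P(MML = wet | rain, sprinkler).
p_wet_given = {
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.85, (False, False): 0.05,
}

# Marginalize over both parents: P(wet) = sum over (r, s) of
# P(wet | r, s) * P(r) * P(s), assuming the parents are independent.
p_wet = sum(
    p_wet_given[(r, s)]
    * (p_rain if r else 1 - p_rain)
    * (p_sprinkler if s else 1 - p_sprinkler)
    for r in (True, False) for s in (True, False)
)
```

With these numbers the lawn is wet with probability 0.4174; observing the neighbor's lawn would revise P(rain) upward and hence this marginal, which is the kind of inference the network supports.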
  • conditional probability tables in Bayesian networks may be trained, as was the case with neural networks.
  • the learning process may be shortened.
  • prior probabilities for the conditional probabilities are usually unknown, in which case a uniform prior is used.
  • One embodiment of the present invention may perform at least one (1) of two (2) basic functions, namely generating parameters for a classifier, and classifying objects, such as textual information objects.
  • parameters are generated for a classifier based on a set of training examples.
  • a set of feature vectors may be generated from a set of training examples. The features of the set of feature vectors may be reduced.
  • the parameters to be generated may include a defined monotonic (e.g., sigmoid) function and a weight vector.
  • the weight vector may be determined by means of SVM training (or by another, known, technique).
  • the monotonic (e.g., sigmoid) function may be defined by means of an optimization method.
  • The text classifier may include a weight vector and a defined monotonic (e.g., sigmoid) function. Basically, the output of the text classifier of the present invention may be expressed as O_c = 1 / (1 + e^(A (w_c · x) + B)), where:
  • O_c is a classification output for category c
  • w_c is a weight vector parameter associated with category c
  • x is a (reduced) feature vector based on the unknown textual information object
  • A and B are adjustable parameters of a monotonic (e.g., sigmoid) function.
  • the calculation of the output from expression (2) is quicker than the calculation of the output from expression (1).
  • the classifier may (i) convert a textual information object to a feature vector, and (ii) reduce the feature vector to a reduced feature vector having fewer elements.
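Assuming a Platt-style sigmoid calibration of the linear score, which is one common reading of a monotonic function with adjustable parameters A and B applied to w_c · x, the classifier output might be computed as follows; the weights, feature values, and parameter settings are invented.

```python
import math

def classifier_output(w_c, x, A, B):
    # Linear score w_c . x passed through the monotonic (sigmoid)
    # function with adjustable parameters A and B.
    score = sum(wi * xi for wi, xi in zip(w_c, x))
    return 1.0 / (1.0 + math.exp(A * score + B))

# Hypothetical weight vector and reduced feature vector for category c.
o_c = classifier_output(w_c=[0.5, -1.0], x=[2.0, 0.5], A=-2.0, B=0.0)
```

With a negative A, larger scores map to outputs nearer 1, so O_c can be read as a calibrated confidence that the object belongs to category c.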
  • Inductive machine learning is used to ascribe properties or relations to types based on tokens (i.e., on one or a small number of observations or experiences), or to formulate laws based on limited observations of recurring patterns. Inductive machine learning involves reasoning from observed training cases to create general rules, which are then applied to the test cases.
  • Transductive machine learning is a powerful method that does not suffer from these disadvantages.
  • Transductive machine techniques may be capable of learning from a very small set of labeled training examples, automatically adapting to drifting classification concepts, and automatically correcting the labeled training examples. These advantages make transductive machine learning an interesting and valuable method for a large variety of commercial applications.
  • Transduction learns patterns in data. It extends the concept of inductive learning by learning not only from labeled data but also from unlabeled data. This enables transduction to learn patterns that are not or only partly captured in the labeled data. As a result transduction can, in contrast to rule based systems or systems based on inductive learning, adapt to dynamically changing environments. This capability enables transduction to be utilized for document discovery, data cleanup, and addressing drifting classification concepts, among other things.
  • The Support Vector Machine (SVM) is one commonly employed method of text classification. It approaches the problem of the large number of possible solutions, and the resulting generalization problem, by deploying constraints on the possible solutions utilizing concepts of regularization theory. For example, a binary SVM classifier selects, from all hyperplanes that separate the training data correctly, the hyperplane that maximizes the margin as its solution.
  • the constraint on the training data memorizes the data, whereas the regularization ensures appropriate generalization.
  • Inductive classification learns from training examples that have known labels, i.e. every training example's class membership is known. Where inductive classification learns from known labels, transductive classification determines the classification rules from labeled as well as unlabeled data.
  • An example of transductive SVM classification is shown in table 1.
  • Require Data matrix X of labeled training examples and their labels Y .
  • Require Data matrix X' of the unlabeled training examples.
  • Require A list of all possible labels assignments of the unlabeled training examples
  • Table 1 shows the principle of a transductive classification with Support Vector Machines: the solution is given by the hyperplane that yields the maximum margin over all possible label assignments of the unlabeled data. The number of possible label assignments grows exponentially in the number of unlabeled data, and for practically applicable solutions the algorithm in Table 1 must be approximated. An example of such an approximation is described in T. Joachims, Transductive inference for text classification using support vector machines, Technical report, Universitaet Dortmund, LAS VIII, 1999 (Joachims).
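The principle in Table 1 can be sketched in a few lines of Python for the special case of one-dimensional data, where the maximum-margin hyperplane reduces to a threshold. The function name and the 1-D simplification are illustrative choices, not part of the patent's algorithm; the sketch does, however, make the exponential growth in label assignments concrete, since it enumerates all 2^U of them:

```python
import itertools

def transductive_max_margin_1d(labeled, unlabeled):
    """Brute-force transductive SVM in one dimension (Table 1's principle):
    try every +/-1 assignment of the unlabeled points and keep the one
    whose separating threshold gives the widest margin while classifying
    the labeled examples correctly."""
    best = None  # (margin, threshold, assignment)
    for assign in itertools.product([-1, +1], repeat=len(unlabeled)):
        neg = [x for x, y in labeled if y == -1] + \
              [x for x, y in zip(unlabeled, assign) if y == -1]
        pos = [x for x, y in labeled if y == +1] + \
              [x for x, y in zip(unlabeled, assign) if y == +1]
        if not neg or not pos or max(neg) >= min(pos):
            continue  # not linearly separable with this assignment
        margin = min(pos) - max(neg)           # width of the empty band
        threshold = (max(neg) + min(pos)) / 2  # max-margin "hyperplane"
        if best is None or margin > best[0]:
            best = (margin, threshold, assign)
    return best

# two labeled points and two unlabeled points on the x-axis:
labeled = [(-2.0, -1), (+2.0, +1)]
unlabeled = [-1.5, +1.5]
margin, threshold, assign = transductive_max_margin_1d(labeled, unlabeled)
```

For these points, the widest margin (3.0) is obtained by labeling the left unlabeled point negative and the right one positive, with the separating threshold at 0. Each additional unlabeled point doubles the number of assignments to try, which is why approximations such as Joachims' are needed in practice.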
  • a label expectation of zero can be obtained by a fixed class prior probability equal to 1/2 or a class prior probability that is a random variable with a uniform prior distribution, i.e. an unknown class prior probability. Accordingly, in applications with known class prior probabilities that are not equal to 1/2, the algorithm could be improved by incorporating this additional information.
  • MED Maximum Entropy Discrimination
  • Inductive MED classification assumes a prior distribution over the parameters of the decision function, a prior distribution over the bias term, and a prior distribution over margins. It selects as a final distribution over these parameters the one that is closest to the prior distributions and yields an expected decision function that classifies the data points correctly.
  • the problem is formulated as follows: find the distribution over hyperplane parameters p(Θ), the bias p(b), and the data points' classification margins p(γ) whose combined probability distribution has a minimal Kullback-Leibler divergence KL to the combined respective prior distributions p₀, i.e. the solution minimizes KL(p(Θ, b, γ) ‖ p₀(Θ, b, γ)).
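As a small illustration of the divergence being minimized, here is a discrete Kullback-Leibler divergence in Python. The MED objective applies it to the joint distribution over hyperplane parameters, bias, and margins; this finite-distribution sketch only hints at that:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) between two discrete
    probability distributions: the quantity the MED objective minimizes
    between the solution distribution and the prior."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

KL(p ‖ p₀) is zero exactly when the solution distribution equals the prior, and grows as the classification constraints pull the solution away from the prior.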
  • Transductive MED classification. Require: Data matrix X of labeled and unlabeled training examples.
  • the label prior distribution is a δ function, thus effectively fixing the label to be either +1 or -1.
  • the label induction step determines the label probability distribution given a fixed probability distribution for the hyperplane parameters. Using the margin and label priors introduced above yields the following objective function for the label induction step (see Table 2)
  • unlabeled data points outside the margin, i.e. |s| > 1
  • data points close to the margin, i.e. |s| ≈ 1, yield the highest absolute expected label values
  • the M step of the transductive classification algorithm of Jaakkola determines the probability distributions for the hyperplane parameters, the bias term, and margins of the data points that are closest to the respective prior distribution under the constraints
  • ∀t: s_t⟨y_t⟩ − ⟨γ_t⟩ ≥ 0, (5)
  • s_t is the t-th data point's classification score, ⟨y_t⟩ its expected label and ⟨γ_t⟩ its expected margin.
  • the expected label for unlabeled data lies in the interval (-1, +1) and is estimated in the label induction step.
  • unlabeled data have to fulfill tighter classification constraints than labeled data since the classification score is scaled by the expected label.
  • unlabeled data close to the separating hyperplane have the most stringent classification constraints, since their score as well as the absolute value of their expected label are small.
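The effect of scaling the score by the expected label (Eq. 5) can be checked numerically. The helper below is a hypothetical illustration: it computes the minimum score magnitude a data point needs in order to satisfy s_t⟨y_t⟩ ≥ ⟨γ_t⟩:

```python
def required_score(expected_label, expected_margin=1.0):
    """Minimum |score| needed to satisfy s * <y> >= <gamma> (Eq. 5).
    Labeled data have <y> = +/-1; unlabeled data have |<y>| < 1 and
    therefore need a larger score to meet the same expected margin."""
    return expected_margin / abs(expected_label)
```

A labeled point (⟨y⟩ = ±1) needs |s| ≥ 1, while an unlabeled point with ⟨y⟩ = 0.25 needs |s| ≥ 4: the closer the expected label is to zero, the more stringent the constraint.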
  • the M step's full objective function given the prior distributions mentioned above is
  • the first term is derived from the Gaussian hyperplane parameters prior distribution
  • the second term is the margin prior regularization term
  • the last term is the bias prior regularization term derived from a Gaussian prior with zero mean and variance σ_b².
  • the prior distribution over the bias term can be interpreted as a prior distribution over class prior probabilities. Accordingly, the regularization term that corresponds to the bias prior distribution constrains the weight of the positive to negative examples. According to Eq. 6, the contribution of the bias term is minimized in case the collective pull of the positive examples on the hyperplane equals the collective pull of the negative examples.
  • the collective constraint on the Lagrange multipliers owing to the bias prior is weighted by the expected label of the data points and is, therefore, less restrictive for unlabeled data than for labeled data. Thus, unlabeled data have the ability to influence the final solution more strongly than the labeled data.
  • unlabeled data have to fulfill stricter classification constraints than the labeled data and their cumulative weight to the solution is less constrained than for labeled data.
  • unlabeled data with an expected label close to zero that lie within the margin of the current M step influence the solution the most.
  • the resulting net effect of formulating the E and M step this way is illustrated by applying this algorithm to the dataset shown in Fig. 2.
  • the dataset includes two labeled examples, a negative example (x) at x-position -1 and a positive example (+) at +1, and six unlabeled examples (o) between -1 and +1 along the x- axis.
  • the cross (x) denotes a labeled negative example, the plus sign (+) a labeled positive example, and the circles (o) unlabeled data.
  • the different plots show separating hyperplanes determined at various iterations of the M step.
  • the one unlabeled data point with a negative x-value is closer than any other unlabeled data to this separating hyperplane.
  • the M step suffers from a kind of shortsightedness, where the unlabeled data point closest to the current separating hyperplane determines the final position of the plane the most, and the data points further away are not very important.
  • One preferred approach of the present invention employs transductive classification using the framework of Maximum Entropy Discrimination (MED). It should be understood that various embodiments of the present invention, while applicable to classification, may also be applicable to other MED learning problems using transduction, including, but not limited to, transductive MED regression and graphical models.
  • the final solution is the expectation of all possible solutions according to the probability distribution that is closest to the assumed prior probability distribution under the constraint that the expected solution describes the training data correctly.
  • the prior probability distribution over solutions maps to a regularization term, i.e. by choosing a specific prior distribution one has selected a specific regularization.
  • Discriminative estimation as applied by Support Vector Machines is effective in learning from few examples. This method and apparatus of one embodiment of the present invention has this in common with Support Vector Machines and does not attempt to estimate more information than is needed for the classification task.
  • the method and apparatus of one embodiment of the present invention using Maximum Entropy Discrimination bridges the gap between pure discriminative, e.g. Support Vector Machine learning, and generative model estimation.
  • the method of one embodiment of the present invention as shown in Table 3 is an improved transductive MED classification algorithm that does not have the instability problem of the method discussed in Jaakkola, referenced herein. Differences include, but are not limited to, that in one embodiment of the present invention every data point has its own cost factor proportional to its absolute label expectation value |⟨y⟩|.
  • each data point's label prior probability is updated after each M step according to the estimated class membership probability as a function of the data point's distance to the decision function.
  • unlabeled data have small cost factors, yielding an expected label as a function of the classification score that is very flat (see Fig. 1); accordingly, to some extent all unlabeled data are allowed to pull on the hyperplane, albeit only with small weight.
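The qualitative behavior described above can be illustrated with a smooth label-expectation curve. A tanh shape is assumed here purely for illustration (the patent derives the exact form from the label prior in the E step); it shows how a small cost factor flattens the curve so that every unlabeled point pulls on the hyperplane only weakly:

```python
import math

def expected_label(score, cost_factor):
    """Illustrative label-expectation curve <y>(s). For a small cost
    factor the curve is nearly flat, so all unlabeled data contribute a
    little; for a large cost factor it saturates toward +/-1 quickly."""
    return math.tanh(cost_factor * score)
```

With cost factor 0.1, a point at score 1 has |⟨y⟩| below 0.2 and exerts only a weak pull; with cost factor 10, the same point has an expected label near ±1.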
  • the prior distribution over decision function parameters incorporates important prior knowledge of the specific classification problem at hand.
  • Other prior distributions of decision function parameters important for classification problems are, for example, a multinomial distribution, a Poisson distribution, a Cauchy distribution (Breit-Wigner), a Maxwell-Boltzmann distribution or a Bose-Einstein distribution.
  • the prior distribution over the threshold b of the decision function is given by a Gaussian distribution with mean μ_b and variance σ_b².
  • s_t is the t-th data point's classification score determined in the previous M step and p₀(y_t) the data point's binary label prior probability.
  • the section herein entitled M STEP describes the algorithm to solve the M step objective function. Also, the section herein entitled E STEP describes the E step algorithm.
  • the step EstimateClassProbability in line 5 of Table 3 uses the training data to determine the calibration parameters to turn classification scores into class membership probabilities, i.e. the probability of the class given the score
  • Relevant methods for estimating the score calibration to probabilities are described in J. Platt, Probabilistic outputs for support vector machines and comparison to regularized likelihood methods, pages 61-74, 2000 (Platt) and B. Zadrozny and C. Elkan, Transforming classifier scores into accurate multi-class probability estimates, 2002 (Zadrozny).
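A minimal sketch of Platt-style calibration, fitting a sigmoid σ(A·s + B) that maps raw classifier scores to class-membership probabilities. Plain gradient descent on the log loss is used here for brevity; Platt's published method uses a more careful optimizer and regularized target values:

```python
import math

def platt_calibrate(scores, labels, lr=0.1, epochs=2000):
    """Fit p(class | score) = sigmoid(A*s + B) by gradient descent on
    the log loss. labels are 0/1; returns a score -> probability map."""
    A, B = 0.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(A * s + B)))
            grad_a += (p - y) * s
            grad_b += (p - y)
        A -= lr * grad_a / n
        B -= lr * grad_b / n
    return lambda s: 1.0 / (1.0 + math.exp(-(A * s + B)))

# calibrate on four (score, label) training pairs:
prob = platt_calibrate([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
```

After fitting, high positive scores map to probabilities near 1 and strongly negative scores to probabilities near 0, which is exactly the score-to-probability step used in line 5 of Table 3.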
  • the cross (x) denotes a labeled negative example, the plus sign (+) a labeled positive example, and the circles (o) unlabeled data.
  • the different plots show separating hyperplanes determined at various iterations of the M step.
  • the 20-th iteration shows the final solution selected by the improved transductive MED classifier.
  • Fig. 3 shows the improved transductive MED classification algorithm applied to the toy dataset introduced above.
  • the method 100 begins at step 102 and at step 104 accesses stored data 106.
  • the data is stored at a memory location and includes labeled data, unlabeled data and at least one predetermined cost factor.
  • the data 106 includes data points having assigned labels.
  • the assigned labels identify whether a labeled data point is intended to be included within a particular category or excluded from a particular category.
  • step 108 determines the label prior probabilities of the data points using the label information of the data points. Then, at step 110, the expected labels of the data points are determined according to the label prior probabilities.
  • step 112 includes iterative training of the transductive MED classifier by scaling the cost factors of the unlabeled data points. In each iteration of the calculation the unlabeled data points' cost factors are scaled. As such, the MED classifier learns through repeated iterations of calculations.
  • the trained classifier then accesses input data 114 at step 116. The trained classifier can then complete the step of classifying input data at step 118, and the method terminates at step 120.
  • the unlabeled data of 106 and the input data 114 may be derived from a single source.
  • the input data/unlabeled data can be used in the iterative process of 112 which is then used to classify at 118.
  • the input data 114 may include a feedback mechanism to supply the input data to the stored data at 106 such that the MED classifier of 112 can dynamically learn from new data that is input.
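The flow of method 100 (initialize expected labels, iteratively scale unlabeled cost factors, retrain, stop at convergence) can be sketched on a toy one-dimensional problem. Everything below is an illustrative stand-in: a weighted-midpoint threshold replaces the MED optimization, and a tanh curve replaces the derived label expectation:

```python
import math

def train_transductive_1d(labeled, unlabeled, cost=1.0, max_iter=50, tol=1e-6):
    """Toy 1-D analogue of steps 108-118: expected labels of unlabeled
    points start at 0 (uninformative prior); each iteration scales every
    unlabeled point's weight by cost * |<y>|, refits a threshold
    classifier, and recomputes <y> from the score, until the threshold
    stops moving."""
    threshold, exp_labels = None, [0.0] * len(unlabeled)
    for _ in range(max_iter):
        pts = [(x, y, 1.0) for x, y in labeled] + \
              [(x, (1 if e >= 0 else -1), cost * abs(e))
               for x, e in zip(unlabeled, exp_labels)]
        pos = [(x, w) for x, y, w in pts if y > 0]
        neg = [(x, w) for x, y, w in pts if y < 0]
        mean = lambda g: sum(x * w for x, w in g) / max(sum(w for _, w in g), 1e-12)
        new_threshold = (mean(pos) + mean(neg)) / 2
        exp_labels = [math.tanh(x - new_threshold) for x in unlabeled]
        if threshold is not None and abs(new_threshold - threshold) < tol:
            break
        threshold = new_threshold
    return threshold, exp_labels

# two labeled points anchor the classes; three unlabeled points refine the threshold
threshold, exp_labels = train_transductive_1d([(-1.0, -1), (1.0, +1)], [-0.5, 0.3, 0.8])
```

The unlabeled points acquire soft labels whose sign follows their side of the learned threshold, mirroring how the classifier of 112 learns through repeated iterations.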
  • a control flow diagram is illustrated showing another method of classification of unlabeled data of one embodiment of the present invention including user defined prior probability information.
  • the method 200 begins at step 202 and at step 204 accesses stored data 206.
  • the data 206 includes labeled data, unlabeled data, a predetermined cost factor, and prior probability information provided by a user.
  • the labeled data of 206 includes data points having assigned labels. The assigned labels identify whether the labeled data point is intended to be included within a particular category or excluded from a particular category.
  • expected labels are calculated from the data of 206.
  • the expected labels are then used in step 210 along with labeled data, unlabeled data and cost factors to conduct iterative training of a transductive MED classifier.
  • the iterative calculations of 210 scale the cost factors of the unlabeled data at each calculation. The calculations continue until the classifier is properly trained.
  • the trained classifier then accesses input data at 214 from input data 212.
  • the trained classifier can then complete the step of classifying input data at step 216.
  • the input data and the unlabeled data may derive from a single source and may be put into the system at both 206 and 212.
  • the input data 212 can influence the training at 210 such that the process may dynamically change over time with continuing input data.
  • a monitor may determine whether or not the system has reached convergence. Convergence may be determined when the change of the hyperplane between each iteration of the MED calculation falls below a predetermined threshold value. In an alternative embodiment of the present invention, the threshold value can be determined when the change of the determined expected label falls below a predetermined threshold value. If convergence is reached, then the iterative training process may cease. Referring particularly to Fig. 6, illustrated is a more detailed control flow diagram of the iterative training process of at least one embodiment of the method of the present invention.
  • the process 300 commences at step 302 and at step 304 data is accessed from data 306 and may include labeled data, unlabeled data, at least one predetermined cost factor, and prior probability information.
  • the labeled data points of 306 include a label identifying whether the data point is a training example for data points to be included in the designated category or a training example for data points to be excluded from a designated category.
  • the prior probability information of 306 includes the probability information of labeled data sets and unlabeled data sets.
  • in step 308, expected labels are determined from the prior probability information of 306.
  • in step 310, the cost factor is scaled for each unlabeled data point proportionally to the absolute value of the expected label of the data point.
  • An MED classifier is then trained in step 312 by determining the decision function that maximizes the margin between the included training and excluded training examples utilizing the labeled as well as the unlabeled data as training examples according to their expected labels.
  • step 314 classification scores are determined using the trained classifier of 312.
  • classification scores are calibrated to class membership probability.
  • label prior probability information is updated according to the class membership probability.
  • An MED calculation is performed in step 320 to determine label and margin probability distributions, wherein the previously determined classification scores are used in the MED calculation.
  • in step 322, new expected labels are computed, and the expected labels are updated in step 324 using the computations from step 322.
  • step 326 the method determines whether convergence has been achieved. If so, the method terminates at step 328. If convergence is not reached, another iteration of the method is completed starting with step 310. Iterations are repeated until convergence is reached thus resulting in an iterative training of the MED classifier. Convergence may be reached when change of the decision function between each iteration of the MED calculation falls below a predetermined value. In an alternative embodiment of the present invention, convergence may be reached when the change of the determined expected label value falls below a predetermined threshold value.
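The convergence test of step 326 can be sketched as a simple helper. Representing the monitored quantities (decision-function parameters or expected labels) as numeric vectors is an assumption made for illustration:

```python
def has_converged(old_values, new_values, threshold=1e-4):
    """True when the largest change between successive iterations of the
    monitored quantities (decision-function parameters or expected
    labels) falls below a predetermined threshold value."""
    return max(abs(a - b) for a, b in zip(old_values, new_values)) < threshold
```

The training loop calls this after each iteration and stops as soon as it returns True; otherwise another iteration starts at step 310.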
  • Fig. 7 illustrates a network architecture 700, in accordance with one embodiment.
  • a plurality of remote networks 702 are provided including a first remote network 704 and a second remote network 706.
  • a gateway 707 may be coupled between the remote networks 702 and a proximate network 708.
  • the networks 704, 706 may each take any form including, but not limited to a LAN, a WAN such as the Internet, PSTN, internal telephone network, etc.
  • the gateway 707 serves as an entrance point from the remote networks 702 to the proximate network 708.
  • the gateway 707 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 707, and as a switch, which furnishes the actual path in and out of the gateway 707 for a given packet.
  • At least one data server 714 is coupled to the proximate network 708, and is accessible from the remote networks 702 via the gateway 707. It should be noted that the data server(s) 714 may include any type of computing device/groupware. Coupled to each data server 714 is a plurality of user devices 716. Such user devices 716 may include a desktop computer, laptop computer, hand-held computer, printer or any other type of logic. It should be noted that a user device 717 may also be directly coupled to any of the networks, in one embodiment.
  • a facsimile machine 720 or series of facsimile machines 720 may be coupled to one or more of the networks 704, 706, 708.
  • databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 704, 706, 708.
  • a network element may refer to any component of a network.
  • Fig. 8 shows a representative hardware environment associated with a user device 716 of Fig. 7, in accordance with one embodiment.
  • Such Fig. illustrates a typical hardware configuration of a workstation having a central processing unit 810, such as a microprocessor, and a number of other units interconnected via a system bus 812.
  • the workstation shown in Fig. 8 includes a Random Access Memory (RAM) 814 and a Read Only Memory (ROM)
  • I/O adapter 818 for connecting peripheral devices such as disk storage units 820 to the bus 812
  • user interface adapter 822 for connecting a keyboard 824, a mouse 826, a speaker 828, a microphone 832, and/or other user interface devices such as a touch screen and a digital camera (not shown) to the bus 812
  • communication adapter 834 for connecting the workstation to a communication network 835 (e.g., a data processing network)
  • a display adapter 836 for connecting the bus 812 to a display device 838.
  • One embodiment of the present invention comprises a memory device 814 for storing labeled data 416.
  • the labeled data points 416 each include a label indicating whether the data point is a training example for data points being included in the designated category or a training example for data points being excluded from a designated category.
  • Memory 814 also stores unlabeled data 418, prior probability data 420 and the cost factor data 422.
  • the processor 810 accesses the data from the memory 814 and, using transductive MED calculations, trains a binary classifier, enabling it to classify unlabeled data.
  • the processor 810 uses iterative transductive calculation by using the cost factor and training examples from labeled and unlabeled data and scaling that cost factor as a function of expected label value, thus affecting the cost factor data 422, which is then re-input into processor 810.
  • the cost factor 422 changes with each iteration of the MED classification by the processor 810. Once the processor 810 adequately trains an MED classifier, the processor can then construct the classifier to classify the unlabeled data into classified data 424.
  • Transductive SVM and MED formulations of the prior art lead to an exponential growth of possible label assignments, and approximations have to be developed for practical applications.
  • a different formulation of the transductive MED classification is introduced that does not suffer from an exponential growth of possible label assignments and allows a general closed form solution.
  • Θ · X is the dot product between the separating hyperplane's weight vector Θ and the data point X.
  • this embodiment of the present invention finds a separating hyperplane that is a compromise of being closest to the chosen prior distribution, separating the labeled data correctly, and having no unlabeled data between the margins.
  • the advantage is that no prior distribution over labels has to be introduced, thus, avoiding the problem of exponentially growing label assignments.
  • using the prior distributions given in the Eqs. 7, 8, and 9 for the hyperplane parameters, the bias, and the margins yields the following partition function
  • G₃ = G₁ − 2G₂,
  • The objective function can be solved by applying similar techniques as in the case of known labels, as discussed in the section herein entitled M STEP. The difference is that the matrix G₃ in the quadratic form of the maximum margin term now has off-diagonal terms.
  • MED can be applied to solve classification of data, in general, any kind of discriminant function and prior distributions, regression and graphical models (T. Jebara, Machine Learning Discriminative and Generative, Kluwer Academic Publishers) (Jebara).
  • the applications of the embodiments of the present invention can be formulated as pure inductive learning problems with known labels as well as a transductive learning problem with labeled as well as unlabeled training examples.
  • the improvements to the transductive MED classification algorithm described in Table 3 are applicable as well to general transductive MED classification, transductive MED regression, transductive MED learning of graphical models.
  • the word "classification" may include regression or graphical models.
  • ∀t: 0 ≤ λ_t ≤ c_t.
  • the bias equals the expected bias ⟨b⟩ = σ_b² Σ_t λ_t⟨y_t⟩ + μ_b, yielding
  • the gap can also be measured as a way to determine numerical convergence.
  • the method of this alternate embodiment differs in that only one example can be optimized at a time. Therefore the training heuristic is to alternate between the examples in I 0 and all of the examples every other time.
  • s_t is the t-th data point's classification score determined in the previous M step.
  • the Lagrange multipliers λ_t are determined by maximizing the objective function.
  • Eq. 35 cannot be solved analytically, but has to be determined by applying, e.g., a line search for each unlabeled example's Lagrange multiplier that satisfies Eq. 35.
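One way to realize the per-example search is bisection on the stationarity condition, assuming it is monotone in the multiplier. The residual function g and the bracketing interval are caller-supplied assumptions; Eq. 35 itself is not reproduced here:

```python
def solve_multiplier(g, lo=0.0, hi=1.0, tol=1e-10):
    """Find lambda in [lo, hi] with g(lambda) = 0 by bisection, where g
    is the (assumed monotone) residual of the stationarity condition.
    Requires g(lo) and g(hi) to bracket the root."""
    g_lo = g(lo)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if g_lo * g(mid) <= 0:
            hi = mid          # root lies in the lower half
        else:
            lo, g_lo = mid, g(mid)  # root lies in the upper half
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# e.g. a monotone residual whose root is at 0.25:
root = solve_multiplier(lambda lam: lam - 0.25)
```

Bisection halves the bracket each step, so the multiplier is located to tolerance tol in a fixed number of residual evaluations per unlabeled example.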
  • labeled data points are received at step 1002, where each of the labeled data points has at least one label which indicates whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
  • unlabeled data points are received at step 1004, as well as at least one predetermined cost factor of the labeled data points and unlabeled data points.
  • the data points may contain any medium, e.g. words, images, sounds, etc. Prior probability information of labeled and unlabeled data points may also be received.
  • the label of the included training example may be mapped to a first numeric value, e.g. +1, etc.
  • the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
  • the labeled data points, unlabeled data points, input data points, and at least one predetermined cost factor of the labeled data points and unlabeled data points may be stored in a memory of a computer.
  • a transductive MED classifier is trained through iterative calculation using said at least one cost factor and the labeled data points and the unlabeled data points as training examples. For each iteration of the calculations, the unlabeled data point cost factor is adjusted as a function of an expected label value, e.g.
  • the transductive classifier may learn using prior probability information of the labeled and unlabeled data, which further improves stability.
  • the iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
  • the trained classifier is applied to classify at least one of the unlabeled data points, the labeled data points, and input data points.
  • Input data points may be received before or after the classifier is trained, or may not be received at all.
  • the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data points as learning examples according to their expected label. Alternatively, the decision function may be determined with minimal KL divergence using a multinomial distribution for the decision function parameters.
  • a classification of the classified data points, or a derivative thereof is output to at least one of a user, another system, and another process.
  • the system may be remote or local.
  • Examples of the derivative of the classification may be, but are not limited to, the classified data points themselves, a representation or identifier of the classified data points or host file/document, etc.
  • computer executable program code is deployed to and executed on a computer system.
  • This program code comprises instructions for accessing stored labeled data points in a memory of a computer, where each of said labeled data points has at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
  • the computer code comprises instructions for accessing unlabeled data points from a memory of a computer as well as accessing at least one predetermined cost factor of the labeled data points and unlabeled data points from a memory of a computer. Prior probability information of labeled and unlabeled data points stored in a memory of a computer may also be accessed.
  • the label of the included training example may be mapped to a first numeric value, e.g. +1, etc.
  • the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
  • the program code comprises instructions for training a transductive classifier through iterative calculation, using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples. Also, for each iteration of the calculation, the unlabeled data point cost factor is adjusted as a function of the expected label value of the data point, e.g. the absolute value of the expected label of a data point. Also, for each iteration, the prior probability information may be adjusted according to an estimate of a data point class membership probability. The iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
  • the program code comprises instructions for applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points, as well as instructions for outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process.
  • the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
  • a data processing apparatus comprises at least one memory for storing: (i) labeled data points, wherein each of said labeled data points has at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; (ii) unlabeled data points; and (iii) at least one predetermined cost factor of the labeled data points and unlabeled data points.
  • the memory may also store prior probability information of labeled and unlabeled data points.
  • the label of the included training example may be mapped to a first numeric value, e.g. +1, etc.
  • the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
  • the data processing apparatus comprises a transductive classifier trainer to iteratively teach the transductive classifier using transductive Maximum Entropy Discrimination (MED) using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples.
  • the cost factor of the unlabeled data point is adjusted as a function of the expected label value of the data point, e.g. the absolute value of the expected label of a data point, etc.
  • the prior probability information may be adjusted according to an estimate of a data point class membership probability.
  • the apparatus may further comprise a means for determining the convergence of data values, e.g. when the change of the decision function of the transductive classifier calculation falls below a predetermined threshold value, when the change of the determined expected label values falls below a predetermined threshold value, etc., and terminating calculations upon determination of convergence.
  • a trained classifier is used to classify at least one of the unlabeled data points, the labeled data points, and input data points.
  • the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined by a processor utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
  • a classification of the classified data points, or derivative thereof is output to at least one of a user, another system, and another process.
  • an article of manufacture comprises a program storage medium readable by a computer, where the medium tangibly embodies one or more programs of instructions executable by a computer to perform a method of data classification.
  • labeled data points are received, where each of the labeled data points has at least one label which indicates whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
  • unlabeled data points are received, as well as at least one predetermined cost factor of the labeled data points and unlabeled data points.
  • Prior probability information of labeled and unlabeled data points may also be stored in a memory of a computer.
  • the label of the included training example may be mapped to a first numeric value, e.g. +1, etc.
  • the label of the excluded training example may be mapped to a second numeric value, e.g. -1 , etc.
  • a transductive classifier is trained with iterative Maximum Entropy Discrimination (MED) calculation using the at least one stored cost factor and the stored labeled data points and the unlabeled data points as training examples.
  • the unlabeled data point cost factor is adjusted as a function of an expected label value of the data point, e.g. the absolute value of the expected label of a data point, etc.
  • the prior probability information may be adjusted according to an estimate of a data point class membership probability.
  • the iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
  • input data points are accessed from the memory of a computer, and the trained classifier is applied to classify at least one of the unlabeled data points, the labeled data points, and input data points.
  • the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
  • a classification of the classified data points, or a derivative thereof, is output to at least one of a user, another system, and another process.
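The iterative training loop described above can be sketched in code. This is a toy illustration only: a weighted logistic surrogate stands in for the MED calculation, and all function and parameter names are illustrative assumptions, not the patent's actual implementation. It shows the key moving parts named in the text: labels mapped to +1/-1, an unlabeled-point cost factor that scales with the absolute value of the expected label, and a convergence check on the change of the expected labels.

```python
import numpy as np

def train_transductive(X_lab, y_lab, X_unl, C=1.0, lr=0.1, tol=1e-3, max_iter=50):
    # Stack labeled and unlabeled points; unlabeled expected labels start at 0.
    X = np.vstack([X_lab, X_unl])
    n_lab = len(y_lab)
    y_exp = np.concatenate([y_lab.astype(float), np.zeros(len(X_unl))])
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        # Cost factor: full cost C for labeled points; for unlabeled points it
        # is adjusted as a function of |expected label|, as described above.
        cost = np.concatenate([np.full(n_lab, C), C * np.abs(y_exp[n_lab:]) + 1e-3])
        # A few weighted gradient steps on a logistic surrogate stand in for
        # the MED decision-function update (an assumption of this sketch).
        for _ in range(20):
            margins = y_exp * (X @ w)
            grad = -((cost * y_exp) / (1.0 + np.exp(margins))) @ X / len(X)
            w -= lr * grad
        # Re-estimate expected labels of the unlabeled points from the decision function.
        y_new = np.tanh(X_unl @ w)
        converged = np.max(np.abs(y_new - y_exp[n_lab:])) < tol
        y_exp[n_lab:] = y_new
        if converged:  # change of expected labels fell below the threshold
            break
    return w, y_exp[n_lab:]
```

On two well-separated clusters with one labeled example each, the unlabeled points acquire expected labels of the correct sign after a few iterations.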
  • a method for classification of unlabeled data in a computer-based system is presented.
  • labeled data points are received, each of said labeled data points having at least one label indicating whether the data point is a training example for data
  • labeled and unlabeled data points are received, as are prior label probability information of labeled data points and unlabeled data points. Further, at least one predetermined cost factor of the labeled data points and unlabeled data points is received.
  • the expected labels for each labeled and unlabeled data point are determined according to the label prior probability of the data point. The following substeps are repeated until substantial convergence of data values:
  • a classification of the input data points, or derivative thereof is output to at least one of a user, another system, and another process.
  • Convergence may be reached when the change of the decision function falls below a predetermined threshold value. Additionally, convergence may also be reached when the change of the determined expected label value falls below a predetermined threshold value.
  • the label of the included training example may have any value, for example, a value of +1, and the label of the excluded training example may have any value, for example, a value of -1.
  • a method for classifying documents is presented in Fig. 11.
  • at least one seed document having a known confidence level is received in step 1100, as well as unlabeled documents and at least one predetermined cost factor.
  • the seed document and other items may be received from a memory of a computer, from a user, from a network connection, etc., and may be received after a request from the system performing the method.
  • the at least one seed document may have a label indicative of whether the document is included in a designated category, may contain a list of keywords, or have any other attribute that may assist in classifying documents.
  • a transductive classifier is trained through iterative calculation using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value.
  • a data point label prior probability for the labeled and unlabeled documents may also be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
  • step 1104 confidence scores are stored for the unlabeled documents, and identifiers of the unlabeled documents having the highest confidence scores are output in step 1106 to at least one of a user, another system, and another process.
  • the identifiers may be electronic copies of the document themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
  • confidence scores may be stored after each of the iterations, wherein an identifier of the unlabeled document having the highest confidence score after each iteration is output.
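Steps 1104 and 1106 above (storing confidence scores per iteration and outputting the highest-scoring identifiers) reduce to a simple ranking. The sketch below assumes the scores are available as one mapping of document identifier to confidence per iteration; the interface is illustrative, not taken from the patent.

```python
def top_identifiers(confidence_history, k=1):
    # confidence_history: one {doc_id: confidence} mapping per iteration.
    # Returns, for each iteration, the identifiers of the k unlabeled
    # documents with the highest confidence scores.
    return [sorted(scores, key=scores.get, reverse=True)[:k]
            for scores in confidence_history]
```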
  • One embodiment of the present invention is capable of discovering patterns that link the initial document to the remaining documents.
  • the task of discovery is one area where this pattern discovery proves particularly valuable. For instance, in pre-trial legal discovery, a large number of documents has to be researched with regard to possible connections to the lawsuit at hand. The ultimate goal is to find the "smoking gun."
  • a common task for inventors, patent examiners, as well as patent lawyers is to evaluate the novelty of a technology through prior art search. In particular the task is to search all published patents and other publications and find documents within this set that might be related to the specific technology that is examined with regard to its novelty.
  • the task of discovery involves finding a document or a set of documents within a set of data. Given an initial document or concept, a user may want to discover documents that are related to the initial document or concept. However, the notion of relationship between the initial document or concept and the target documents, i.e. the documents that are to be discovered, is only well understood after the discovery has taken place. By learning from labeled and unlabeled documents, concepts, etc., the present invention can learn patterns and relationships between the initial document or documents and the target documents. In another embodiment of the present invention, a method for analyzing documents associated with legal discovery is presented in Fig. 12. In use, documents associated with a legal matter are received in step 1200.
  • Such documents may include electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. Additionally, a document classification technique is performed on the documents in step 1202. Further, identifiers of at least some of the documents are output in step 1204 based on the classification thereof. As an option, a representation of links between the documents may also be output.
  • the document classification technique may include any type of process, e.g. a transductive process, etc.
  • a transductive classifier is trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the documents associated with the legal matter.
  • the cost factor is preferably adjusted as a function of an expected label value, and the trained classifier is used to classify the received documents.
  • This process may further comprise receiving a data point label prior probability for the labeled and unlabeled documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
  • the document classification technique may include one or more of a support vector machine process and a maximum entropy discrimination process.
  • a classifier is trained based on a search query in step 1300.
  • a plurality of prior art documents are accessed in step 1302.
  • Such prior art may include any information that has been made available to the public in any form before a given date.
  • Such prior art may also or alternatively include any information that has not been made available to the public in any form before a given date.
  • Illustrative prior art documents may be any type of documents, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, portions of a website, etc.
  • a document classification technique is performed on at least some of the prior art documents in step 1304 using the classifier, and identifiers of at least some of the prior art documents are output in step 1306 based on the classification thereof.
  • the document classification technique may include one or more of any process, including a support vector machine process, a maximum entropy discrimination process, or any inductive or transductive technique described above. Also or alternatively, a representation of links between the documents may also be output. In yet another embodiment, a relevance score of at least some of the prior art documents is output based on the classification thereof.
  • the search query may include at least a portion of a patent disclosure.
  • Illustrative patent disclosures include a disclosure created by an inventor summarizing the invention, a provisional patent application, a nonprovisional patent application, a foreign patent or patent application, etc.
  • the search query includes at least a portion of a claim from a patent or patent application.
  • the search query includes at least a portion of an abstract of a patent or patent application.
  • the search query includes at least a portion of a summary from a patent or patent application.
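A minimal sketch of ranking prior art against a search query built from a claim, abstract, or summary. A bag-of-words cosine similarity stands in here for the trained classifier's decision function; the function names and the similarity measure are assumptions of this sketch, not the patent's method.

```python
from collections import Counter
import math

def relevance_scores(query, documents):
    # documents: {doc_id: text}. Returns (doc_id, relevance) pairs sorted
    # by descending relevance to the query, mirroring the relevance-score
    # output described above.
    def vec(text):
        return Counter(text.lower().split())
    def cosine(a, b):
        num = sum(a[t] * b[t] for t in set(a) & set(b))
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0
    q = vec(query)
    return sorted(((doc_id, cosine(q, vec(text))) for doc_id, text in documents.items()),
                  key=lambda kv: -kv[1])
```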
  • Fig. 27 illustrates a method for matching documents to claims.
  • a classifier is trained based on at least one claim of a patent or patent application. Thus, one or more claims, or a portion thereof, may be used to train the classifier.
  • a plurality of documents are accessed. Such documents may include prior art documents, documents describing potentially infringing or anticipating products, etc.
  • a document classification technique is performed on at least some of the documents using the classifier.
  • identifiers of at least some of the documents are output based on the classification thereof.
  • a relevance score of at least some of the documents may also be output based on the classification thereof.
  • An embodiment of the present invention may be used for the classification of patent applications.
  • patents and patent applications are currently classified by subject matter using the United States Patent Classification (USPC) system.
  • This task is currently performed manually, and therefore is very expensive and time consuming.
  • Such manual classification is also subject to human errors. Compounding the complexity of such a task is that the patent or patent application may be classified into multiple classes.
  • Fig. 28 depicts a method for classifying a patent application according to one embodiment.
  • a classifier is trained based on a plurality of documents known to be in a particular patent classification. Such documents may typically be patents and patent applications (or portions thereof), but could also be summary sheets describing target subject matter of the particular patent classification.
  • step 2802 at least a portion of a patent or patent application is received. The portion may include the claims, summary, abstract, specification, title, etc.
  • a document classification technique is performed on the at least the portion of the patent or patent application using the classifier.
  • a classification of the patent or patent application is output. As an option, a user may manually verify the classification of some or all of the patent applications.
  • The document classification technique is preferably a yes/no classification technique. In other words, if the probability that the document is in the proper class is above a threshold, the decision is yes: the document belongs in this class. If the probability that the document is in the proper class is below the threshold, the decision is no: the document does not belong in this class.
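Because the yes/no rule is applied independently per classification, a patent may fall into several classes at once, as noted above. A minimal sketch of that decision rule (names and threshold are illustrative):

```python
def classify_patent(class_probs, threshold=0.5):
    # class_probs: {classification: membership probability}. Apply the
    # yes/no rule per classification; a patent may land in several classes,
    # or in none (a candidate for manual verification).
    return sorted(c for c, p in class_probs.items() if p >= threshold)
```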
  • Fig. 29 depicts yet another method for classifying a patent application.
  • a document classification technique is performed on at least the portion of a patent or patent application using a classifier that was trained based on at least one document associated with a particular patent classification.
  • the document classification technique is preferably a yes/no classification technique.
  • a classification of the patent or patent application is output.
  • the respective method may be repeated using a different classifier that was trained based on a plurality of documents known to be in a different patent classification.
  • classification of a patent should be based on the claims.
  • one approach uses the Description of a patent for training, and classifies an application based on its Claims.
  • Another approach uses the Description and Claims for training, and classifies based on the Abstract.
  • whatever portion of a patent or application is used to train, that same type of content is used when classifying, i.e., if the system is trained on claims, the classification is based on claims.
  • the document classification technique may include any type of process, e.g. a transductive process, etc.
  • the classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the prior art documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and the trained classifier may be used to classify the prior art documents.
  • a data point label prior probability for the seed document and prior art documents may also be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
  • the seed document may be any document, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, a website, a patent disclosure, etc.
  • Fig. 14 describes one embodiment of the present invention.
  • a set of data is read. The discovery of documents within this set that are relevant to the user is desired.
  • an initial seed document or documents are labeled.
  • the documents may be any type of documents, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, a website, etc. It is also possible to seed the transduction process with a string of different key words or a document provided by the user.
  • a transductive classifier is trained using the labeled data as well as the set of unlabeled data in the given set. At each label induction step during the iterative transduction process, the confidence scores determined during label induction are stored.
  • the documents that achieved high confidence scores at the label induction steps are displayed in step 1408 for the user. These documents with high confidence scores represent documents relevant to the user for purposes of discovery.
  • the display may be in chronological order of the label induction steps starting with the initial seed document to the final set of documents discovered at the last label induction step.
  • the cleanup and classification technique may include any type of process, e.g. a transductive process, etc.
  • any inductive or transductive technique described above may be used.
  • the keys of the entries in the database are utilized as labels associated with some confidence level according to the expected cleanliness of the database.
  • the labels together with the associated confidence level, i.e. the expected labels, are then used to train a transductive classifier that corrects the labels (keys) in order to achieve a more consistent organization of the data in the database. For example, invoices first have to be classified according to the company or person that originated the invoice in order to enable automatic data extraction.
  • training examples are needed to set up an automatic classification system.
  • training examples provided by the customer often contain misclassified documents or other noise, e.g. fax cover sheets, that have to be identified and removed prior to training the automatic classification system in order to obtain accurate classification.
  • the Patent Office undergoes a continuous reclassification process, in which they (1) evaluate an existing branch of their taxonomy for confusion, (2) restructure that taxonomy to evenly distribute overly congested nodes, and (3) reclassify existing patents into the new structure.
  • the transductive learning methods presented herein may be used by the Patent Office, and the companies to which this work is outsourced, to reevaluate their taxonomy and to assist them in (1) building a new taxonomy for a given main classification, and (2) reclassifying existing patents. Transduction learns from labeled and unlabeled data, whereby the transition from labeled to unlabeled data is fluent.
  • a method for cleaning up data is presented in Fig. 15.
  • a plurality of labeled data items are received in step 1500, and subsets of the data items for each of a plurality of categories are selected in step 1502. Additionally, an uncertainty for the data items in each subset is set in step 1504 to about zero, and an uncertainty for the data items not in the subsets is set in step 1506 to a predefined value that is not about zero.
  • a transductive classifier is trained in step 1508 through iterative calculation using the uncertainties, the data items in the subsets, and the data items not in the subsets as training examples, and the trained classifier is applied to each of the labeled data items in step 1510 to classify each of the data items.
  • a classification of the input data items, or derivative thereof is output in step 1512 to at least one of a user, another system, and another process.
  • the subsets may be selected at random and may be selected and verified by a user.
  • the label of at least some of the data items may be changed based on the classification.
  • identifiers of data items having a confidence level below a predefined threshold after classification thereof may be output to a user.
  • the identifiers may be electronic copies of the document themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
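Steps 1504 and 1506 of the cleanup method above amount to assigning uncertainties before retraining: a small subset per category is treated as verified (uncertainty of about zero) while the rest carries the estimated noise level. A sketch under assumed interfaces:

```python
import random

def assign_uncertainties(items, labels, noise_level=0.2, verified_per_class=2, seed=0):
    # For each category, a small random subset is assumed correctly labeled
    # (uncertainty ~0, step 1504); all other items carry the estimated
    # noise level as their label uncertainty (step 1506) ahead of
    # transductive retraining.
    rng = random.Random(seed)
    by_class = {}
    for i, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(i)
    uncertainty = [noise_level] * len(items)
    for lab, idxs in by_class.items():
        for i in rng.sample(idxs, min(verified_per_class, len(idxs))):
            uncertainty[i] = 0.0
    return uncertainty
```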
  • two choices to start a cleanup process are presented to the user at step 1600.
  • One choice is fully automatic cleanup at step 1602, where for each concept or category a specified number of documents are randomly selected and assumed to be correctly organized.
  • alternatively, a number of documents can be flagged for manual review and verification that the label assignments for each concept or category are correct.
  • An estimate of the noise level in the data is received at step 1606.
  • the transductive classifier is trained in step 1610 using the verified (manually verified or randomly selected) data and the unverified data in step 1608. Once training is finished the documents are reorganized according to the new labels. Documents with low confidence levels in their label assignments below a specified threshold are displayed for the user for manual review in step 1612. Documents with confidence levels in their label assignments above a specified threshold are automatically corrected according to transductive label assignments in step 1614.
  • a method for managing medical records is presented in Fig. 17.
  • a classifier is trained based on a medical diagnosis in step 1700, and a plurality of medical records is accessed in step 1702.
  • a document classification technique is performed on the medical records in step 1704 using the classifier, and an identifier of at least one of the medical records having a low probability of being associated with the medical diagnosis is output in step 1706.
  • the document classification technique may include any type of process, e.g. a transductive process, etc., and may include one or more of any inductive or transductive technique described above, including a support vector machine process, a maximum entropy discrimination process, etc.
  • the classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the medical records, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and the trained classifier may be used to classify the medical records.
  • a data point label prior probability for the seed document and medical records may also be received, wherein for each
  • Another embodiment of the present invention accounts for dynamic, shifting classification concepts. For example, in forms processing applications documents are classified using the layout information and/or the content information of the documents to classify the documents for further processing.
  • transductive classification adapts to these changes automatically, yielding the same or comparable classification accuracy despite the drifting classification concepts. This is in contrast to rule-based systems or inductive classification methods that, without manual adjustments, will start to suffer in classification accuracy owing to the concept drift.
  • An example is invoice processing, which traditionally involves inductive learning or rule-based systems that utilize the invoice layout. Under these traditional systems, if a change in the layout occurs, the systems have to be manually reconfigured by either labeling new training data or by determining new rules.
  • transduction makes the manual reconfiguration unnecessary by automatically adapting to the small changes in layout of the invoices.
  • transductive classification may be applied to the analysis of customer complaints in order to monitor the changing nature of such complaints. For example, a company can automatically link product changes with customer complaints.
  • Transduction may also be used in the classification of news articles. For example, news articles on the war on terror, starting with articles about the terrorist attacks on September 11, 2001, through the war in Afghanistan, to news stories about the situation in today's Iraq, can be automatically identified using transduction.
  • the classification of organisms can change over time through evolution by creating new species of organisms and other species becoming extinct.
  • This and other principles of a classification schema or taxonomy can be dynamic, with classification concepts shifting or changing over time.
  • Fig. 18 shows an embodiment of the invention using transduction given drifting classification concepts.
  • Document set Di enters the system at time ti, as shown in step 1802.
  • a transductive classifier Ci is trained using labeled data and the unlabeled data accumulated so far, and in step 1806 the documents in set Dj are classified. If the manual mode is used, documents with a confidence level below a user supplied threshold as determined in step 1808 are presented to the user for manual review in step 1810.
  • a document with a confidence level below the chosen threshold triggers the creation of a new category that is added to the system, and the document is then assigned to the new category.
  • Documents with a confidence level above the chosen threshold are classified into the current categories 1 to N in steps 1820A-B. All documents that have been classified prior to time ti into the current categories are reclassified by the classifier Ci in step 1822, and all documents that are no longer classified into the previously assigned categories are moved to new categories in steps 1824 and 1826.
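The routing rule of Fig. 18 can be sketched as follows. The `scored_docs` interface (document identifier mapped to a best-category/confidence pair) is an assumption of this illustration, not the patent's data model: documents whose confidence clears the threshold go to an existing category, while the rest trigger creation of a new category.

```python
def route_documents(scored_docs, threshold=0.7, categories=None):
    # scored_docs: {doc_id: (best_category, confidence)}.
    # Above-threshold documents join their best existing category;
    # below-threshold documents each create and join a new category.
    categories = dict(categories or {})
    new_count = 0
    for doc, (best_cat, conf) in scored_docs.items():
        if conf >= threshold:
            categories.setdefault(best_cat, []).append(doc)
        else:
            new_count += 1
            categories.setdefault(f"new_{new_count}", []).append(doc)
    return categories
```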
  • a method for adapting to a shift in document content is presented in Fig. 19.
  • Document content may include, but is not limited to, graphical content, textual content, layout, numbering, etc.
  • Examples of shift may include temporal shift, style shift (where two or more people work on one or more documents), shift in the process applied, shift in layout, etc.
  • step 1900 at least one labeled seed document is received, as well as unlabeled documents and at least one predetermined cost factor.
  • the documents may include, but are not limited to, customer complaints, invoices, form documents, receipts, etc.
  • a transductive classifier is trained in step 1902 using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents.
  • step 1904 the unlabeled documents having a confidence level above a predefined threshold are classified into a plurality of categories using the classifier, and at least some of the categorized documents are reclassified in step 1906 into the categories using the classifier.
  • identifiers of the categorized documents are output in step 1908 to at least one of a user, another system, and another process.
  • the identifiers may be electronic copies of the document themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. Further, product changes may be linked with customer complaints, etc.
  • an unlabeled document having a confidence level below the predefined threshold may be moved into one or more new categories.
  • the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and using the trained classifier to classify the unlabeled documents.
  • a data point label prior probability for the seed document and unlabeled documents may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
  • a method for adapting a patent classification to a shift in document content is presented in Fig. 20.
  • step 2000 at least one labeled seed document is received, as well as unlabeled documents.
  • the unlabeled documents may include any types of documents, e.g. patent applications, legal filings, information disclosure forms, document amendments, etc.
  • the seed document(s) may include patent(s), patent application(s), etc.
  • a transductive classifier is trained in step 2002 using the at least one seed document and the unlabeled documents, and the unlabeled documents having a confidence level above a predefined threshold are classified into a plurality of existing categories using the classifier.
  • the classifier may be any type of classifier, e.g. a transductive classifier, etc.
  • the document classification technique may be any technique, e.g. a support vector machine process, a maximum entropy discrimination process, etc.
  • any inductive or transductive technique described above may be used.
  • the unlabeled documents having a confidence level below the predefined threshold are classified into at least one new category using the classifier, and at least some of the categorized documents are reclassified in step 2006 into the existing categories and the at least one new category using the classifier.
  • identifiers of the categorized documents are output in step 2008 to at least one of a user, another system, and another process.
  • the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the search query, and the documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and the trained classifier may be used to classify the documents.
  • a data point label prior probability for the search query and documents may be received, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
  • Yet another embodiment of the present invention accounts for document drift in the field of document separation.
  • One example of document separation involves the processing of mortgage documents.
  • Loan folders consisting of a sequence of different loan documents, e.g. loan applications, approvals, requests, amounts, etc. are scanned and the different documents within the sequence of images have to be determined before further processing.
  • the documents used are not static but can change over time. For example, tax forms used within a loan folder can change over time owing to legislation changes.
  • Document separation solves the problem of finding document or subdocument boundaries in a sequence of images.
  • Common examples that produce a sequence of images are digital scanners or Multi Functional Peripherals (MFPs).
  • transduction can be utilized in Document separation in order to handle the drift of documents and their boundaries over time.
  • Static separation systems like rule based systems or systems based on inductive learning solutions cannot adapt automatically to drifting separation concepts.
  • the performance of these static separation systems degrades over time whenever a drift occurs.
  • In order to keep the performance at its initial level, one either has to manually adapt the rules (in the case of a rule-based system) or has to manually label new documents and relearn the system (in the case of an inductive learning solution). Either way is expensive in time and cost.
  • Applying transduction to Document separation allows the development of a system that automatically adapts to the drift in the separation concepts.
  • a method for separating documents is presented in Fig. 21.
  • labeled data are received, and in step 2102 a sequence of unlabeled documents is received.
  • Such data and documents may include legal discovery documents, office actions, web page data, attorney-client correspondence, etc.
  • probabilistic classification rules are adapted using transduction based on the labeled data and the unlabeled documents, and in step 2106 weights used for document separation are updated according to the probabilistic classification rules.
  • locations of separations in the sequence of documents are determined, and in step 2110 indicators of the determined locations of the separations in the sequence are output to at least one of a user, another system, and another process.
  • the indicators may be electronic copies of the document themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
  • the documents are flagged with codes, the codes correlating to the indicators.
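Step 2108 above (determining separation locations in the page sequence) can be sketched as a threshold on per-page scores. The assumption here, which is this sketch's and not the patent's, is that the transductively adapted classification rules yield a "first page of a new document" probability per page; a boundary is placed wherever that probability clears the threshold.

```python
def separate(first_page_probs, threshold=0.5):
    # first_page_probs: per-page probability that the page starts a new
    # document. Returns the indices in the page sequence where a
    # separation is placed (the first page always starts a document).
    return [i for i, p in enumerate(first_page_probs) if i == 0 or p >= threshold]
```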
  • Fig. 22 shows an implementation of the classification method and apparatus of the present invention used in association with document separation.
  • Automatic document separation is used for reducing the manual effort involved in separating and identifying documents after digital scanning.
  • One such document separation method combines classification rules to automatically separate sequences of pages by using inference algorithms to deduce the most likely separation from all of the available information, using the classification methods described therein.
  • The classification method of transductive MED of the present invention is employed in document separation. More particularly, document pages 2200 are inserted into a digital scanner 2202 or MFP and are converted into a sequence of digital images 2204.
  • The document pages may be pages from any type of document.
  • The sequence of digital images is input at step 2206 to dynamically adapt probabilistic classification rules using transduction.
  • Step 2206 utilizes the sequence of images 2204 as unlabeled data together with labeled data 2208.
  • In step 2210 the weights in the probabilistic network used for automatic document separation are updated according to the dynamically adapted classification rules.
  • Step 2212 dynamically adapts the automatic insertion of separation images, automatically inserting separator sheet images into the image sequence such that the sequence of digitized pages 2214 is interleaved with images of separator sheets 2216.
  • The software-generated separator pages 2216 may also indicate the type of document that immediately follows or precedes the separator page 2216.
  • The system described here automatically adapts to drifting separation concepts of the documents that occur over time without suffering from a decline in separation accuracy, as would static systems such as rule based or inductive machine learning based solutions.
  • Examples of drifting separation or classification concepts in form processing applications are, as mentioned earlier, changes to documents owing to new legislation.
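As a toy illustration of the separator-sheet interleaving performed at step 2212, the sketch below inserts tag strings where separator images would go; the names, the tag format, and the boundary set are hypothetical stand-ins for the actual image operations.

```python
# Minimal sketch of step 2212: given the page image sequence and the
# boundary locations decided by the adapted classification rules,
# insert a software-generated separator (here just a string tag) in
# front of each page that starts a new document.

def insert_separators(pages, boundaries, separator="SEPARATOR"):
    out = []
    for i, page in enumerate(pages):
        if i in boundaries:
            out.append(separator)  # separator sheet image stand-in
        out.append(page)
    return out

seq = insert_separators(["p1", "p2", "p3", "p4"], {0, 2})
```

Here pages p1-p2 and p3-p4 form two documents, so a separator tag precedes p1 and p3.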
  • The system as shown in Fig. 22 may be modified to a system as shown in Fig. 23, where the pages 2300 are inserted into a digital scanner 2302 or MFP and converted into a sequence of digital images 2304.
  • The sequence of digital images is input at step 2306 to dynamically adapt probabilistic classification rules using transduction.
  • Step 2306 utilizes the sequence of images 2304 as unlabeled data and labeled data 2308.
  • Step 2310 updates weights in the probabilistic network used for automatic document separation according to the dynamically adapted classification rules employed.
  • Instead of inserting separator sheet images as described in Fig. 18, step 2312 dynamically adapts the automated insertion of separation information and flags the document images 2314 with a coded description.
  • The document page images can be input into an image processing database 2316, and the documents can be accessed by the software identifiers.
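The flagging variant of Fig. 23 can be pictured with a small sketch: each page image receives a coded description, and pages are later retrieved from the database by that code. The code format ("DOC-n") and the helper names are invented for illustration, not taken from the patent.

```python
# Toy model of step 2312 and database 2316: flag each page with a
# document code rather than inserting physical separator images, then
# look pages up by their code.

def flag_pages(pages, boundaries):
    """Assign each page a document code like 'DOC-1', 'DOC-2', ..."""
    flagged, doc_id = [], 0
    for i, page in enumerate(pages):
        if i in boundaries:
            doc_id += 1          # a new document starts at this page
        flagged.append((page, f"DOC-{doc_id}"))
    return flagged

def pages_for(flagged, code):
    """Retrieve all page images carrying a given document code."""
    return [p for p, c in flagged if c == code]

flags = flag_pages(["p1", "p2", "p3"], {0, 2})
```

The first two pages share the code DOC-1 and the third becomes DOC-2, so a later process can pull a whole document out of the database by its code alone.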
  • Yet another embodiment of the present invention is able to perform face recognition using transduction.
  • The use of transduction has many advantages, for example the need for only a relatively small number of training examples, the ability to use unlabeled examples in training, etc.
  • Transductive face recognition may be implemented for criminal detection.
  • The Department of Homeland Security must ensure that terrorists are not allowed onto commercial airliners.
  • Part of an airport's screening process may be to take a picture of each passenger at the airport security checkpoint and attempt to recognize that person.
  • The system could initially be trained using a small number of examples from the limited photographs available of possible terrorists. There may also be more unlabeled photographs of the same terrorist available in other law-enforcement databases that may also be used in training.
  • A transductive trainer would take advantage of not only the initially sparse data to create a functional face-recognition system but would also use unlabeled examples from other sources to increase performance. After processing the photograph taken at the airport security checkpoint, the transductive system would be able to recognize the person in question more accurately than a comparable inductive system.
  • A method for face recognition is presented in Fig. 24.
  • In step 2400 at least one labeled seed image of a face is received, the seed image having a known confidence level.
  • The at least one seed image may have a label indicative of whether the image is included in a designated category. Additionally, in step 2400 unlabeled images are received.
  • In step 2402 a transductive classifier is trained through iterative calculation using the at least one predetermined cost factor, the at least one seed image, and the unlabeled images, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value.
  • In step 2404 confidence scores are stored for the unlabeled images.
  • In step 2406 identifiers of the unlabeled images having the highest confidence scores are output to at least one of a user, another system, and another process.
  • The identifiers may be electronic copies of the images themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the images, etc.
  • Confidence scores may be stored after each of the iterations, wherein an identifier of the unlabeled images having the highest confidence score after each iteration is output. Additionally, a data point label prior probability for the labeled and unlabeled images may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
  • A third unlabeled image of a face, e.g. from the above airport security example, may be received, the third unlabeled image may be compared to at least some of the images having the highest confidence scores, and an identifier of the third unlabeled image may be output if a confidence that the face in the third unlabeled image is the same as the face in the seed image is sufficiently high.
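A possible shape for the final comparison step is sketched below. This is purely illustrative: the toy feature vectors, the inverse-distance similarity, and the threshold value are assumptions standing in for a real face-recognition model and its calibrated confidences.

```python
# Sketch of the probe-matching step of Fig. 24: compare the checkpoint
# photograph (probe) against the gallery of images that received the
# highest confidence scores, and output an identifier only when the
# best similarity clears a confidence threshold.

def similarity(a, b):
    """Inverse of squared Euclidean distance, as a toy match score."""
    d = sum((x - y) ** 2 for x, y in zip(a, b))
    return 1.0 / (1.0 + d)

def match_probe(probe, gallery, threshold=0.5):
    """gallery maps identifier -> feature vector. Returns the best
    matching identifier, or None when confidence is insufficient."""
    best_id = max(gallery, key=lambda k: similarity(probe, gallery[k]))
    if similarity(probe, gallery[best_id]) >= threshold:
        return best_id
    return None

gallery = {"subject-A": [1.0, 0.0], "subject-B": [0.0, 1.0]}
hit = match_probe([0.9, 0.1], gallery)    # close to subject-A
miss = match_probe([5.0, 5.0], gallery)   # far from everyone
```

A probe near a gallery vector returns that subject's identifier, while a distant probe yields no identifier at all, mirroring the "output only if the confidence is sufficiently high" condition.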
  • Yet another embodiment of the present invention enables a user to improve their search results by providing feedback to the document discovery system.
  • An embodiment of the present invention enables the user to review the suggested results from the search engine and inform the engine of the relevance of one or more of the retrieved results, e.g. "close, but not exactly what I wanted," "definitely not," etc. As the user provides feedback to the engine, better results are prioritized for the user to review.
  • A method for document searching is presented in Fig. 25.
  • In step 2500 a search query is received.
  • The search query may be any type of query, including case-sensitive queries, Boolean queries, approximate match queries, structured queries, etc.
  • In step 2502 documents based on the search query are retrieved.
  • In step 2504 the documents are output, and in step 2506 user-entered labels for at least some of the documents are received, the labels being indicative of a relevance of the document to the search query. For example, the user may indicate whether a particular result returned from the query is relevant or not.
  • In step 2508 a classifier is trained based on the search query and the user-entered labels.
  • In step 2510 a document classification technique is performed on the documents using the classifier for reclassifying the documents.
  • In step 2512 identifiers of at least some of the documents are output based on the classification thereof.
  • The identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
  • The reclassified documents may also be output, with those documents having the highest confidence being output first.
  • The document classification technique may include any type of process, e.g. a transductive process, a support vector machine process, a maximum entropy discrimination process, etc. Any inductive or transductive technique described above may be used.
  • The classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the search query, and the documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and the trained classifier may be used to classify the documents.
  • A data point label prior probability for the search query and documents may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
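The feedback loop of Fig. 25 can be illustrated with a classical relevance-feedback update. The Rocchio-style query adjustment below is a deliberately simple stand-in for retraining the classifier on the query plus the user-entered labels; the vectors, weights, and function names are assumptions for demonstration only.

```python
# Relevance-feedback sketch: documents the user labels relevant pull
# the query vector toward them, negatively labeled documents push it
# away, and the documents are then re-ranked against the updated query.

def rocchio(query, relevant, irrelevant, beta=0.75, gamma=0.25):
    """Rocchio-style update of a query vector from labeled results."""
    n = len(query)
    def mean(vecs):
        return [sum(v[i] for v in vecs) / len(vecs) if vecs else 0.0
                for i in range(n)]
    r, s = mean(relevant), mean(irrelevant)
    return [query[i] + beta * r[i] - gamma * s[i] for i in range(n)]

def rank(docs, query):
    """Indices of docs sorted by decreasing dot-product score."""
    score = lambda d: sum(x * y for x, y in zip(d, query))
    return sorted(range(len(docs)), key=lambda i: -score(docs[i]))

docs = [[1.0, 0.0], [0.0, 1.0], [0.8, 0.2]]
# The user marks docs[1] relevant and docs[0] irrelevant.
q = rocchio([0.5, 0.5], relevant=[docs[1]], irrelevant=[docs[0]])
order = rank(docs, q)
```

After the update, the document the user liked ranks first, its nearest neighbor second, and the rejected document last, matching the "better results are prioritized" behavior described above.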
  • A further embodiment of the present invention may be used for improving ICR/OCR and speech recognition.
  • Speech recognition programs and systems require the operator to repeat a number of words to train the system.
  • The present invention can initially monitor the voice of a user for a preset period of time to gather "unclassified" content, e.g. by listening in to phone conversations. As a result, when the user starts training the recognition system, the system applies transductive learning, leveraging the monitored speech to assist in building a memory model.
  • A method for verifying an association of an invoice with an entity is presented in Fig. 26.
  • In step 2600 a classifier is trained based on an invoice format associated with a first entity.
  • The invoice format may refer to either or both of the physical layout of markings on the invoice and characteristics such as keywords, invoice number, client name, etc. on the invoice.
  • In step 2602 a plurality of invoices labeled as being associated with at least one of the first entity and other entities are accessed, and in step 2604 a document classification technique is performed on the invoices using the classifier.
  • Any inductive or transductive technique described above may be used as the document classification technique.
  • The document classification technique may include a transductive process, a support vector machine process, a maximum entropy discrimination process, etc.
  • In step 2606 an identifier of at least one of the invoices having a high probability of not being associated with the first entity is output.
  • The classifier may be any type of classifier, for example a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the invoices, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and the trained classifier may be used to classify the invoices.
  • A data point label prior probability for the seed document and invoices may be received, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
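The verification flow of Fig. 26 can be pictured with a crude sketch. The keyword-overlap "format model" below is an invented stand-in for a trained classifier; the entity names, keywords, and cutoff are illustrative assumptions.

```python
# Sketch of the Fig. 26 check: score each labeled invoice against the
# first entity's expected invoice format, and output identifiers of
# invoices with a high probability of NOT matching that format.

def format_score(invoice_text, format_keywords):
    """Fraction of the expected format keywords present in the text."""
    hits = sum(1 for k in format_keywords if k in invoice_text)
    return hits / len(format_keywords)

def suspicious_invoices(invoices, format_keywords, cutoff=0.5):
    """Return ids of invoices unlikely to match the entity's format."""
    return [iid for iid, text in invoices.items()
            if format_score(text, format_keywords) < cutoff]

keywords = ["ACME Corp", "Net 30", "Invoice No"]
invoices = {
    "inv-1": "ACME Corp Invoice No 17 terms Net 30",
    "inv-2": "Globex payment slip ref 9",
}
flagged = suspicious_invoices(invoices, keywords)
```

Only the invoice that shares none of the expected format characteristics is flagged, mirroring the "high probability of not being associated with the first entity" output.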
  • A transductive classifier is trained through iterative calculation using at least one cost factor, the labeled data points, and the unlabeled data points as training examples. For each iteration of the calculations, the unlabeled data point cost factor is adjusted as a function of an expected label value. Additionally, for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
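One concrete way to adjust the unlabeled cost factor as a function of the expected label is to scale a base cost by the magnitude of the expected label, so that confidently labeled points carry full training weight and uncertain ones carry little. This specific scaling is one illustrative option consistent with the "scaled cost value" substep described later, not a quotation of the patent's formula.

```python
# Cost factor proportional to |<y>|: an expected label near +/-1 means
# the point is trusted as a training example; an expected label near 0
# means it contributes almost nothing.

def scaled_cost(base_cost, expected_label):
    """Scale the base cost by the absolute expected label value."""
    return base_cost * abs(expected_label)

# Expected labels range over [-1, 1]; a neutral point (0.0) gets zero cost.
costs = [scaled_cost(1.0, y) for y in (-1.0, -0.2, 0.0, 0.9)]
```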
  • the workstation may have resident thereon an operating system such as the Microsoft Windows® Operating System (OS), a MAC OS, or UNIX operating system. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned.
  • a preferred embodiment may be written using JAVA, XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology.
  • Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.
  • One embodiment uses transductive learning to overcome the problem of very sparse data sets which plague inductive face-recognition systems.
  • This aspect of transductive learning is not limited to this application and may be used to solve other machine-learning problems that arise from sparse data.

Abstract

A system, method, data processing apparatus, and article of manufacture are provided for classifying data. Data classification methods using machine learning techniques are also disclosed.

Description

METHODS AND SYSTEMS FOR TRANSDUCTIVE DATA
CLASSIFICATION AND DATA CLASSIFICATION METHODS USING
MACHINE LEARNING TECHNIQUES
FIELD OF THE INVENTION
The present invention relates generally to methods and apparatus for data classification. More particularly, the present invention provides improved transductive machine learning methods. The present invention also relates to novel applications using machine learning techniques.
BACKGROUND
How to handle data has gained in importance in the information age and more recently with the explosion of electronic data in all walks of life including, among others, scanned documents, web material, search engine data, text data, images, audio data files, etc.
One area just starting to be explored is the non-manual classification of data. In many classification methods the machine or computer must learn based upon manually input and created rule sets and/or manually created training examples. In machine learning where training examples are used, the number of learning examples is typically small compared to the number of parameters that have to be estimated, i.e. the number of solutions that satisfy the constraints given by the training examples is large. A challenge of machine learning is to find a solution that generalizes well despite the lack of constraints. There is thus a need for overcoming these and/or other issues associated with the prior art.
What is further needed are practical applications for machine learning techniques of all types.
SUMMARY
In a computer-based system, a method for classification of data according to one embodiment of the present invention includes receiving labeled data points, each of the labeled data points having at least one label indicating whether the data point is a training example for data points for being included in a designated category or a training example for data points being excluded from a designated category; receiving unlabeled data points; receiving at least one predetermined cost factor of the labeled data points and unlabeled data points; training a transductive classifier using Maximum Entropy Discrimination (MED) through iterative calculation using the at least one cost factor and the labeled data points and the unlabeled data points as training examples, wherein for each iteration of the calculations the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point label prior probability is adjusted according to an estimate of a data point class membership probability; applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points; and outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process.
A method for classification of data according to another embodiment of the present invention includes providing computer executable program code to be deployed to and executed on a computer system. The program code comprises instructions for: accessing stored labeled data points in a memory of a computer, each of the labeled data points having at least one label indicating whether the data point is a training example for data points for being included in a designated category or a training example for data points being excluded from a designated category; accessing unlabeled data points from a memory of a computer; accessing at least one predetermined cost factor of the labeled data points and unlabeled data points from a memory of a computer; training a Maximum Entropy Discrimination (MED) transductive classifier through iterative calculation using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples, wherein for each iteration of the calculation the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point prior probability is adjusted according to an estimate of a data point class membership probability; applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points; and outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process.
A data processing apparatus according to another embodiment of the present invention includes: at least one memory for storing: (i) labeled data points wherein each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; (ii) unlabeled data points; and (iii) at least one predetermined cost factor of the labeled data points and unlabeled data points; and a transductive classifier trainer to iteratively teach the transductive classifier using transductive Maximum Entropy Discrimination (MED) using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples wherein at each iteration of the MED calculation the cost factor of the unlabeled data point is adjusted as a function of an expected label value and a data point label prior probability is adjusted according to an estimate of a data point class membership probability; wherein a classifier trained by the transductive classifier trainer is used to classify at least one of the unlabeled data points, the labeled data points, and input data points; wherein a classification of the classified data points, or derivative thereof, is output to at least one of a user, another system, and another process.
An article of manufacture according to another embodiment of the present invention comprises a program storage medium readable by a computer, the medium tangibly embodying one or more programs of instructions executable by a computer to perform a method of data classification comprising: receiving labeled data points, each of the labeled data points having at least one label indicating whether the data point is a training example for data points for being included in a designated category or a training example for data points being excluded from a designated category; receiving unlabeled data points; receiving at least one predetermined cost factor of the labeled data points and unlabeled data points; training a transductive classifier with iterative Maximum Entropy Discrimination (MED) calculation using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples wherein at each iteration of the MED calculation the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point prior probability is adjusted according to an estimate of a data point class membership probability; applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points; and outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process.
In a computer-based system, a method for classification of unlabeled data according to another embodiment of the present invention includes receiving labeled data points, each of the labeled data points having at least one label indicating whether the data point is a training example for data points for being included in a designated category or a training example for data points being excluded from a designated category; receiving labeled and unlabeled data points; receiving prior label probability information of labeled data points and unlabeled data points; receiving at least one predetermined cost factor of the labeled data points and unlabeled data points; determining the expected labels for each labeled and unlabeled data point according to the label prior probability of the data point; and repeating the following substeps until substantial convergence of data values:
• generating a scaled cost value for each unlabeled data point proportional to the absolute value of the data point's expected label;
• training a classifier by determining the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included training and excluded training examples, utilizing the labeled as well as the unlabeled data as training examples according to their expected label;
• determining the classification scores of the labeled and unlabeled data points using the trained classifier;
• calibrating the output of the trained classifier to class membership probability;
• updating the label prior probabilities of the unlabeled data points according to the determined class membership probabilities;
• determining the label and margin probability distributions using Maximum Entropy Discrimination (MED) using the updated label prior probabilities and the previously determined classification scores;
• computing new expected labels using the previously determined label probability distribution; and
• updating expected labels for each data point by interpolating the new expected labels with the expected label of previous iteration.
A classification of the input data points, or derivative thereof, is output to at least one of a user, another system, and another process.
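The substeps above can be condensed into a single loop. The sketch below is a deliberately simplified stand-in: a one-dimensional threshold takes the place of the MED-trained decision function, and a logistic squash takes the place of the calibration and MED label-distribution steps. Only the overall structure (scaled costs, training, score calibration, expected-label interpolation) mirrors the method; every name and number is an assumption.

```python
import math

def iterate_expected_labels(labeled, unlabeled, rounds=10, mix=0.5):
    """labeled: list of (x, y) pairs with y in {-1, +1}; unlabeled:
    list of x. Returns expected labels <y> in [-1, 1] for the
    unlabeled points."""
    expected = [0.0] * len(unlabeled)   # neutral label priors to start
    for _ in range(rounds):
        # scaled cost per unlabeled point, proportional to |<y>|
        cost = [abs(e) for e in expected]
        # "train": a 1-D threshold from labeled points plus unlabeled
        # points weighted by their scaled costs (stand-in for MED)
        num = sum(x for x, _ in labeled) + \
              sum(x * c for x, c in zip(unlabeled, cost))
        den = len(labeled) + sum(cost)
        threshold = num / den
        new = []
        for x, e in zip(unlabeled, expected):
            # classification score -> class membership probability
            p = 1.0 / (1.0 + math.exp(-(x - threshold)))
            # new expected label from p, interpolated with the
            # previous iteration's expected label
            new.append((1.0 - mix) * e + mix * (2.0 * p - 1.0))
        expected = new
    return expected

labels = iterate_expected_labels([(0.0, -1), (4.0, 1)], [0.5, 3.5])
```

With two labeled anchors at 0 and 4, the unlabeled points at 0.5 and 3.5 drift toward confident negative and positive expected labels respectively over the iterations.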
A method for classifying documents according to another embodiment of the present invention includes receiving at least one labeled seed document having a known confidence level of label assignment; receiving unlabeled documents; receiving at least one predetermined cost factor; training a transductive classifier through iterative calculation using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value; after at least some of the iterations, storing confidence scores for the unlabeled documents; and outputting identifiers of the unlabeled documents having the highest confidence scores to at least one of a user, another system, and another process.
A method for analyzing documents associated with legal discovery according to another embodiment of the present invention includes receiving documents associated with a legal matter; performing a document classification technique on the documents; and outputting identifiers of at least some of the documents based on the classification thereof.
A method for cleaning up data according to another embodiment of the present invention includes receiving a plurality of labeled data items; selecting subsets of the data items for each of a plurality of categories; setting an uncertainty for the data items in each subset to about zero; setting an uncertainty for the data items not in the subsets to a predefined value that is not about zero; training a transductive classifier through iterative calculation using the uncertainties, the data items in the subsets, and the data items not in the subsets as training examples; applying the trained classifier to each of the labeled data items to classify each of the data items; and outputting a classification of the input data items, or derivative thereof, to at least one of a user, another system, and another process.
A method for verifying an association of an invoice with an entity according to another embodiment of the present invention includes training a classifier based on an invoice format associated with a first entity; accessing a plurality of invoices labeled as being associated with at least one of the first entity and other entities; performing a document classification technique on the invoices using the classifier; and outputting an identifier of at least one of the invoices having a high probability of not being associated with the first entity.
A method for managing medical records according to another embodiment of the present invention includes training a classifier based on a medical diagnosis; accessing a plurality of medical records; performing a document classification technique on the medical records using the classifier; and outputting an identifier of at least one of the medical records having a low probability of being associated with the medical diagnosis.
A method for face recognition according to another embodiment of the present invention includes receiving at least one labeled seed image of a face, the seed image having a known confidence level; receiving unlabeled images; receiving at least one predetermined cost factor; training a transductive classifier through iterative calculation using the at least one predetermined cost factor, the at least one seed image, and the unlabeled images, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value; after at least some of the iterations, storing confidence scores for the unlabeled images; and outputting identifiers of the unlabeled images having the highest confidence scores to at least one of a user, another system, and another process.
A method for analyzing prior art documents according to another embodiment of the present invention includes training a classifier based on a search query; accessing a plurality of prior art documents; performing a document classification technique on at least some of the prior art documents using the classifier; and outputting identifiers of at least some of the prior art documents based on the classification thereof.
A method for adapting a patent classification to a shift in document content according to another embodiment of the present invention includes receiving at least one labeled seed document; receiving unlabeled documents; training a transductive classifier using the at least one seed document and the unlabeled documents; classifying the unlabeled documents having a confidence level above a predefined threshold into a plurality of existing categories using the classifier; classifying the unlabeled documents having a confidence level below the predefined threshold into at least one new category using the classifier; reclassifying at least some of the categorized documents into the existing categories and the at least one new category using the classifier; and outputting identifiers of the categorized documents to at least one of a user, another system, and another process.
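The threshold-based routing in this embodiment, where confident documents go to existing categories and the rest seed a new category, can be sketched as follows. The Jaccard-overlap confidence measure, the category prototypes, and the "NEW" label are all invented for illustration.

```python
# Sketch of adapting a classification scheme to content shift:
# documents whose best-category confidence clears a threshold join
# that existing category; low-confidence documents are routed to a
# new category, after which everything could be reclassified.

def classify_with_new_category(docs, categories, threshold=0.5):
    """docs: {id: set of terms}; categories: {name: set of terms}.
    Returns {id: category name}, with 'NEW' for low-confidence docs."""
    assignment = {}
    for did, terms in docs.items():
        scored = {name: len(terms & proto) / len(terms | proto)
                  for name, proto in categories.items()}
        best = max(scored, key=scored.get)
        assignment[did] = best if scored[best] >= threshold else "NEW"
    return assignment

cats = {"optics": {"lens", "light", "focus"},
        "audio": {"sound", "speaker", "signal"}}
docs = {"d1": {"lens", "light", "focus"},
        "d2": {"quantum", "qubit", "gate"}}
routed = classify_with_new_category(docs, cats)
```

The document matching an existing prototype lands in that category, while the drifted document, which overlaps nothing, seeds the new category.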
A method for matching documents to claims according to another embodiment of the present invention includes training a classifier based on at least one claim of a patent or patent application; accessing a plurality of documents; performing a document classification technique on at least some of the documents using the classifier; and outputting identifiers of at least some of the documents based on the classification thereof.
A method for classifying a patent or patent application according to another embodiment of the present invention includes training a classifier based on a plurality of documents known to be in a particular patent classification; receiving at least a portion of a patent or patent application; performing a document classification technique on the at least the portion of the patent or patent application using the classifier; and outputting a classification of the patent or patent application, wherein the document classification technique is a yes/no classification technique.
A method for classifying a patent or patent application according to another embodiment of the present invention includes performing a document classification technique on at least the portion of a patent or patent application using a classifier that was trained based on at least one document associated with a particular patent classification, wherein the document classification technique is a yes/no classification technique; and outputting a classification of the patent or patent application.
A method for adapting to a shift in document content according to another embodiment of the present invention includes receiving at least one labeled seed document; receiving unlabeled documents; receiving at least one predetermined cost factor; training a transductive classifier using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents; classifying the unlabeled documents having a confidence level above a predefined threshold into a plurality of categories using the classifier; reclassifying at least some of the categorized documents into the categories using the classifier; and outputting identifiers of the categorized documents to at least one of a user, another system, and another process.
A method for separating documents according to another embodiment of the present invention includes receiving labeled data; receiving a sequence of unlabeled documents; adapting probabilistic classification rules using transduction based on the labeled data and the unlabeled documents; updating weights used for document separation according to the probabilistic classification rules; determining locations of separations in the sequence of documents; outputting indicators of the determined locations of the separations in the sequence to at least one of a user, another system, and another process; and flagging the documents with codes, the codes correlating to the indicators.
A method for document searching according to another embodiment of the present invention includes receiving a search query; retrieving documents based on the search query; outputting the documents; receiving user-entered labels for at least some of the documents, the labels being indicative of a relevance of the document to the search query; training a classifier based on the search query and the user-entered labels; performing a document classification technique on the documents using the classifier for reclassifying the documents; and outputting identifiers of at least some of the documents based on the classification thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig! 1 is a depiction of a chart plotting the expected label as a function of the classification score as obtained by employing MED discriminative learning applied to label induction.
Fig: 2 is a depiction of a series of plots showing calculated iterations of the decision function obtained by transductive MED learning.
Fig. 3 is depiction of a series of plots showing calculated iterations of the decision function obtained by the improved transductive MED learning of one embodiment of the present invention.
Fig. 4 illustrates a control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using a scaled cost factor.
Fig. '5 illustrates a control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using user defined prior probability information.
Fig. 6 illustrates a detailed control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using Maximum Entropy Discrimination with .scaled cost factors and prior probability information.
Fig. 7 is a network diagram illustrating a network architecture in which the various embodiments described herein may be implemented.
Fig. 8 is a system diagram of a representative hardware environment associated with a user device.
Fig. 9 illustrates a block diagram representation of the apparatus of one embodiment of the present invention.
Fig. 10 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 11 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 12 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 13 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 14 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 15 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 16 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 17 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 18 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 19 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 20 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 21 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 22 illustrates a control flow diagram showing the method of one embodiment of the present invention applied to a first document separating system.
Fig. 23 illustrates a control flow diagram showing the method of one embodiment of the present invention applied to a second separating system.
Fig. 24 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 25 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 26 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 27 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 28 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
Fig. 29 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
DETAILED DESCRIPTION
The following description is the best mode presently contemplated for carrying out the present invention. This description is made for the purpose of illustrating the general principles of the present invention and is not meant to limit the inventive concepts claimed herein. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations.
Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and as defined in dictionaries, treatises, etc.
Text Classification
The interest in and need for classification of textual data have been particularly strong, and several methods of classification have been employed. A discussion of classification methods for textual data follows:
To increase their utility and intelligence, machines, such as computers for example, are called upon to classify (or recognize) objects to an ever increasing extent. For example, computers may use optical character recognition to classify handwritten or scanned numbers and letters, pattern recognition to classify an image, such as a face, a fingerprint, a fighter plane, etc., or speech recognition to classify a sound, a voice, etc.
Machines have also been called upon to classify textual information objects, such as a textual computer file or document for example. The applications for text classification are diverse and important. For example, text classification may be used to organize textual information objects into a hierarchy of predetermined classes or categories for example. In this way, finding (or navigating to) textual information objects related to a particular subject matter is simplified. Text classification may be used to route appropriate textual information objects to appropriate people or locations. In this way, an information service can route textual information objects covering diverse subject matters (e.g., business, sports, the stock market, football, a particular company, a particular football team) to people having diverse interests. Text classification may be used to filter textual information objects so that a person is not annoyed by unwanted textual content (such as unwanted and unsolicited e-mail, also referred to as junk e-mail, or "spam"). As can be appreciated from these few examples, there are many exciting and important applications for text classification.
Rule Based Classification
In some instances, textual content must be classified with absolute certainty, based on certain accepted logic. A rule-based system may be used to effect such types of classification. Basically, rule-based systems use production rules of the form:
IF condition, THEN fact.
The conditions may include whether the textual information includes certain words or phrases, has a certain syntax, or has certain attributes. For example, if the textual content has the word "close", the phrase "nasdaq" and a number, then it is classified as "stock market" text.
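The production rule above can be sketched as a small function. This is a minimal illustration only; the rule text, the category name, and the exact matching conditions are assumptions for the sake of the example, and a real rule-based system would chain many such rules:

```python
import re

def classify_rule_based(text):
    """Toy production rule: IF the text contains the word 'close',
    the word 'nasdaq', and a number, THEN classify as 'stock market'."""
    t = text.lower()
    if "close" in t and "nasdaq" in t and re.search(r"\d", t):
        return "stock market"
    return "unclassified"

print(classify_rule_based("Nasdaq composite close: 4,567 points"))  # stock market
print(classify_rule_based("The game ended in a draw"))              # unclassified
```

Because the logic is static, any text that satisfies the conditions is classified with absolute certainty, which is the defining property (and limitation) of rule-based classification.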
Over the last decade or so, other types of classifiers have been used increasingly. Although these classifiers do not use static, predefined logic, as do rule-based classifiers, they have outperformed rule-based classifiers in many applications. Such classifiers typically include a learning element and a performance element. Such classifiers may include neural networks, Bayesian networks, and support vector machines. Although each of these classifiers is known, each is briefly introduced below for the reader's convenience.
Classifiers Having Learning and Performance Elements
As just mentioned at the end of the previous section, classifiers having learning and performance elements outperform rule-based classifiers in many applications. To reiterate, these classifiers may include neural networks, Bayesian networks, and support vector machines.
Neural Networks
A neural network is basically a multilayered, hierarchical arrangement of identical processing elements, also referred to as neurons. Each neuron can have one or more inputs but only one output. Each neuron input is weighted by a coefficient. The output of a neuron is typically a function of the sum of its weighted inputs and a bias value. This function, also referred to as an activation function, is typically a sigmoid function. That is, the activation function may be S-shaped, monotonically increasing and asymptotically approaching fixed values (e.g., +1, 0, -1) as its input(s) respectively approaches positive or negative infinity. The sigmoid function and the individual neural weight and bias values determine the response or "excitability" of the neuron to input signals.
In the hierarchical arrangement of neurons, the output of a neuron in one layer may be distributed as an input to one or more neurons in a next layer. A typical neural network may include three (3) distinct layers; namely, an input layer, an intermediate neuron layer, and an output neuron layer. Note that the nodes of the input layer are not neurons. Rather, the nodes of the input layer have only one input and basically provide the input, unprocessed, to the inputs of the next layer. If, for example, the neural network were to be used for recognizing a numerical digit character in a 20 by 15 pixel array, the input layer could have 300 nodes (i.e., one for each pixel of the input) and the output layer could have 10 neurons (i.e., one for each of the ten digits).
The use of neural networks generally involves two (2) successive steps. First, the neural network is initialized and trained on known inputs having known output values (or classifications). Once the neural network is trained, it can then be used to classify unknown inputs. The neural network may be initialized by setting the weights and biases of the neurons to random values, typically generated from a Gaussian distribution. The neural network is then trained using a succession of inputs having known outputs (or classes). As the training inputs are fed to the neural network, the values of the neural weights and biases are adjusted (e.g., in accordance with the known back-propagation technique) such that the output of the neural network of each individual training pattern approaches or matches the known output. Basically, a gradient descent in weight space is used to minimize the output error. In this way, learning using successive training inputs converges towards a locally optimal solution for the weights and biases. That is, the weights and biases are adjusted to minimize an error.
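The initialization and gradient-descent training just described can be sketched for a single sigmoid neuron. The learning rate, epoch count, and the AND-gate training data are illustrative choices, not taken from the source:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, epochs=5000, lr=0.5, seed=0):
    """Train one sigmoid neuron (weights + bias) by gradient descent on
    squared output error; weights start from random Gaussian values."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    w = [rng.gauss(0, 1) for _ in range(n)]
    b = rng.gauss(0, 1)
    for _ in range(epochs):
        for x, target in samples:
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # gradient of 0.5*(out - target)^2 w.r.t. the pre-activation
            delta = (out - target) * out * (1 - out)
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

# Learn logical AND, a linearly separable toy problem, with one neuron
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data)
for x, t in data:
    print(x, round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)))
```

After training, the rounded neuron outputs match the known target classes, illustrating convergence towards a locally optimal solution for the weights and bias.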
In practice, the system is not typically trained to the point where it converges to an optimal solution. Otherwise, the system would be "over trained" such that it would be too specialized to the training data and might not be good at classifying inputs which differ, in some way, from those in the training set. Thus, at various times during its training, the system is tested on a set of validation data. Training is halted when the system's performance on the validation set no longer improves.
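The halting criterion described above ("stop when validation performance no longer improves") can be sketched as a small helper. The patience parameter and the error sequence are illustrative assumptions, not part of the source:

```python
def early_stop_index(val_errors, patience=3):
    """Return the epoch whose weights should be kept: the last epoch at
    which validation error improved, stopping once `patience` successive
    epochs fail to improve (to avoid over-training)."""
    best, best_i, waited = float("inf"), 0, 0
    for i, err in enumerate(val_errors):
        if err < best:
            best, best_i, waited = err, i, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_i

# Validation error improves until epoch 2, then degrades: halt and keep epoch 2
print(early_stop_index([0.9, 0.5, 0.3, 0.31, 0.32, 0.4]))  # 2
```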
Once training is complete, the neural network can be used to classify unknown inputs in accordance with the weights and biases determined during training. If the neural network can classify the unknown input with confidence, one of the outputs of the neurons in the output layer will be much higher than the others.
Bayesian Networks
Typically, Bayesian networks use hypotheses as intermediaries between data (e.g., input feature vectors) and predictions (e.g., classifications). The probability of each hypothesis, given the data ("P(hypo|data)"), may be estimated. A prediction is made from the hypotheses using posterior probabilities of the hypotheses to weight the individual predictions of each of the hypotheses. The probability of a prediction X given data D may be expressed as:

P(X | D) = Σ_i P(X | H_i) P(H_i | D)
where H_i is the i-th hypothesis. A most probable hypothesis H_i that maximizes the probability of H_i given D (P(H_i | D)) is referred to as a maximum a posteriori hypothesis (or "H_MAP") and may be expressed as follows:
P(X | D) ≈ P(X | H_MAP)
Using Bayes' rule, the probability of a hypothesis H_i given data D may be expressed as:
P(H_i | D) = P(D | H_i) P(H_i) / P(D)
The probability of the data D remains fixed. Therefore, to find H_MAP, the numerator must be maximized.
The first term of the numerator represents the probability that the data would have been observed given the hypothesis i. The second term represents the prior probability assigned to the given hypothesis i.
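Finding H_MAP by maximizing the numerator P(D | H_i) P(H_i) can be sketched directly; the three hypotheses and their likelihood/prior values below are made-up numbers for illustration only:

```python
def map_hypothesis(hypotheses):
    """Pick the hypothesis maximizing P(D|H) * P(H); P(D) is a constant
    denominator and can be ignored when comparing hypotheses."""
    return max(hypotheses, key=lambda h: hypotheses[h][0] * hypotheses[h][1])

# Each entry is (likelihood P(D|H), prior P(H)) -- illustrative values
hyps = {"H1": (0.8, 0.1), "H2": (0.3, 0.6), "H3": (0.5, 0.3)}
print(map_hypothesis(hyps))  # H2, since 0.3 * 0.6 = 0.18 beats 0.08 and 0.15
```

Note that H1 has the highest likelihood, yet H2 wins because the prior term weights the comparison, which is exactly the role of the second term in the numerator.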
A Bayesian network includes variables and directed edges between the variables, thereby defining a directed acyclic graph (or "DAG"). Each variable can assume any of a finite number of mutually exclusive states. For each variable A, having parent variables B_1, . . ., B_n, there is an attached probability table P(A | B_1, . . ., B_n). The structure of the Bayesian network encodes the assumption that each variable is conditionally independent of its non-descendants, given its parent variables.
Assuming that the structure of the Bayesian network is known and the variables are observable, only the set of conditional probability tables need be learned. These tables can be estimated directly using statistics from a set of learning examples. If the structure is known but some variables are hidden, learning is analogous to neural network learning discussed above.
An example of a simple Bayesian network is introduced below. A variable "MML" may represent a "moisture of my lawn" and may have states "wet" and "dry". The MML variable may have "rain" and "my sprinkler on" parent variables each having "Yes" and "No" states. Another variable, "MNL" may represent a "moisture of my neighbor's lawn" and may have states "wet" and "dry". The MNL variable may share the "rain" parent variable. In this example, a prediction may be whether my lawn is "wet" or "dry". This prediction may depend on the hypotheses (i) if it rains, my lawn will be wet with probability x_1 and (ii) if my sprinkler was on, my lawn will be wet with probability x_2. The probability that it has rained or that my sprinkler was on may depend on other variables. For example, if my neighbor's lawn is wet and they don't have a sprinkler, it is more likely that it has rained.
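The inference step in the lawn example, "if my neighbor's lawn is wet, it is more likely that it has rained", can be sketched by enumeration over the joint distribution. All probability values below are made-up illustrative numbers, not taken from the source:

```python
from itertools import product

# Hypothetical priors and conditional probability tables for the lawn example:
# Rain and Sprinkler are root variables; MNL (neighbor's lawn) depends on Rain.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
P_mnl_wet = {True: 0.9, False: 0.1}  # P(MNL = wet | Rain)

def p_rain_given_mnl_wet():
    """P(Rain | MNL = wet) by summing the joint distribution over all
    states of the remaining variables (enumeration inference)."""
    num = den = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        joint = P_rain[rain] * P_sprinkler[sprinkler] * P_mnl_wet[rain]
        den += joint
        if rain:
            num += joint
    return num / den

print(round(p_rain_given_mnl_wet(), 3))  # 0.692
```

With these assumed numbers, observing the neighbor's wet lawn raises the probability of rain from the prior of 0.2 to about 0.69, matching the qualitative reasoning in the text.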
As discussed above, the conditional probability tables in Bayesian networks may be trained, as was the case with neural networks. Advantageously, by allowing prior knowledge to be provided for, the learning process may be shortened. Unfortunately, however, prior probabilities for the conditional probabilities are usually unknown, in which case a uniform prior is used.
One embodiment of the present invention may perform at least one (1) of two (2) basic functions, namely generating parameters for a classifier, and classifying objects, such as textual information objects.
Basically, parameters are generated for a classifier based on a set of training examples. A set of feature vectors may be generated from a set of training examples. The features of the set of feature vectors may be reduced. The parameters to be generated may include a defined monotonic (e.g., sigmoid) function and a weight vector. The weight vector may be determined by means of SVM training (or by another, known, technique). The monotonic (e.g., sigmoid) function may be defined by means of an optimization method. The text classifier may include a weight vector and a defined monotonic (e.g., sigmoid) function. Basically, the output of the text classifier of the present invention may be expressed as:
O_c = 1 / (1 + e^(A (w_c · x) + B))     (2)
where:
O_c = the classification output for category c;
w_c = the weight vector parameter associated with category c;
x = a (reduced) feature vector based on the unknown textual information object; and
A and B are adjustable parameters of a monotonic (e.g., sigmoid) function.
The calculation of the output from expression (2) is quicker than the calculation of the output from expression (1).
Depending on the form of the object to be classified, the classifier may (i) convert a textual information object to a feature vector, and (ii) reduce the feature vector to a reduced feature vector having less elements.
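Evaluating the classifier output of expression (2) above can be sketched as follows. The weight vector, feature vector, and the sigmoid parameters A and B are placeholder values; in practice they would come from SVM training and the optimization method mentioned in the text:

```python
import math

def classifier_output(w_c, x, A, B):
    """O_c = 1 / (1 + exp(A * (w_c . x) + B)) -- the text classifier
    output for category c given a (reduced) feature vector x."""
    score = sum(wi * xi for wi, xi in zip(w_c, x))
    return 1.0 / (1.0 + math.exp(A * score + B))

# Placeholder parameters: a negative A maps larger scores to outputs near 1
print(round(classifier_output([0.5, -0.2], [1.0, 1.0], -2.0, 0.0), 3))  # 0.646
```

The computation is a single dot product followed by a sigmoid, which is why expression (2) is quicker to evaluate than a form requiring a kernel expansion over all training examples.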
Transductive Machine Learning
The current state of the art in commercially used automatic classification systems is either rule based or utilizes inductive machine learning, i.e. using manually labeled training examples. Both methods typically entail a large manual setup effort compared to transductive methods. The solutions provided by rule based systems or inductive methods are static solutions that cannot adapt to drifting classification concepts without manual effort.
Inductive machine learning is used to ascribe properties or relations to types based on tokens (i.e., based on one or a small number of observations or experiences); or to formulate laws based on limited observations of recurring patterns. Inductive machine learning involves reasoning from observed training cases to create general rules, which are then applied to the test cases.
Particularly, preferred embodiments use transductive machine learning techniques. Transductive machine learning is a powerful method that does not suffer from these disadvantages.
Transductive machine learning techniques may be capable of learning from a very small set of labeled training examples, automatically adapting to drifting classification concepts, and automatically correcting the labeled training examples. These advantages make transductive machine learning an interesting and valuable method for a large variety of commercial applications.
Transduction learns patterns in data. It extends the concept of inductive learning by learning not only from labeled data but also from unlabeled data. This enables transduction to learn patterns that are not or only partly captured in the labeled data. As a result transduction can, in contrast to rule based systems or systems based on inductive learning, adapt to dynamically changing environments. This capability enables transduction to be utilized for document discovery, data cleanup, and addressing drifting classification concepts, among other things.
The following is an explanation of one embodiment of transductive classification utilizing Support Vector Machine (SVM) classification as well as the Maximum Entropy Discrimination (MED) framework.
Support Vector Machines
The Support Vector Machine (SVM) is one widely employed method of text classification. It approaches the problem of the large number of possible solutions, and the resulting generalization problem, by deploying constraints on the possible solutions utilizing concepts of regularization theory. For example, a binary SVM classifier selects, from all hyperplanes that separate the training data correctly, the hyperplane that maximizes the margin as its solution.
The maximum margin regularization under the constraint that training data is classified correctly addresses the aforementioned learning problem of selecting the appropriate trade-off between generalization and memorization: the constraint on the training data memorizes the data, whereas the regularization ensures appropriate generalization. Inductive classification learns from training examples that have known labels, i.e. every training example's class membership is known. Whereas inductive classification learns from known labels, transductive classification determines the classification rules from labeled as well as unlabeled data. An example of transductive SVM classification is shown in Table 1.
Principle of transductive SVM classification
Require: Data matrix X of labeled training examples and their labels Y.
Require: Data matrix X' of the unlabeled training examples.
Require: A list of all possible label assignments Y'_1, ..., Y'_n of the unlabeled training examples.
1: MaximumMargin := 0
2: Ŷ' := 0 {Elected label assignment of unlabeled training examples.}
3: for all label assignments Y'_i, 1 ≤ i ≤ n, in the list of label assignments do
4:   CurrentMaximumMargin := MaximizeMargin(X, Y, X', Y'_i)
5:   if CurrentMaximumMargin > MaximumMargin then
6:     MaximumMargin := CurrentMaximumMargin
7:     Ŷ' := Y'_i
8:   end if
9: end for
Table 1
Table 1 shows the principle of transductive classification with Support Vector Machines: the solution is given by the hyperplane that yields the maximum margin over all possible label assignments of the unlabeled data. The number of possible label assignments grows exponentially in the number of unlabeled data points, and for practically applicable solutions, the algorithm in Table 1 must be approximated. An example of such an approximation is described in T. Joachims, Transductive inference for text classification using support vector machines, Technical report, Universitaet Dortmund, LAS VIII, 1999 (Joachims).
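The exhaustive search of Table 1 can be sketched on a one-dimensional toy problem, where a simple threshold classifier stands in for the SVM and the margin of a labeling is half the gap between the classes. The data points below are made-up values; this is an illustration of the principle, not of the patented algorithm:

```python
from itertools import product

def max_margin_1d(xs, ys):
    """Largest margin of any threshold separating 1-D points labeled in
    {-1, +1}, assuming negatives lie left of positives; 0 if inseparable.
    (A stand-in for MaximizeMargin in Table 1.)"""
    neg = [x for x, y in zip(xs, ys) if y == -1]
    pos = [x for x, y in zip(xs, ys) if y == +1]
    if not neg or not pos:
        return 0.0
    if max(neg) < min(pos):
        return (min(pos) - max(neg)) / 2.0
    return 0.0

def transductive_1d(lx, ly, ux):
    """Table 1's exhaustive search: try every labeling of the unlabeled
    points and keep the one yielding the largest margin."""
    best_margin, best_labels = 0.0, None
    for labels in product([-1, +1], repeat=len(ux)):
        m = max_margin_1d(lx + ux, ly + list(labels))
        if m > best_margin:
            best_margin, best_labels = m, list(labels)
    return best_margin, best_labels

# Labeled: -1 at x = -2, +1 at x = +2; two unlabeled points near the positive
print(transductive_1d([-2.0, 2.0], [-1, +1], [1.2, 1.5]))  # (1.6, [1, 1])
```

Labeling both unlabeled points positive yields a margin of 1.6, far larger than any mixed labeling, so transduction places the decision boundary in the wide gap rather than through the unlabeled cluster.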
The uniform distribution over label assignments in Table 1 implies that an unlabeled data point has a probability of 1/2 of being a positive example of the class and a probability of 1/2 of being a negative example, i.e. its two possible label assignments of y = +1 (positive example) and y = −1 (negative example) are equally likely and the resulting expected label is zero. A label expectation of zero can be obtained by a fixed class prior probability equal to 1/2 or a class prior probability that is a random variable with a uniform prior distribution, i.e. an unknown class prior probability. Accordingly, in applications with known class prior probabilities that are not equal to 1/2, the algorithm could be improved by incorporating this additional information. For example, instead of using a uniform distribution over label assignments in Table 1, one could elect to prefer some label assignments over others according to the class prior probability. However, the trade-off between a smaller-margin solution with a likely label assignment and a higher-margin solution with a less likely label assignment is difficult: the probability of label assignments and the margin are on different scales.
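The relation between a class prior and the expected label used above is a one-line computation: for a prior probability p of the positive label, ⟨y⟩ = (+1)·p + (−1)·(1 − p) = 2p − 1, which is zero exactly for the uninformative prior p = 1/2. A minimal sketch:

```python
def expected_label(p_pos):
    """Expected label <y> = (+1)*p + (-1)*(1 - p) = 2p - 1 for an
    unlabeled point with prior probability p of being positive."""
    return 2.0 * p_pos - 1.0

print(expected_label(0.5))  # 0.0: uniform prior, expected label zero
print(expected_label(0.9))  # 0.8: strong positive prior
```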
Maximum Entropy Discrimination
Another method of classification, Maximum Entropy Discrimination (MED) (see e.g. T. Jebara, Machine Learning: Discriminative and Generative, Kluwer Academic Publishers) (Jebara), does not encounter the problems associated with SVMs since the decision function regularization term as well as the label assignment regularization term are both derived from prior probability distributions over solutions and, thus, are both on the same probabilistic scale. Accordingly, if the class priors and, thus, the label priors are known, transductive MED classification is superior to transductive SVM classification, since it allows for the incorporation of prior label knowledge in a principled way.
Inductive MED classification assumes a prior distribution over the parameters of the decision function, a prior distribution over the bias term, and a prior distribution over margins. It selects as a final distribution over these parameters the one that is closest to the prior distributions and yields an expected decision function that classifies the data points correctly.
Formally, for example given a linear classifier, the problem is formulated as follows: find the distribution over hyperplane parameters p(Θ), the distribution over the bias p(b), and the distributions over the data points' classification margins p(γ) whose combined probability distribution has a minimal Kullback-Leibler divergence KL to the combined respective prior distributions p_0, i.e.

min over p(Θ), p(γ), p(b) of KL( p(Θ) p(γ) p(b) || p_0(Θ) p_0(γ) p_0(b) ),     (1)

subject to the constraints

∀t : ∫ p(Θ) p(γ) p(b) [ y_t (Θ · X_t + b) − γ_t ] dΘ dγ db ≥ 0,     (2)

where Θ · X_t is the dot product between the separating hyperplane's weight vector and the t-th data point's feature vector. Since the label assignments y_t are known and fixed, no prior distribution over the binary label assignments is needed. Accordingly, a straightforward method to generalize inductive MED classification to transductive MED classification is to treat the binary label assignments as parameters that are constrained by a prior distribution over possible label assignments. An example of transductive MED is shown in Table 2.
Transductive MED classification
Require: Data matrix X of labeled and unlabeled training examples.
Require: Label prior probabilities p_0(y) for labeled and unlabeled training examples.
1: ⟨Y⟩ := ExpectedLabel(p_0(y)) {Expected label determined from the training examples' label prior probabilities.}
2: while ¬converged do
3:   W := MinimizeKLDivergence(X, ⟨Y⟩)
4:   ⟨Y⟩ := InduceLabels(W, X, p_0(y))
5: end while
Table 2
For the labeled data, the label prior distribution is a δ function, thus effectively fixing the label to be either +1 or −1. For the unlabeled data, a label prior probability p_0(y) is assumed that assigns to every unlabeled data point a positive label of y = +1 with a probability of p_0(y) and a negative label of y = −1 with a probability of 1 − p_0(y). Assuming a noninformative label prior (p_0(y) = 1/2) yields a transductive MED classification analogous to the transductive SVM classification discussed above.
As in the case of the transductive SVM classification, a practically applicable implementation of such an MED algorithm must approximate the search through all possible label assignments. The method described in T. Jaakkola, M. Meila, and T. Jebara, Maximum entropy discrimination, Technical Report AITR-1668, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 1999 (Jaakkola) elects as an approximation to decompose the procedure into two steps, similar to an Expectation Maximization (EM) formulation. In this formulation, there are two problems to solve. The first, analogous to the M step in EM algorithms, is similar to the maximization of the margin while classifying all data points correctly according to the current best guess of label assignments. The second step, analogous to the E step, uses the classification results determined in the M step and estimates new values for each example's class membership. This second step we call label induction. A general description is shown in Table 2.
The specific implementation of the method of Jaakkola, referenced herein, assumes a Gaussian with zero mean and unit variance for the hyperplane parameters, a Gaussian with zero mean and variance σ_b² for the bias parameter, a margin prior of the form exp[−c(1 − γ)] with γ a data point's margin and c the cost factor, and a binary label prior probability of p_0(y) for unlabeled data as discussed above. For the following discussion of the transductive classification algorithm of Jaakkola, referenced herein, a label prior probability of 1/2 is assumed for reasons of simplicity and without loss of generality.
The label induction step determines the label probability distribution given a fixed probability distribution for the hyperplane parameters. Using the margin and label priors introduced above yields the following objective function for the label induction step (see Table 2)
J(λ) = Σ_t ( λ_t + log(1 − λ_t/c) − log cosh(s_t λ_t) ),     (3)
where λ_t is the t-th training example's Lagrange multiplier, s_t its classification score determined in the previous M step, and c the cost factor. The first two terms in the sum over the training examples are derived from the margin prior distribution, whereas the third term is given by the label prior distribution. By maximizing J, the Lagrange multipliers are determined and, thus, the label probability distributions for the unlabeled data. As can be seen from Eq. 3, the data points contribute independently to the objective function and, thus, each Lagrange multiplier can be determined irrespective of every other Lagrange multiplier. For example, in order to maximize the contribution of an unlabeled data point with a high absolute value of its classification score |s_t|, a small Lagrange multiplier λ_t is required, whereas an unlabeled data point with a small value of |s_t| maximizes its contribution to J with a large Lagrange multiplier. On the other hand, the expected label ⟨y⟩ of an unlabeled data point as a function of its classification score s and its Lagrange multiplier λ is

⟨y⟩ = tanh(λ s).     (4)
Fig. 1 shows the expected label ⟨y⟩ as a function of the classification score s using cost factors of c = 5 and c = 1.5. The Lagrange multipliers used in the generation of Fig. 1 have been determined by solving Eq. 3 using a cost factor of c = 5 and c = 1.5, respectively. As can be seen from Fig. 1, unlabeled data points outside the margin, i.e. |s| > 1, have expected labels ⟨y⟩ close to zero; data points close to the margin, i.e. |s| ≈ 1, yield the highest absolute expected label values; and data points close to the hyperplane, i.e. |s| < ε, yield |⟨y⟩| < ε. The reason for this unintuitive label assignment of ⟨y⟩ → 0 for |s| → ∞ lies within the elected discriminative approach that attempts to stay as close as possible to the prior distribution as long as the classification constraints are fulfilled. It is not an artifact of the approximation elected by the known method of Table 2; i.e., an algorithm that exhaustively searches through all possible label assignments and, thus, has the guarantee to find the global optimum also assigns unlabeled data outside the margin expected labels either close to or equal to zero. Again, as mentioned above, that is expected from a discriminative point of view. Data points outside the margin are not important for separating the examples and, thus, all individual probability distributions of these data points revert back to their prior probability distribution.
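The qualitative behaviour just described can be sketched numerically. With a uniform label prior, the label distribution is proportional to exp(y λ s), so the expected label takes the form ⟨y⟩ = tanh(λ s); note this closed form is a reconstruction consistent with the log cosh term of Eq. 3 and with Fig. 1, not quoted verbatim from the source, and the λ and s values below are made-up illustrations:

```python
import math

def expected_label(lmbda, score):
    """With a uniform binary label prior, p(y) is proportional to
    exp(y * lambda * s), giving the expected label <y> = tanh(lambda * s)."""
    return math.tanh(lmbda * score)

# Far outside the margin: large |s| but tiny multiplier -> <y> near zero
print(expected_label(0.05, 4.0))
# Near the margin: moderate s, large multiplier -> largest |<y>|
print(expected_label(1.0, 1.0))
# Near the hyperplane: small s -> small |<y>|
print(expected_label(1.0, 0.1))
```

This reproduces the curve shape of Fig. 1: the largest absolute expected labels occur near the margin, while points far outside it revert towards their zero-mean prior.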
The M step of the transductive classification algorithm of Jaakkola, referenced herein, determines the probability distributions for the hyperplane parameters, the bias term, and the margins of the data points that are closest to the respective prior distributions under the constraints

∀t : s_t ⟨y_t⟩ − ⟨γ_t⟩ ≥ 0,     (5)

where s_t is the t-th data point's classification score, ⟨y_t⟩ its expected label, and ⟨γ_t⟩ its expected margin. For labeled data, the expected label is fixed and either ⟨y⟩ = +1 or ⟨y⟩ = −1. The expected label for unlabeled data lies in the interval (−1, +1) and is estimated in the label induction step. According to Eq. 5, unlabeled data have to fulfill tighter classification constraints than labeled data since the classification score is scaled by the expected label. Furthermore, given the dependence of the expected label as a function of the classification score, referring to Fig. 1, unlabeled data close to the separating hyperplane have the most stringent classification constraints since their score as well as the absolute value of their expected label |⟨y_t⟩| is small. The M step's full objective function given the prior distributions mentioned above is
J(λ) = −(1/2) Σ_{t,t'} ⟨y_t⟩ ⟨y_t'⟩ λ_t λ_t' K(X_t, X_t') + Σ_t ( λ_t + log(1 − λ_t/c) ) − (1/2) σ_b² ( Σ_t ⟨y_t⟩ λ_t )².     (6)
The first term is derived from the Gaussian hyperplane parameters prior distribution, the second term is the margin prior regularization term, and the last term is the bias prior regularization term derived from a Gaussian prior with zero mean and variance σ_b². The prior distribution over the bias term can be interpreted as a prior distribution over class prior probabilities. Accordingly, the regularization term that corresponds to the bias prior distribution constrains the weight of the positive to negative examples. According to Eq. 6, the contribution of the bias term is minimized in case the collective pull of the positive examples on the hyperplane equals the collective pull of the negative examples. The collective constraint on the Lagrange multipliers owing to the bias prior is weighted by the expected label of the data points and is, therefore, less restrictive for unlabeled data than for labeled data. Thus, unlabeled data have the ability to influence the final solution more strongly than the labeled data.
In summary, at the M step of the transductive classification algorithm of Jaakkola, referenced herein, unlabeled data have to fulfill stricter classification constraints than the labeled data, and their cumulative weight in the solution is less constrained than for labeled data. In addition, unlabeled data with an expected label close to zero that lie within the margin of the current M step influence the solution the most. The resulting net effect of formulating the E and M steps this way is illustrated by applying this algorithm to the dataset shown in Fig. 2. The dataset includes two labeled examples, a negative example (x) at x-position -1 and a positive example (+) at +1, and six unlabeled examples (o) between -1 and +1 along the x-axis. The cross (x) denotes a labeled negative example, the plus sign (+) a labeled positive example, and the circles (o) unlabeled data. The different plots show separating hyperplanes determined at various iterations of the M step. The final solution elected by the transductive MED classifier of Jaakkola, referenced herein, misclassifies the positive labeled training example. Fig. 2 shows several iterations of the M step. At the first iteration of the M step, no unlabeled data are considered and the separating hyperplane is located at x = 0. The one unlabeled data point with a negative x-value is closer than any other unlabeled data point to this separating hyperplane. At the following label induction step, it will get assigned the smallest |⟨y⟩| and, accordingly, at the next M step it has the most power to push the hyperplane towards the positive labeled example. The specific shape of the expected label ⟨y⟩ as a function of the classification score determined by the chosen cost factor (see Fig. 1), combined with the particular spacing of the unlabeled data points, creates a bridge effect, where at each consecutive M step the separating hyperplane moves closer and closer towards the positive labeled example. Intuitively, the M step suffers from a kind of short-sightedness, where the unlabeled data point closest to the current separating hyperplane determines the final position of the plane the most and the data points further away are not very important. Finally, owing to the bias prior term that restricts the collective pull of unlabeled data less than the collective pull of the labeled data, the separating hyperplane moves beyond the positive labeled example, yielding a final solution, the 15th iteration in Fig. 2, that misclassifies the positive labeled example. A bias variance of σ_b² = 1 and a cost factor of c = 10 have been used in Fig. 2. With σ_b² = 1, any cost factor in the range 9.8 < c < 13 results in a final hyperplane that misclassifies the one positive labeled example. Cost factors outside the interval 9.8 < c < 13 yield separating hyperplanes anywhere between the two labeled examples. This instability of the algorithm is not restricted to the example shown in Fig. 2, but has also been experienced while applying the Jaakkola method, referenced herein, to real-world datasets, including the Reuters dataset known to those skilled in the art. The inherent instability of the method described in Table 2 is a major shortcoming of this implementation and restricts its general usability, though the Jaakkola method may be implemented in some embodiments of the present invention.
One preferred approach of the present invention employs transductive classification using the framework of Maximum Entropy Discrimination (MED). It should be understood that various embodiments of the present invention, while applicable to classification, may also be applicable to other MED learning problems using transduction, including, but not limited to, transductive MED regression and graphical models.
Maximum Entropy Discrimination constrains and reduces the possible solutions by assuming a prior probability distribution over the parameters. The final solution is the expectation of all possible solutions according to the probability distribution that is closest to the assumed prior probability distribution, under the constraint that the expected solution describes the training data correctly. The prior probability distribution over solutions maps to a regularization term, i.e. by choosing a specific prior distribution one has selected a specific regularization.
Discriminative estimation as applied by Support Vector Machines is effective in learning from few examples. The method and apparatus of one embodiment of the present invention has this in common with Support Vector Machines: it does not attempt to estimate more parameters than necessary for solving the given problem and, consequently, yields a sparse solution. This is in contrast to generative model estimation that attempts to explain the underlying process and, in general, needs higher statistics than discriminative estimation. On the other hand, generative models are more versatile and can be applied to a larger variety of problems. In addition, generative model estimation enables straightforward inclusion of prior knowledge. The method and apparatus of one embodiment of the present invention using Maximum Entropy Discrimination bridges the gap between pure discriminative learning, e.g. Support Vector Machine learning, and generative model estimation.
The method of one embodiment of the present invention as shown in Table 3 is an improved transductive MED classification algorithm that does not have the instability problem of the method discussed in Jaakkola, referenced herein. Differences include, but are not limited to, that in one embodiment of the present invention every data point has its own cost factor proportional to its absolute label expectation value |⟨y⟩|. In addition, each data point's label prior probability is updated after each M step according to the estimated class membership probability as a function of the data point's distance to the decision function. The method of one embodiment of the present invention is described in Table 3 as follows:
Improved transductive MED classification

Require: Data matrix X of labeled and unlabeled training examples
Require: Label prior probabilities p0(y) for labeled and unlabeled training examples.
Require: Global cost factor c.
1: ⟨Y⟩ := ExpectedLabel(p0(y)) {Expected label determined from the training examples' label prior probabilities.}
2: while ¬converged do
3:   C := c·|⟨Y⟩| {Scale each training example's cost factor by the absolute value of its expected label.}
4:   W := MinimizeKLDivergence(X, ⟨Y⟩, C)
5:   p0(y) := EstimateClassProbability(W, ⟨Y⟩)
6:   Y := InduceLabels(W, X, p0(y), C)
7:   ⟨Y⟩ := ε·⟨Y⟩ + (1 − ε)·Y
8: end while
Table 3

Scaling the data points' cost factors by |⟨y⟩| mitigates the problem that the unlabeled data can have a stronger cumulative pull on the hyperplane than the labeled data, since the cost factors of unlabeled data are now smaller than labeled data cost factors, i.e. each unlabeled data point's individual contribution to the final solution is always smaller than a labeled data point's individual contribution. However, in case the amount of unlabeled data is much larger than the number of labeled data, the unlabeled data still can influence the final solution more than the labeled data. In addition, the conjunction of cost factor scaling with updating the label prior probability using the estimated class probability solves the problem of the bridge effect outlined above. At the first M steps, unlabeled data have small cost factors, yielding an expected label as a function of the classification score that is very flat (see Fig. 1) and, accordingly, to some extent all unlabeled data are allowed to pull on the hyperplane, albeit only with small weight. In addition, owing to the updating of the label prior probability, unlabeled data far away from the separating hyperplane do not get assigned an expected label close to zero, but after several iterations a label close to either y = +1 or y = -1 and, thus, are slowly treated like labeled data.
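The loop of Table 3 can be sketched as follows. This is an illustrative skeleton only: the M step (MinimizeKLDivergence) is replaced by a stub one-dimensional classifier, and EstimateClassProbability by a logistic squashing of the score; both stand-ins, the point values, and the smoothing constant eps are assumptions for brevity, not the patent's actual solver.

```python
import math

def expected_label(label_prior):
    # <y> = sum over y in {+1, -1} of y * p(y)
    return label_prior[+1] - label_prior[-1]

def train_transductive_med(points, priors, c=10.0, eps=0.5, iters=20):
    y_exp = {i: expected_label(p) for i, p in priors.items()}
    for _ in range(iters):
        # line 3: scale each point's cost factor by |<y>|
        costs = {i: c * abs(y) for i, y in y_exp.items()}
        # line 4 (stub M step): a 1-D "hyperplane" weight, standing in
        # for the KL-divergence minimisation over hyperplane parameters
        w = sum(costs[i] * y_exp[i] * x for i, x in points.items())
        for i, x in points.items():
            # line 5 (stub calibration): logistic squashing of the score
            p_pos = 1.0 / (1.0 + math.exp(-w * x))
            # lines 6-7: label induction plus smoothed update
            # <Y> := eps * <Y> + (1 - eps) * Y
            y_new = 2.0 * p_pos - 1.0
            y_exp[i] = eps * y_exp[i] + (1.0 - eps) * y_new
    return y_exp

# two labeled points (indices 0, 1) and two unlabeled points (2, 3)
points = {0: -1.0, 1: 1.0, 2: 0.5, 3: -0.5}
priors = {0: {+1: 0.0, -1: 1.0}, 1: {+1: 1.0, -1: 0.0},
          2: {+1: 0.5, -1: 0.5}, 3: {+1: 0.5, -1: 0.5}}
y = train_transductive_med(points, priors)
```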
In a specific implementation of the method of one embodiment of the present invention, a Gaussian prior with zero mean and unit variance is assumed for the decision function parameters Θ.
The prior distribution over decision function parameters incorporates important prior knowledge of the specific classification problem at hand. Other prior distributions of decision function parameters important for classification problems are, for example, a multinomial distribution, a Poisson distribution, a Cauchy distribution (Breit-Wigner), a Maxwell-Boltzmann distribution, or a Bose-Einstein distribution. The prior distribution over the threshold b of the decision function is given by a Gaussian distribution with mean μb and variance σb².
As the prior distribution of a data point's classification margin γt,

p0(γt) = c·e^(−c(1 + 1/c − γt)),    (9)

was elected, where c is the cost factor. This prior distribution differs from the one used in Jaakkola, referenced herein, which has the form exp[−c(1 − γ)]. The form given in Eq. 9 is preferred over the form used in Jaakkola, referenced herein, since it yields a positive expected margin even for cost factors smaller than one, whereas exp[−c(1 − γ)] yields a negative expected margin for c < 1.
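The claimed advantage of the margin prior can be checked numerically. The sketch below reads Eq. 9 as p0(γ) = c·exp(−c(1 + 1/c − γ)) on γ ≤ 1 + 1/c (an assumption where the extraction is ambiguous) and compares its expected margin with that of Jaakkola's prior exp[−c(1 − γ)] on γ ≤ 1 for a cost factor c < 1.

```python
import math

def expected_margin(c, upper, n=200000):
    # <gamma> under p0(g) = c * exp(-c * (upper - g)) for g <= upper,
    # approximated by midpoint integration over a long left tail.
    lo = upper - 50.0 / c            # density below this is ~exp(-50)
    h = (upper - lo) / n
    total = 0.0
    for k in range(n):
        g = lo + (k + 0.5) * h
        total += g * c * math.exp(-c * (upper - g)) * h
    return total

c = 0.5                              # a cost factor smaller than one
# Eq. 9 prior (support up to 1 + 1/c): positive expected margin
assert expected_margin(c, 1.0 + 1.0 / c) > 0.0
# Jaakkola's prior (support up to 1): negative expected margin for c < 1
assert expected_margin(c, 1.0) < 0.0
```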
Given these prior distributions, determining the corresponding partition functions Z is straightforward (see for example T.M. Cover and J.A. Thomas, Elements of Information Theory, John Wiley & Sons, Inc.) (Cover), and the objective functions 𝒥 = −log Z are

𝒥γ(λ) = Σt [(1 + 1/c)·λt + log(1 − λt/c)].    (10)
According to Jaakkola, referenced herein, the objective function of the M step is

𝒥M(λ) = 𝒥Θ(λ) + 𝒥b(λ) + 𝒥γ(λ)    (11)

and the E step's objective function is

𝒥E(λ) = 𝒥γ(λ) − Σt log Σ_yt p0,t(yt)·e^(λt yt st),    (12)

where st is the t-th data point's classification score determined in the previous M step and p0,t(yt) the data point's binary label prior probability. The label prior is initialized to p0,t(yt) = 1 for labeled data and to either the non-informative prior of p0,t(yt) = 1/2 or the class prior probability for unlabeled data.
The section herein entitled M STEP describes the algorithm to solve the M step objective function. Also, the section herein entitled E STEP describes the E step algorithm.
The step EstimateClassProbability in line 5 of Table 3 uses the training data to determine the calibration parameters to turn classification scores into class membership probabilities, i.e. the probability of the class given the score. Relevant methods for estimating the score calibration to probabilities are described in J. Platt, Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods, pages 61-74, 2000 (Platt) and B. Zadrozny and C. Elkan, Transforming classifier scores into accurate multi-class probability estimates, 2002 (Zadrozny).
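A minimal sketch of such a score calibration, in the spirit of Platt's method (simplified here to plain gradient descent on the negative log-likelihood, without the regularized targets of the cited paper; the scores and labels are made-up examples):

```python
import math

def platt_calibrate(scores, labels, lr=0.01, steps=2000):
    # Fit p(y=1|s) = 1 / (1 + exp(A*s + B)) by gradient descent on the
    # negative log-likelihood; for this parametrisation the gradient of
    # the NLL with respect to (A*s + B) is (y - p).
    A, B = -1.0, 0.0
    for _ in range(steps):
        gA = gB = 0.0
        for s, y in zip(scores, labels):          # y in {0, 1}
            p = 1.0 / (1.0 + math.exp(A * s + B))
            gA += (y - p) * s                     # d(NLL)/dA, summed
            gB += (y - p)                         # d(NLL)/dB, summed
        A -= lr * gA
        B -= lr * gB
    return lambda s: 1.0 / (1.0 + math.exp(A * s + B))

# classification scores from an M step, with their known labels
prob = platt_calibrate([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
assert prob(-2.0) < 0.5 < prob(2.0)               # calibrated and monotone
assert prob(-2.0) < prob(-1.0) < prob(1.0) < prob(2.0)
```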
Referring particularly to Fig. 3, the cross (x) denotes a labeled negative example, the plus sign (+) a labeled positive example, and the circles (o) unlabeled data. The different plots show separating hyperplanes determined at various iterations of the M step. The 20th iteration shows the final solution elected by the improved transductive MED classifier. Fig. 3 shows the improved transductive MED classification algorithm applied to the toy dataset introduced above. The parameters used are c = 10, σb = 1, and μb = 0. Varying c yields separating hyperplanes that are located between x ≈ −0.5 and x = 0, whereby with c < 3.5 the hyperplane is located to the right of the one unlabeled data point with x < 0 and with c ≥ 3.5 to the left of this unlabeled data point.
Referring particularly to Fig. 4, a control flow diagram is illustrated showing the method of classification of unlabeled data of one embodiment of the present invention. The method 100 begins at step 102 and at step 104 accesses stored data 106. The data is stored at a memory location and includes labeled data, unlabeled data and at least one predetermined cost factor.
The data 106 includes data points having assigned labels. The assigned labels identify whether a labeled data point is intended to be included within a particular category or excluded from a particular category.
Once data is accessed at step 104, the method of one embodiment of the present invention at step 108 then determines the label prior probabilities of the data points using the label information of the data points. Then, at step 110 the expected labels of the data points are determined according to the label prior probability. With the expected labels calculated in step 110, along with the labeled data, unlabeled data and cost factors, step 112 includes iterative training of the transductive MED classifier by the scaling of the cost factors of the unlabeled data points. In each iteration of the calculation the unlabeled data points' cost factors are scaled. As such, the MED classifier learns through repeated iterations of calculations. The trained classifier then accesses input data 114 at step 116. The trained classifier can then complete the step of classifying input data at step 118, and the method terminates at step 120.
It is to be understood that the unlabeled data of 106 and the input data 114 may be derived from a single source. As such, the input data/unlabeled data can be used in the iterative process of 112 which is then used to classify at 118. Furthermore, one embodiment of the present invention contemplates that the input data 114 may include a feedback mechanism to supply the input data to the stored data at 106 such that the MED classifier of 112 can dynamically learn from new data that is input. Referring particularly to Fig. 5, a control flow diagram is illustrated showing another method of classification of unlabeled data of one embodiment of the present invention including user defined prior probability information. The method 200 begins at step 202 and at step 204 accesses stored data 206. The data 206 includes labeled data, unlabeled data, a predetermined cost factor, and prior probability information provided by a user. The labeled data of 206 includes data points having assigned labels. The assigned labels identify whether the labeled data point is intended to be included within a particular category or excluded from a particular category.
At step 208, expected labels are calculated from the data of 206. The expected labels are then used in step 210 along with labeled data, unlabeled data and cost factors to conduct iterative training of a transductive MED classifier. The iterative calculations of 210 scale the cost factors of the unlabeled data at each calculation. The calculations continue until the classifier is properly trained.
The trained classifier then accesses input data at 214 from input data 212. The trained classifier can then complete the step of classifying input data at step 216. As with the process and method described in Fig. 4, the input data and the unlabeled data may derive from a single source and may be put into the system at both 206 and 212. As such, the input data 212 can influence the training at 210 such that the process may dynamically change over time with continuing input data.
In both methods as shown in Figs. 4 and 5 a monitor may determine whether or not the system has reached convergence. Convergence may be determined when the change of the hyperplane between each iteration of the MED calculation falls below a predetermined threshold value. In an alternative embodiment of the present invention, convergence can be determined when the change of the determined expected label falls below a predetermined threshold value. If convergence is reached, then the iterative training process may cease. Referring particularly to Fig. 6, illustrated is a more detailed control flow diagram of the iterative training process of at least one embodiment of the method of the present invention. The process 300 commences at step 302 and at step 304 data is accessed from data 306 and may include labeled data, unlabeled data, at least one predetermined cost factor, and prior probability information. The labeled data points of 306 include a label identifying whether the data point is a training example for data points to be included in the designated category or a training example for data points to be excluded from a designated category. The prior probability information of 306 includes the probability information of labeled data sets and unlabeled data sets.
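The convergence monitor described above can be sketched as follows; the Euclidean norm of the hyperplane change and the tolerance value are assumptions, since the text only requires that the change fall below a predetermined threshold.

```python
def converged(w_prev, w_curr, tol=1e-4):
    # Stop when the change of the hyperplane between successive MED
    # iterations falls below a predetermined threshold.  The Euclidean
    # norm of the weight-vector difference and the default tolerance
    # are assumptions, not values fixed by the text.
    delta = sum((a - b) ** 2 for a, b in zip(w_prev, w_curr)) ** 0.5
    return delta < tol

assert not converged([1.0, 0.0], [0.9, 0.1])     # hyperplane still moving
assert converged([1.0, 0.0], [1.0, 1e-6])        # effectively fixed
```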
In step 308, expected labels are determined from the prior probability information of 306. In step 310, the cost factor is scaled for each unlabeled data set proportional to the absolute value of the expected label of a data point. An MED classifier is then trained in step 312 by determining the decision function that maximizes the margin between the included and excluded training examples, utilizing the labeled as well as the unlabeled data as training examples according to their expected labels. In step 314 classification scores are determined using the trained classifier of 312. In step 316 classification scores are calibrated to class membership probability. In step 318, label prior probability information is updated according to the class membership probability. An MED calculation is performed in step 320 to determine label and margin probability distributions, wherein the previously determined classification scores are used in the MED calculation. As a result, new expected labels are computed at step 322 and the expected labels are updated in step 324 using the computations from step 322. At step 326 the method determines whether convergence has been achieved. If so, the method terminates at step 328. If convergence is not reached, another iteration of the method is completed starting with step 310. Iterations are repeated until convergence is reached, thus resulting in an iterative training of the MED classifier. Convergence may be reached when the change of the decision function between each iteration of the MED calculation falls below a predetermined value. In an alternative embodiment of the present invention, convergence may be reached when the change of the determined expected label value falls below a predetermined threshold value. Fig. 7 illustrates a network architecture 700, in accordance with one embodiment. As shown, a plurality of remote networks 702 are provided including a first remote network 704 and a second remote network 706.
A gateway 707 may be coupled between the remote networks 702 and a proximate network 708. In the context of the present network architecture 700, the networks 704, 706 may each take any form including, but not limited to a LAN, a WAN such as the Internet, PSTN, internal telephone network, etc.
In use, the gateway 707 serves as an entrance point from the remote networks 702 to the proximate network 708. As such, the gateway 707 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 707, and a switch, which furnishes the actual path in and out of the gateway 707 for a given packet.
Further included is at least one data server 714 coupled to the proximate network 708, and which is accessible from the remote networks 702 via the gateway 707. It should be noted that the data server(s) 714 may include any type of computing device/groupware. Coupled to each data server 714 is a plurality of user devices 716. Such user devices 716 may include a desktop computer, lap-top computer, hand-held computer, printer or any other type of logic. It should be noted that a user device 717 may also be directly coupled to any of the networks, in one embodiment.
A facsimile machine 720 or series of facsimile machines 720 may be coupled to one or more of the networks 704, 706, 708.
It should be noted that databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 704, 706, 708. In the context of the present description, a network element may refer to any component of a network.
Fig. 8 shows a representative hardware environment associated with a user device 716 of Fig. 7, in accordance with one embodiment. The figure illustrates a typical hardware configuration of a workstation having a central processing unit 810, such as a microprocessor, and a number of other units interconnected via a system bus 812.
The workstation shown in Fig. 8 includes a Random Access Memory (RAM) 814, Read Only Memory (ROM) 816, an I/O adapter 818 for connecting peripheral devices such as disk storage units 820 to the bus 812, a user interface adapter 822 for connecting a keyboard 824, a mouse 826, a speaker 828, a microphone 832, and/or other user interface devices such as a touch screen and a digital camera (not shown) to the bus 812, a communication adapter 834 for connecting the workstation to a communication network 835 (e.g., a data processing network) and a display adapter 836 for connecting the bus 812 to a display device 838.
Referring particularly to Fig. 9 there is shown the apparatus 414 of one embodiment of the present invention. One embodiment of the present invention comprises a memory device 814 for storing labeled data 416. The labeled data points 416 each include a label indicating whether the data point is a training example for data points being included in the designated category or a training example for data points being excluded from a designated category. Memory 814 also stores unlabeled data 418, prior probability data 420 and the cost factor data 422.
The processor 810 accesses the data from the memory 814 and using transductive MED calculations trains a binary classifier, enabling it to classify unlabeled data. The processor 810 uses iterative transductive calculation by using the cost factor and training examples from labeled and unlabeled data and scaling that cost factor as a function of expected label value, thus affecting the cost factor data 422 which is then re-input into processor 810.
Thus the cost factor 422 changes with each iteration of the MED classification by the processor 810. Once the processor 810 adequately trains an MED classifier, the processor can then construct the classifier to classify the unlabeled data into classified data 424.
Transductive SVM and MED formulations of the prior art lead to an exponential growth of possible label assignments, and approximations have to be developed for practical applications. In an alternative embodiment of the present invention, a different formulation of the transductive MED classification is introduced that does not suffer from an exponential growth of possible label assignments and allows a general closed form solution. For a linear classifier the problem is formulated as follows: Find the distribution over hyperplane parameters p(Θ), the bias distribution p(b), and the distribution over the data points' classification margins p(γ) whose combined probability distribution has a minimal Kullback-Leibler divergence KL to the combined respective prior distributions p0, i.e.

KL(p(Θ)p(γ)p(b) ∥ p0(Θ)p0(γ)p0(b)),    (13)
subject to the following constraint for the labeled data
and subject to the following constraint for the unlabeled data
where Θ·Xt is the dot product between the separating hyperplane's weight vector and the t-th data point's feature vector. No prior distribution over labels is necessary. The labeled data are constrained to be on the right side of the separating hyperplane according to their known labels, whereas the only requirement for the unlabeled data is that their squared distance to the hyperplane is greater than the margin. In summary, this embodiment of the present invention finds a separating hyperplane that is a compromise of being closest to the chosen prior distribution, separating the labeled data correctly, and having no unlabeled data between the margins. The advantage is that no prior distribution over labels has to be introduced, thus avoiding the problem of exponentially growing label assignments. In a specific implementation of the alternate embodiment of the present invention, using the prior distributions given in Eqs. 7, 8, and 9 for the hyperplane parameters, the bias, and the margins yields the following partition function
where subscript t is the index of the labeled data and t' the index of the unlabeled data. Introducing the notation

G2 = Σt' λt' Ut' Ut'^T, G3 = G1 − 2·G2,

and W = Σt λt ⟨yt⟩ Ut − 2 Σt' λt' γt' Ut',
Eq. 16 can be rewritten as follows
yielding, after integration, the following partition function
i.e. the final objective function is
The objective function 𝒥 can be solved by applying similar techniques as in the case of known labels as discussed in the section herein entitled M Step. The difference is that the matrix G3^-1 in the quadratic form of the maximum margin term now has off-diagonal terms.
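The two constraint families of this alternative formulation (labeled data on the correct side of the hyperplane, unlabeled data outside the margin band on either side) can be sketched as a feasibility check. This simplifies the formulation, which is stated in expectation over the parameter distributions, down to a single parameter vector:

```python
def feasible(theta, b, labeled, unlabeled, gamma=1.0):
    # labeled:   y * (theta . x + b) >= gamma  (correct side of the plane)
    # unlabeled: (theta . x + b)**2  >= gamma  (outside the margin band,
    #            on either side -- no label needed)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    for x, y in labeled:
        if y * (dot(theta, x) + b) < gamma:
            return False
    for x in unlabeled:
        if (dot(theta, x) + b) ** 2 < gamma:
            return False
    return True

labeled = [([1.0], +1), ([-1.0], -1)]
assert feasible([2.0], 0.0, labeled, [[0.8], [-0.9]])   # all constraints met
assert not feasible([2.0], 0.0, labeled, [[0.1]])       # point inside margin
```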
There exist many applications of the method of the present invention employing the Maximum Entropy Discrimination framework besides classification. For example, MED can be applied to solve classification of data with, in general, any kind of discriminant function and prior distributions, as well as regression and graphical models (T. Jebara, Machine Learning: Discriminative and Generative, Kluwer Academic Publishers) (Jebara).
The applications of the embodiments of the present invention can be formulated as pure inductive learning problems with known labels as well as transductive learning problems with labeled as well as unlabeled training examples. In the latter case, the improvements to the transductive MED classification algorithm described in Table 3 are applicable as well to general transductive MED classification, transductive MED regression, and transductive MED learning of graphical models. As such, for purposes of this disclosure and the accompanying claims, the word "classification" may include regression or graphical models.

M Step
According to Eq. 11, the M step's objective function is

whereby the Lagrange multipliers λt are determined by maximizing 𝒥M.

Omitting the redundant constraint that λt < c, the Lagrangian for the dual problem above is

∀t: 0 ≤ λt ≤ c, δt ≥ 0, δt·λt = 0.
The KKT conditions, which are necessary and sufficient for optimality, are

∀t: 0 ≤ λt ≤ c, δt ≥ 0, δt·λt = 0,    (23)

whereby Ft is

At optimum, the bias equals the expected bias ⟨b⟩ = σb² Σt λt⟨yt⟩ + μb, yielding

⟨yt⟩(−Ft − ⟨b⟩) + δt = 0.    (25)
These equations can be summarized by considering two cases using the δt·λt = 0 constraint. The first case is for all λt = 0, and the second for all 0 < λt < c. There is no need for the third case as described in S. Keerthi, S. Shevade, C. Bhattacharyya, and K. Murthy, Improvements to Platt's SMO algorithm for SVM classifier design, 1999 (Keerthi), applied to the SVM algorithm; the potential function in this formulation maintains that λt ≠ c.
λt = 0, δt ≥ 0 ⇒ (Ft + ⟨b⟩)⟨yt⟩ ≥ 0    (26)

0 < λt < c, δt = 0 ⇒ (Ft + ⟨b⟩) = 0    (27)

Until the optimum is reached, violations of these conditions for some data point t will be present. Namely, Ft ≠ −⟨b⟩ when λt is nonzero, or Ft⟨yt⟩ < −⟨b⟩⟨yt⟩ when it is zero.
Unfortunately, calculating ⟨b⟩ is impossible without the optimum λt's. A good solution to this is borrowed from Keerthi, referenced herein, by constructing the following three sets.
I0 = {t : 0 < λt < c}    (28)

I4 = {t : ⟨yt⟩ < 0, λt = 0}    (30)

Using these sets we can define the most extreme violations of the optimality conditions using the following definitions. The elements in I0 are violations whenever they are not equal to −⟨b⟩; therefore, the largest and smallest Ft from I0 are candidates for being violations. The elements in I1 are violations when Ft < −⟨b⟩, so the smallest element from I1 is the most extreme violation if one exists. Lastly, the elements in I4 are violations when Ft > −⟨b⟩, which makes the largest elements from I4 violation candidates. Therefore, −⟨b⟩ is bounded by the min and max over these sets as shown below.

−blow = max{Ft : t ∈ I0 ∪ I4}    (32)
Due to the fact that at optimum −bup and −blow must be equal, namely −⟨b⟩, reducing the gap between −bup and −blow will push the training algorithm to convergence.
Additionally, the gap can also be measured as a way to determine numerical convergence.
As previously stated, the value of b = ⟨b⟩ is not known until convergence. The method of this alternate embodiment differs in that only one example can be optimized at a time. Therefore the training heuristic is to alternate between the examples in I0 and all of the examples every other time.
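The Keerthi-style bookkeeping above can be sketched as follows. The set I1 (whose defining equation is not shown in the text) is assumed to mirror I4 with a positive expected label, and the numeric values are made-up examples:

```python
def bias_bounds(F, lam, y_exp, c, eps=1e-12):
    # Bound -<b> from the index sets of Eqs. 28-31; set membership uses
    # a small tolerance eps.  I1 is assumed to mirror I4 with a positive
    # expected label (its defining equation is not shown in the text).
    n = range(len(F))
    I0 = [t for t in n if eps < lam[t] < c - eps]
    I1 = [t for t in n if y_exp[t] > 0 and lam[t] <= eps]
    I4 = [t for t in n if y_exp[t] < 0 and lam[t] <= eps]
    neg_b_up = min(F[t] for t in I0 + I1)    # -b_up: min over I0 u I1
    neg_b_low = max(F[t] for t in I0 + I4)   # -b_low: max over I0 u I4
    return neg_b_up, neg_b_low

up, low = bias_bounds(F=[0.2, -0.1, 0.5, -0.4],
                      lam=[1.0, 2.0, 0.0, 0.0],
                      y_exp=[1.0, -1.0, 1.0, -1.0], c=10.0)
# a positive gap (low - up > 0) signals remaining KKT violations;
# the gap shrinks to zero as training converges
assert up == -0.1 and low == 0.2
```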
E Step

The E step's objective function of Eq. 12 is

whereby st is the t-th data point's classification score determined in the previous M step. The Lagrange multipliers λt are determined by maximizing 𝒥E.
Omitting the redundant constraint that λt < c, the Lagrangian for the dual problem above is

∀t: 0 ≤ λt ≤ c, δt ≥ 0, δt·λt = 0.

The KKT conditions, which are necessary and sufficient for optimality, are
Solving for the Lagrange multipliers by optimizing the KKT conditions can be done in one pass over the examples, since the conditions factorize over the examples.

For labeled examples the expected label ⟨yt⟩ is one with p0,t(yt) = 1 and p0,t(−yt) = 0, reducing the KKT conditions to
and yielding as solutions for the Lagrange Multipliers of labeled examples
For unlabeled examples, Eq. 35 cannot be solved analytically, but the solution has to be determined by applying, e.g., a linear search for each unlabeled example's Lagrange multiplier that satisfies Eq. 35.
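Such a one-dimensional search can be sketched as a bisection. The stationarity condition used below, (1 + 1/c) − 1/(c − λ) − s·⟨y⟩(λ) = 0, is one reading of the E-step objective (Eq. 12 with the margin term of Eq. 10) and should be taken as an assumption rather than the exact equation of the text:

```python
import math

def solve_unlabeled_lambda(s, c=10.0, p_pos=0.5, iters=100):
    # Bisection for an unlabeled example's Lagrange multiplier.  The
    # stationarity condition g(lam) = 0 below is a reading of the E-step
    # objective (an assumption where the extraction is ambiguous);
    # <y>(lam) is the expected label under the tilted label prior.
    def g(lam):
        zp = p_pos * math.exp(lam * s)            # y = +1 term
        zm = (1.0 - p_pos) * math.exp(-lam * s)   # y = -1 term
        y_exp = (zp - zm) / (zp + zm)
        return (1.0 + 1.0 / c) - 1.0 / (c - lam) - s * y_exp
    lo, hi = 0.0, c - 1e-9                        # g(lo) > 0 > g(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = solve_unlabeled_lambda(s=0.5)
assert 0.0 < lam < 10.0      # multiplier lies strictly inside [0, c)
```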
The following are several non-limiting examples that are enabled by the techniques illustrated above, derivations or variations thereof, and other techniques known in the art. Each example includes the preferred operations, along with optional operations or parameters that may be implemented in the basic preferred methodology.
In one embodiment, as presented in Fig. 10, labeled data points are received at step 1002, where each of the labeled data points has at least one label which indicates whether the data point is a training example for data points for being included in a designated category or a training example for data points being excluded from a designated category. In addition, unlabeled data points are received at step 1004, as well as at least one predetermined cost factor of the labeled data points and unlabeled data points. The data points may contain any medium, e.g. words, images, sounds, etc. Prior probability information of labeled and unlabeled data points may also be received. Also, the label of the included training example may be mapped to a first numeric value, e.g. +1, etc., and the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc. In addition, the labeled data points, unlabeled data points, input data points, and at least one predetermined cost factor of the labeled data points and unlabeled data points may be stored in a memory of a computer. Further, at step 1006 a transductive MED classifier is trained through iterative calculation using said at least one cost factor and the labeled data points and the unlabeled data points as training examples. For each iteration of the calculations, the unlabeled data point cost factor is adjusted as a function of an expected label value, e.g. the absolute value of the expected label of a data point, etc., and a data point label prior probability is adjusted according to an estimate of a data point class membership probability, thereby ensuring stability. Also, the transductive classifier may learn using prior probability information of the labeled and unlabeled data, which further improves stability. The iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. 
when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
Additionally, in step 1008 the trained classifier is applied to classify at least one of the unlabeled data points, the labeled data points, and input data points. Input data points may be received before or after the classifier is trained, or may not be received at all. Also, the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data points as learning examples according to their expected label. Alternatively, the decision function may be determined with minimal KL divergence using a multinomial distribution for the decision function parameters.
In step 1010 a classification of the classified data points, or a derivative thereof, is output to at least one of a user, another system, and another process. The system may be remote or local. Examples of the derivative of the classification may be, but are not limited to, the classified data points themselves, a representation or identifier of the classified data points or host file/document, etc.
In another embodiment, computer executable program code is deployed to and executed on a computer system. This program code comprises instructions for accessing stored labeled data points in a memory of a computer, where each of said labeled data points has at least one label indicating whether the data point is a training example for data points for being included in a designated category or a training example for data points being excluded from a designated category. In addition, the computer code comprises instructions for accessing unlabeled data points from a memory of a computer as well as accessing at least one predetermined cost factor of the labeled data points and unlabeled data points from a memory of a computer. Prior probability information of labeled and unlabeled data points stored in a memory of a computer may also be accessed. Also, the label of the included training example may be mapped to a first numeric value, e.g. +1, etc., and the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
Further, the program code comprises instructions for training a transductive classifier through iterative calculation, using the at least one stored cost factor, stored labeled data points, and stored unlabeled data points as training examples. Also, for each iteration of the calculation, the unlabeled data point cost factor is adjusted as a function of the expected label value of the data point, e.g. the absolute value of the expected label of a data point. Also, for each iteration, the prior probability information may be adjusted according to an estimate of a data point class membership probability. The iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
Additionally, the program code comprises instructions for applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points, as well as instructions for outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process. Also, the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data as learning examples according to their expected label. In yet another embodiment, a data processing apparatus comprises at least one memory for storing: (i) labeled data points, wherein each of said labeled data points has at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; (ii) unlabeled data points; and (iii) at least one predetermined cost factor of the labeled data points and unlabeled data points. The memory may also store prior probability information of labeled and unlabeled data points. Also, the label of the included training example may be mapped to a first numeric value, e.g. +1, etc., and the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
In addition, the data processing apparatus comprises a transductive classifier trainer to iteratively teach the transductive classifier using transductive Maximum Entropy Discrimination (MED) using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples. Further, at each iteration of the MED calculation the cost factor of the unlabeled data point is adjusted as a function of the expected label value of the data point, e.g. the absolute value of the expected label of a data point, etc. Also, at each iteration of the MED calculation, the prior probability information may be adjusted according to an estimate of a data point class membership probability. The apparatus may further comprise a means for determining the convergence of data values, e.g. when the change of the decision function of the transductive classifier calculation falls below a predetermined threshold value, when the change of the determined expected label values falls below a predetermined threshold value, etc., and terminating calculations upon determination of convergence.
In addition, a trained classifier is used to classify at least one of the unlabeled data points, the labeled data points, and input data points. Further, the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined by a processor utilizing the labeled as well as the unlabeled data as learning examples according to their expected label. Also, a classification of the classified data points, or derivative thereof, is output to at least one of a user, another system, and another process.
In a further embodiment, an article of manufacture comprises a program storage medium readable by a computer, where the medium tangibly embodies one or more programs of instructions executable by a computer to perform a method of data classification. In use, labeled data points are received, where each of the labeled data points has at least one label which indicates whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category. In addition, unlabeled data points are received, as well as at least one predetermined cost factor of the labeled data points and unlabeled data points. Prior probability information of labeled and unlabeled data points may also be stored in a memory of a computer. Also, the label of the included training example may be mapped to a first numeric value, e.g. +1, etc., and the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
Further, a transductive classifier is trained with iterative Maximum Entropy Discrimination (MED) calculation using the at least one stored cost factor and the stored labeled data points and the unlabeled data points as training examples. At each iteration of the MED calculation, the unlabeled data point cost factor is adjusted as a function of an expected label value of the data point, e.g. the absolute value of the expected label of a data point, etc. Also, at each iteration of the MED calculation, the prior probability information may be adjusted according to an estimate of a data point class membership probability. The iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
Additionally, input data points are accessed from the memory of a computer, and the trained classifier is applied to classify at least one of the unlabeled data points, the labeled data points, and input data points. Also, the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data as learning examples according to their expected label. Further, a classification of the classified data points, or a derivative thereof, is output to at least one of a user, another system, and another process.
In yet another embodiment, a method for classification of unlabeled data in a computer-based system is presented. In use, labeled data points are received, each of said labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
Additionally, labeled and unlabeled data points are received, as is prior label probability information of the labeled data points and unlabeled data points. Further, at least one predetermined cost factor of the labeled data points and unlabeled data points is received.
Further, the expected labels for each labeled and unlabeled data point are determined according to the label prior probability of the data point. The following substeps are repeated until substantial convergence of data values:
• generating a scaled cost value for each unlabeled data point proportional to the absolute value of the data point's expected label;
• training a Maximum Entropy Discrimination (MED) classifier by determining the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples, utilizing the labeled as well as the unlabeled data as training examples according to their expected label;
• determining the classification scores of the labeled and unlabeled data points using the trained classifier;
• calibrating the output of the trained classifier to class membership probability;
• updating the label prior probabilities of the unlabeled data points according to the determined class membership probabilities;
• determining the label and margin probability distributions using Maximum Entropy Discrimination (MED) using the updated label prior probabilities and the previously determined classification scores;
• computing new expected labels using the previously determined label probability distribution; and
• updating the expected labels for each data point by interpolating the new expected labels with the expected labels of the previous iteration.
Also, a classification of the input data points, or derivative thereof, is output to at least one of a user, another system, and another process.
Convergence may be reached when the change of the decision function falls below a predetermined threshold value. Additionally, convergence may also be reached when the change of the determined expected label value falls below a predetermined threshold value. Further, the label of the included training example may have any value, for example, a value of +1, and the label of the excluded training example may have any value, for example, a value of -1.
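As an illustration only, the loop of substeps above might be sketched on one-dimensional data as follows. The MED training substep is far more involved than shown; here it is replaced by a cost-weighted centroid stand-in so the surrounding loop structure can be shown end to end, and `transductive_loop`, the interpolation factor, and all default values are assumptions made for this sketch:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def transductive_loop(labeled, unlabeled, base_cost=1.0,
                      interp=0.5, tol=1e-3, max_iter=100):
    """Skeleton of the substeps above on 1-D points.

    `labeled` holds (x, y) pairs with y in {-1, +1}; `unlabeled`
    holds bare x values.  The training substep is NOT MED; it is a
    simple cost-weighted signed mean used as a placeholder.
    """
    expected = [0.0] * len(unlabeled)   # label priors: fully uncertain
    w = 0.0
    for _ in range(max_iter):
        # Substep: scaled cost proportional to |expected label|.
        costs = [base_cost * abs(y) for y in expected]
        # Stand-in training: cost-weighted signed mean defines the
        # decision function w * x.
        num = sum(x * y for x, y in labeled)
        num += sum(x * math.copysign(c, y)
                   for x, y, c in zip(unlabeled, expected, costs))
        den = base_cost * len(labeled) + sum(costs)
        w = num / den
        # Substeps: score the unlabeled points, then calibrate the
        # scores to class membership probabilities.
        probs = [sigmoid(w * x) for x in unlabeled]
        # Substeps: new expected labels from the calibrated
        # probabilities, interpolated with the previous iteration.
        new = [(1 - interp) * y + interp * (2 * p - 1)
               for y, p in zip(expected, probs)]
        done = max(abs(a - b) for a, b in zip(expected, new)) < tol
        expected = new
        if done:
            break
    return w, expected
```

With two labeled points and two unlabeled points on opposite sides of the boundary, the expected labels of the unlabeled points move toward +1 and -1 as the iterations proceed, which is the qualitative behavior the procedure above describes.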
In one embodiment of the present invention, a method for classifying documents is presented in Fig. 11. In use, at least one seed document having a known confidence level is received in step 1100, as well as unlabeled documents and at least one predetermined cost factor. The seed document and other items may be received from a memory of a computer, from a user, from a network connection, etc., and may be received after a request from the system performing the method. The at least one seed document may have a label indicative of whether the document is included in a designated category, may contain a list of keywords, or have any other attribute that may assist in classifying documents. Further, in step 1102 a transductive classifier is trained through iterative calculation using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value. A data point label prior probability for the labeled and unlabeled documents may also be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
Additionally, after at least some of the iterations, in step 1104 confidence scores are stored for the unlabeled documents, and identifiers of the unlabeled documents having the highest confidence scores are output in step 1106 to at least one of a user, another system, and another process. The identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. Also, confidence scores may be stored after each of the iterations, wherein an identifier of the unlabeled document having the highest confidence score after each iteration is output.
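The selection of the highest-confidence identifiers in steps 1104-1106 could be sketched as below; `top_documents`, the list-of-lists representation of the stored scores, and the default cutoff are assumptions made for illustration:

```python
def top_documents(confidence_history, doc_ids, k=3):
    # Confidence scores are stored after each iteration (step 1104);
    # rank the documents by the scores from the last stored iteration
    # and return the k highest-scoring identifiers (step 1106).
    final = confidence_history[-1]
    ranked = sorted(zip(doc_ids, final), key=lambda p: p[1], reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```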
One embodiment of the present invention is capable of discovering patterns that link the initial document to the remaining documents. The task of discovery is one area where this pattern discovery proves particularly valuable. For instance, in pre-trial legal discovery, a large number of documents have to be researched with regard to possible connections to the lawsuit at hand. The ultimate goal is to find the "smoking gun." In another example, a common task for inventors, patent examiners, as well as patent lawyers is to evaluate the novelty of a technology through prior art search. In particular, the task is to search all published patents and other publications and find documents within this set that might be related to the specific technology that is examined with regard to its novelty.
The task of discovery involves finding a document or a set of documents within a set of data. Given an initial document or concept, a user may want to discover documents that are related to the initial document or concept. However, the notion of relationship between the initial document or concept and the target documents, i.e. the documents that are to be discovered, is only well understood after the discovery has taken place. By learning from labeled and unlabeled documents, concepts, etc., the present invention can learn patterns and relationships between the initial document or documents and the target documents.

In another embodiment of the present invention, a method for analyzing documents associated with legal discovery is presented in Fig. 12. In use, documents associated with a legal matter are received in step 1200. Such documents may include electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. Additionally, a document classification technique is performed on the documents in step 1202. Further, identifiers of at least some of the documents are output in step 1204 based on the classification thereof. As an option, a representation of links between the documents may also be output.
The document classification technique may include any type of process, e.g. a transductive process, etc. For example, any inductive or transductive technique described above may be used. In a preferred approach, a transductive classifier is trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the documents associated with the legal matter. For each iteration of the calculations the cost factor is preferably adjusted as a function of an expected label value, and the trained classifier is used to classify the received documents. This process may further comprise receiving a data point label prior probability for the labeled and unlabeled documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability. Additionally, the document classification technique may include one or more of a support vector machine process and a maximum entropy discrimination process.
In yet another embodiment, a method for analyzing prior art documents is presented in Fig. 13. In use, a classifier is trained based on a search query in step 1300. A plurality of prior art documents are accessed in step 1302. Such prior art may include any information that has been made available to the public in any form before a given date. Such prior art may also or alternatively include any information that has not been made available to the public in any form before a given date. Illustrative prior art documents may be any type of documents, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, portions of a website, etc. Also, a document classification technique is performed on at least some of the prior art documents in step 1304 using the classifier, and identifiers of at least some of the prior art documents are output in step 1306 based on the classification thereof. The document classification technique may include one or more of any process, including a support vector machine process, a maximum entropy discrimination process, or any inductive or transductive technique described above. Also or alternatively, a representation of links between the documents may be output. In yet another embodiment, a relevance score of at least some of the prior art documents is output based on the classification thereof.
The search query may include at least a portion of a patent disclosure. Illustrative patent disclosures include a disclosure created by an inventor summarizing the invention, a provisional patent application, a nonprovisional patent application, a foreign patent or patent application, etc.
In one preferred approach, the search query includes at least a portion of a claim from a patent or patent application. In another approach, the search query includes at least a portion of an abstract of a patent or patent application. In a further approach, the search query includes at least a portion of a summary from a patent or patent application.
Fig. 27 illustrates a method for matching documents to claims. In step 2700, a classifier is trained based on at least one claim of a patent or patent application. Thus, one or more claims, or a portion thereof, may be used to train the classifier. In step 2702, a plurality of documents are accessed. Such documents may include prior art documents, documents describing potentially infringing or anticipating products, etc. In step 2704, a document classification technique is performed on at least some of the documents using the classifier. In step 2706, identifiers of at least some of the documents are output based on the classification thereof. A relevance score of at least some of the documents may also be output based on the classification thereof.
An embodiment of the present invention may be used for the classification of patent applications. In the United States, for example, patents and patent applications are currently classified by subject matter using the United States Patent Classification (USPC) system. This task is currently performed manually, and therefore is very expensive and time consuming. Such manual classification is also subject to human errors. Compounding the complexity of such a task is that the patent or patent application may be classified into multiple classes.
Fig. 28 depicts a method for classifying a patent application according to one embodiment. In step 2800, a classifier is trained based on a plurality of documents known to be in a particular patent classification. Such documents may typically be patents and patent applications (or portions thereof), but could also be summary sheets describing target subject matter of the particular patent classification. In step 2802, at least a portion of a patent or patent application is received. The portion may include the claims, summary, abstract, specification, title, etc. In step 2804, a document classification technique is performed on the at least the portion of the patent or patent application using the classifier. In step 2806, a classification of the patent or patent application is output. As an option, a user may manually verify the classification of some or all of the patent applications.
The document classification technique is preferably a yes/no classification technique. In other words, if the probability that the document is in the proper class is above a threshold, the decision is yes, the document belongs in this class. If the probability that the document is in the proper class is below the threshold, the decision is no, the document does not belong in this class.
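Applied once per class, this yes/no test lets a document fall into several classes (or none), matching the multi-class scenario described above. A minimal sketch, where `assign_classes` and the example class symbols are illustrative assumptions:

```python
def assign_classes(class_probs, threshold=0.5):
    # One yes/no decision per class: keep every class whose
    # membership probability meets the threshold.  A document may
    # therefore be assigned to several classes, or to none.
    return [c for c, p in class_probs.items() if p >= threshold]
```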
Fig. 29 depicts yet another method for classifying a patent application. In step 2900, a document classification technique is performed on at least the portion of a patent or patent application using a classifier that was trained based on at least one document associated with a particular patent classification. Again, the document classification technique is preferably a yes/no classification technique. In step 2902, a classification of the patent or patent application is output.
In either of the methods shown in Figs. 28 and 29, the respective method may be repeated using a different classifier that was trained based on a plurality of documents known to be in a different patent classification. Officially, classification of a patent should be based on the claims. However, it may also be desirable to perform matching between any IP-related content and any other IP-related content. As an example, one approach uses the Description of a patent to train, and classifies an application based on its Claims. Another approach uses the Description and Claims to train, and classifies based on the Abstract. In particularly preferred approaches, whatever portion of a patent or application is used to train, that same type of content is used when classifying, i.e., if the system is trained on claims, the classification is based on claims.

The document classification technique may include any type of process, e.g. a transductive process, etc. For example, any inductive or transductive technique described above may be used. In a preferred approach, the classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the prior art documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and the trained classifier may be used to classify the prior art documents. A data point label prior probability for the seed document and prior art documents may also be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability. The seed document may be any document, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, a website, a patent disclosure, etc.
In one approach, Fig. 14 describes one embodiment of the present invention. In step 1401, a set of data is read. The discovery of documents within this set that are relevant to the user is desired. In step 1402 an initial seed document or documents are labeled. The documents may be any type of documents, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, a website, etc. It is also possible to seed the transduction process with a string of different key words or a document provided by the user. In step 1406 a transductive classifier is trained using the labeled data as well as the set of unlabeled data in the given set. At each label induction step during the iterative transduction process, the confidence scores determined during label induction are stored. Once training is finished, the documents that achieved high confidence scores at the label induction steps are displayed in step 1408 for the user. These documents with high confidence scores represent documents relevant to the user for purposes of discovery. The display may be in chronological order of the label induction steps, starting with the initial seed document and ending with the final set of documents discovered at the last label induction step.
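The chronological display of step 1408 might be sketched as below, assuming the per-step confidence scores have been stored as a list of score lists; `discovery_trace` and the threshold are illustrative assumptions:

```python
def discovery_trace(confidence_history, doc_ids, threshold=0.8):
    # For each label induction step, list the documents whose
    # confidence first reached the threshold at that step, so the
    # result reads chronologically from the seed outward.
    seen, trace = set(), []
    for step, scores in enumerate(confidence_history):
        newly = [d for d, s in zip(doc_ids, scores)
                 if s >= threshold and d not in seen]
        seen.update(newly)
        trace.append((step, newly))
    return trace
```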
Another embodiment of the present invention involves data cleanup and accurate classification, for example in conjunction with the automation of business processes. The cleanup and classification technique may include any type of process, e.g. a transductive process, etc. For example, any inductive or transductive technique described above may be used. In a preferred approach, the keys of the entries in the database are utilized as labels associated with some confidence level according to the expected cleanliness of the database. The labels together with the associated confidence level, i.e. the expected labels, are then used to train a transductive classifier that corrects the labels (keys) in order to achieve a more consistent organization of the data in the database. For example, invoices have to be first classified according to the company or person that originated the invoice in order to enable automatic data extraction, e.g. the determination of total dollar amount, purchase order number, product amount, shipping address, etc. Commonly, training examples are needed to set up an automatic classification system. However, training examples provided by the customer often contain misclassified documents or other noise, e.g. fax cover sheets, that have to be identified and removed prior to training the automatic classification system in order to obtain accurate classification. In another example, in the area of patient records, it is useful to detect inconsistencies between the report written by the physician and the diagnosis.
In another example, it is known that the Patent Office undergoes a continuous reclassification process, in which they (1) evaluate an existing branch of their taxonomy for confusion, (2) restructure that taxonomy to evenly distribute overly congested nodes, and (3) reclassify existing patents into the new structure. The transductive learning methods presented herein may be used by the Patent Office, and the companies to which they outsource this work, to re-evaluate their taxonomy and assist them in (1) building a new taxonomy for a given main classification, and (2) reclassifying existing patents. Transduction learns from labeled and unlabeled data, whereby the transition from labeled to unlabeled data is fluent. At one end of the spectrum are labeled data with perfect prior knowledge, i.e. the given labels are correct with no exceptions. At the other end are unlabeled data where no prior knowledge is given. Organized data with some level of noise constitute mislabeled data and are located somewhere on the spectrum between these two extremes: the labels given by the organization of the data can be trusted to be correct to some extent, but not fully. Accordingly, transduction can be utilized to clean up the existing organization of data by assuming a certain level of mistakes within the given organization of the data and interpreting these as uncertainties in the prior knowledge of label assignments.
In one embodiment, a method for cleaning up data is presented in Fig. 15. In use, a plurality of labeled data items are received in step 1500, and subsets of the data items for each of a plurality of categories are selected in step 1502. Additionally, an uncertainty for the data items in each subset is set in step 1504 to about zero, and an uncertainty for the data items not in the subsets is set in step 1506 to a predefined value that is not about zero. Further, a transductive classifier is trained in step 1508 through iterative calculation using the uncertainties, the data items in the subsets, and the data items not in the subsets as training examples, and the trained classifier is applied to each of the labeled data items in step 1510 to classify each of the data items. Also, a classification of the input data items, or derivative thereof, is output in step 1512 to at least one of a user, another system, and another process.
Further, the subsets may be selected at random, or may be selected and verified by a user. The label of at least some of the data items may be changed based on the classification. Also, identifiers of data items having a confidence level below a predefined threshold after classification thereof may be output to a user. The identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
In one embodiment of the present invention, as illustrated in Fig. 16, two choices for starting a cleanup process are presented to the user at step 1600. One choice is fully automatic cleanup at step 1602, where for each concept or category a specified number of documents are randomly selected and assumed to be correctly organized. Alternatively, at step 1604 a number of documents can be flagged for manual review and verification that the label assignments for each concept or category are correct. An estimate of the noise level in the data is received at step 1606. The transductive classifier is trained in step 1610 using the verified (manually verified or randomly selected) data and the unverified data from step 1608. Once training is finished, the documents are reorganized according to the new labels. Documents with confidence levels in their label assignments below a specified threshold are displayed to the user for manual review in step 1612. Documents with confidence levels in their label assignments above the specified threshold are automatically corrected according to the transductive label assignments in step 1614.
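The uncertainty assignment of steps 1502-1506 and 1602-1606 might look like the following sketch, assuming uncertainties expressed in [0, 1] and a fixed noise estimate; `assign_uncertainties` and all parameter values are illustrative assumptions:

```python
import random

def assign_uncertainties(items_by_category, verified_per_category=2,
                         noise_level=0.1, seed=0):
    # Pick a verified subset per category (automatic mode would pick
    # at random, as here; manual mode would use user-flagged items).
    # Verified items get uncertainty ~0; the rest get an uncertainty
    # reflecting the estimated noise level of the data.
    rng = random.Random(seed)
    uncertainty = {}
    for category, items in items_by_category.items():
        verified = set(rng.sample(items,
                                  min(verified_per_category, len(items))))
        for item in items:
            uncertainty[item] = 0.0 if item in verified else noise_level
    return uncertainty
```

The resulting map would then feed the transductive training of step 1508/1610 as label prior information.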
In another embodiment, a method for managing medical records is presented in Fig. 17. In use, a classifier is trained based on a medical diagnosis in step 1700, and a plurality of medical records is accessed in step 1702. Additionally, a document classification technique is performed on the medical records in step 1704 using the classifier, and an identifier of at least one of the medical records having a low probability of being associated with the medical diagnosis is output in step 1706. The document classification technique may include any type of process, e.g. a transductive process, etc., and may include one or more of any inductive or transductive technique described above, including a support vector machine process, a maximum entropy discrimination process, etc.
In one embodiment, the classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the medical records, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and the trained classifier may be used to classify the medical records. A data point label prior probability for the seed document and medical records may also be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.

Another embodiment of the present invention accounts for dynamic, shifting classification concepts. For example, in forms processing applications, documents are classified using the layout information and/or the content information of the documents for further processing. In many applications the documents are not static but evolve over time. For example, the content and/or layout of a document may change owing to new legislation. Transductive classification adapts to these changes automatically, yielding the same or comparable classification accuracy despite the drifting classification concepts. This is in contrast to rule-based systems or inductive classification methods that, without manual adjustments, will suffer declining classification accuracy owing to the concept drift. One example is invoice processing, where traditionally inductive learning or rule-based systems that utilize invoice layout are used. Under these traditional systems, if a change in the layout occurs, the systems have to be manually reconfigured by either labeling new training data or determining new rules.
However, the use of transduction makes the manual reconfiguration unnecessary by automatically adapting to the small changes in layout of the invoices. In another example, transductive classification may be applied to the analysis of customer complaints in order to monitor the changing nature of such complaints. For example, a company can automatically link product changes with customer complaints.
Transduction may also be used in the classification of news articles. For example, news articles on the war on terror, from articles about the terrorist attacks on September 11, 2001 through the war in Afghanistan to news stories about the situation in today's Iraq, can be automatically identified using transduction.
In yet another example, the classification of organisms (alpha taxonomy) can change over time through evolution, as new species of organisms arise and other species become extinct. As this example shows, a classification schema or taxonomy can be dynamic, with classification concepts shifting or changing over time.
By using the incoming data that have to be classified as unlabeled data, transduction can recognize shifting classification concepts, and therefore dynamically adapt to the evolving classification schema. For example, Fig. 18 shows an embodiment of the invention using transduction given drifting classification concepts. Document set Di enters the system at time ti, as shown in step 1802. At step 1804 a transductive classifier Ci is trained using labeled data and the unlabeled data accumulated so far, and in step 1806 the documents in set Di are classified. If the manual mode is used, documents with a confidence level below a user-supplied threshold as determined in step 1808 are presented to the user for manual review in step 1810. As shown in step 1812, in the automatic mode a document with a confidence level below the threshold triggers the creation of a new category that is added to the system, and the document is then assigned to the new category. Documents with a confidence level above the chosen threshold are classified into the current categories 1 to N in steps 1820A-B. All documents that have been classified into the current categories prior to time ti are reclassified by the classifier Ci in step 1822, and all documents that are no longer classified into their previously assigned categories are moved to new categories in steps 1824 and 1826.
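The confidence-based routing of Fig. 18 (steps 1808-1812 and 1820A-B) can be sketched as follows; the function name, the category-string convention for newly created categories, and the default threshold are assumptions made for illustration:

```python
def route_documents(scored_docs, threshold=0.6, mode="automatic"):
    # scored_docs maps each document to its best existing category
    # and the classifier's confidence.  Confident documents go to
    # that category; low-confidence documents are flagged for review
    # (manual mode) or seed a new category (automatic mode).
    routed, flagged = {}, []
    for doc, (category, confidence) in scored_docs.items():
        if confidence >= threshold:
            routed[doc] = category
        elif mode == "manual":
            flagged.append(doc)
        else:
            routed[doc] = "new:" + doc  # fresh category per low-confidence doc
    return routed, flagged
```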
In yet another embodiment, a method for adapting to a shift in document content is presented in Fig. 19. Document content may include, but is not limited to, graphical content, textual content, layout, numbering, etc. Examples of shift may include temporal shift, style shift (where 2 or more people work on one or more documents), shift in process applied, shift in layout, etc. In step 1900, at least one labeled seed document is received, as well as unlabeled documents and at least one predetermined cost factor. The documents may include, but are not limited to, customer complaints, invoices, form documents, receipts, etc. Additionally, a transductive classifier is trained in step 1902 using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents. Also, in step 1904 the unlabeled documents having a confidence level above a predefined threshold are classified into a plurality of categories using the classifier, and at least some of the categorized documents are reclassified in step 1906 into the categories using the classifier. Further, identifiers of the categorized documents are output in step 1908 to at least one of a user, another system, and another process. The identifiers may be electronic copies of the document themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. Further, product changes may be linked with customer complaints, etc. In addition, an unlabeled document having a confidence level below the predefined threshold may be moved into one or more new categories. Also, the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and using the trained classifier to classify the unlabeled documents. 
Further, a data point label prior probability for the seed document and unlabeled documents may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
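The per-iteration cost adjustment described above can be illustrated as scaling each unlabeled example's cost factor by the magnitude of its current expected label, so that points the classifier is still undecided about contribute little to the next training pass. The function name and the numeric values below are illustrative, not part of the disclosure:

```python
def scaled_costs(base_cost, expected_labels):
    """Scale the unlabeled-data cost factor by |E[y]|: confidently labeled
    points keep (nearly) the full cost, while maximally uncertain points
    (E[y] near 0) are effectively down-weighted during training."""
    return [base_cost * abs(ey) for ey in expected_labels]

# Expected labels lie in [-1, +1]; 0.0 means maximally uncertain.
costs = scaled_costs(10.0, [1.0, -0.8, 0.1, 0.0])
print(costs)  # [10.0, 8.0, 1.0, 0.0]
```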
In another embodiment, a method for adapting a patent classification to a shift in document content is presented in Fig. 20. In step 2000, at least one labeled seed document is received, as well as unlabeled documents. The unlabeled documents may include any types of documents, e.g. patent applications, legal filings, information disclosure forms, document amendments, etc. The seed document(s) may include patent(s), patent application(s), etc. A transductive classifier is trained in step 2002 using the at least one seed document and the unlabeled documents, and the unlabeled documents having a confidence level above a predefined threshold are classified into a plurality of existing categories using the classifier. The classifier may be any type of classifier, e.g. a transductive classifier, etc., and the document classification technique may be any technique, e.g. a support vector machine process, a maximum entropy discrimination process, etc. For example, any inductive or transductive technique described above may be used.
Also, in step 2004 the unlabeled documents having a confidence level below the predefined threshold are classified into at least one new category using the classifier, and at least some of the categorized documents are reclassified in step 2006 into the existing categories and the at least one new category using the classifier. Further, identifiers of the categorized documents are output in step 2008 to at least one of a user, another system, and another process. Also, the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and the trained classifier may be used to classify the documents. Further, a data point label prior probability for the seed document and unlabeled documents may be received, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
Yet another embodiment of the present invention accounts for document drift in the field of document separation. One use case for document separation involves the processing of mortgage documents. Loan folders consisting of a sequence of different loan documents, e.g. loan applications, approvals, requests, amounts, etc., are scanned, and the different documents within the sequence of images have to be determined before further processing. The documents used are not static but can change over time. For example, tax forms used within a loan folder can change over time owing to legislation changes.
Document separation solves the problem of finding document or subdocument boundaries in a sequence of images. Common examples of devices that produce a sequence of images are digital scanners and Multi Functional Peripherals (MFPs). As in the case of classification, transduction can be utilized in document separation in order to handle the drift of documents and their boundaries over time. Static separation systems, such as rule-based systems or systems based on inductive learning solutions, cannot adapt automatically to drifting separation concepts. The performance of these static separation systems degrades over time whenever a drift occurs. In order to keep the performance at its initial level, one either has to manually adapt the rules (in the case of a rule-based system) or has to manually label new documents and relearn the system (in the case of an inductive learning solution). Either way is expensive in both time and cost. Applying transduction to document separation allows the development of a system that automatically adapts to the drift in the separation concepts.
In one embodiment, a method for separating documents is presented in Fig. 21. In step 2100, labeled data are received, and in step 2102 a sequence of unlabeled documents is received. Such data and documents may include legal discovery documents, office actions, web page data, attorney-client correspondence, etc. In addition, in step 2104 probabilistic classification rules are adapted using transduction based on the labeled data and the unlabeled documents, and in step 2106 weights used for document separation are updated according to the probabilistic classification rules. Also, in step 2108 locations of separations in the sequence of documents are determined, and in step 2110 indicators of the determined locations of the separations in the sequence are output to at least one of a user, another system, and another process. The indicators may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. Further, in step 2112 the documents are flagged with codes, the codes correlating to the indicators.
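The boundary-finding step (2108) can be sketched as follows, assuming the probabilistic classification rules have already produced a boundary probability for each adjacent pair of pages; the `separate` helper and the threshold are hypothetical illustrations rather than the disclosed inference algorithm:

```python
def separate(pages, boundary_probs, threshold=0.5):
    """Split a scanned page sequence into documents, starting a new document
    wherever the rule-derived boundary probability between page i and page
    i+1 exceeds the threshold. The probabilities stand in for the weights
    of the probabilistic network updated in step 2106."""
    docs, current = [], [pages[0]]
    for page, p in zip(pages[1:], boundary_probs):
        if p > threshold:
            docs.append(current)   # boundary found: close the current document
            current = [page]
        else:
            current.append(page)   # same document continues
    docs.append(current)
    return docs

# Five scanned pages; boundary_probs[i] is P(new document starts at pages[i+1]).
print(separate(["p1", "p2", "p3", "p4", "p5"], [0.1, 0.9, 0.2, 0.8]))
# [['p1', 'p2'], ['p3', 'p4'], ['p5']]
```

In the transductive system, the boundary probabilities themselves would be re-estimated as new unlabeled page sequences arrive, which is how the separation adapts to drift.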
Fig. 22 shows an implementation of the classification method and apparatus of the present invention used in association with document separation. Automatic document separation is used for reducing the manual effort involved in separating and identifying documents after digital scanning. One such document separation method combines classification rules to automatically separate sequences of pages, using inference algorithms to deduce the most likely separation from all of the available information, using the classification methods described herein. In one embodiment of the present invention as shown in Fig. 22, the classification method of transductive MED of the present invention is employed in document separation. More particularly, document pages 2200 are inserted into a digital scanner 2202 or MFP and are converted into a sequence of digital images 2204. The document pages may be pages from any type of document, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, a website, etc. The sequence of digital images is input at step 2206 to dynamically adapt probabilistic classification rules using transduction. Step 2206 utilizes the sequence of images 2204 as unlabeled data and labeled data 2208. At step 2210 the weights in the probabilistic network are updated and used for automatic document separation according to the dynamically adapted classification rules. The output step 2212 dynamically adapts the automatic insertion of separation images, such that the sequence of digitized pages 2214 is interleaved with software-generated images of separator sheets 2216, which step 2212 automatically inserts into the image sequence. In one embodiment of the invention, the software-generated separator pages 2216 may also indicate the type of document that immediately follows or precedes the separator page 2216.
The system described here automatically adapts to the drifting separation concepts of the documents that occur over time, without suffering from the decline in separation accuracy that static systems, such as rule-based or inductive machine learning based solutions, would experience. A common example of drifting separation or classification concepts in form processing applications is, as mentioned earlier, changes to documents owing to new legislation.
Additionally, the system as shown in Fig. 22 may be modified to a system as shown in Fig. 23, where the pages 2300 are inserted into a digital scanner 2302 or MFP and converted into a sequence of digital images 2304. The sequence of digital images is input at step 2306 to dynamically adapt probabilistic classification rules using transduction. Step 2306 utilizes the sequence of images 2304 as unlabeled data and labeled data 2308. Step 2310 updates the weights in the probabilistic network used for automatic document separation according to the dynamically adapted classification rules employed. In step 2312, instead of inserting separator sheet images as described in Fig. 22, the system dynamically adapts the automated insertion of separation information and flags the document images 2314 with a coded description. Thus the document page images can be input into an image database 2316 and the documents can be accessed by the software identifiers.
Yet another embodiment of the present invention is able to perform face recognition using transduction. As mentioned above, the use of transduction has many advantages, for example, the need for only a relatively small number of training examples, the ability to use unlabeled examples in training, etc. By making use of the aforementioned advantages, transductive face recognition may be implemented for criminal detection.
For example, the Department of Homeland Security must ensure that terrorists are not allowed onto commercial airliners. Part of an airport's screening process may be to take a picture of each passenger at the airport security checkpoint and attempt to recognize that person. The system could initially be trained using a small number of examples from the limited photographs available of possible terrorists. There may also be more unlabeled photographs of the same terrorist available in other law-enforcement databases that may also be used in training. Thus, a transductive trainer would take advantage of not only the initially sparse data to create a functional face-recognition system but would also use unlabeled examples from other sources to increase performance. After processing the photograph taken at the airport security checkpoint, the transductive system would be able to recognize the person in question more accurately than a comparable inductive system.
In yet another embodiment, a method for face recognition is presented in Fig. 24. In step 2400, at least one labeled seed image of a face is received, the seed image having a known confidence level. The at least one seed image may have a label indicative of whether the image is included in a designated category. Additionally, in step 2400 unlabeled images are received, e.g. from a police department, government agency, lost child database, airport security, or any other source, and at least one predetermined cost factor is received. Also, in step 2402 a transductive classifier is trained through iterative calculation using the at least one predetermined cost factor, the at least one seed image, and the unlabeled images, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value. After at least some of the iterations, in step 2404 confidence scores are stored for the unlabeled images.
Further, in step 2406 identifiers of the unlabeled images having the highest confidence scores are output to at least one of a user, another system, and another process. The identifiers may be electronic copies of the images themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the images, etc. Also, confidence scores may be stored after each of the iterations, wherein an identifier of the unlabeled image having the highest confidence score after each iteration is output. Additionally, a data point label prior probability for the labeled and unlabeled images may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability. Further, a third unlabeled image of a face, e.g. from the above airport security example, may be received, the third unlabeled image may be compared to at least some of the images having the highest confidence scores, and an identifier of the third unlabeled image may be output if a confidence that the face in the third unlabeled image is the same as the face in the seed image is sufficiently high.
Yet another embodiment of the present invention enables a user to improve search results by providing feedback to the document discovery system. For example, when performing a search on an internet search engine, patent or patent application search product, etc., users may get a multitude of results in response to their search query. An embodiment of the present invention enables the user to review the suggested results from the search engine and inform the engine of the relevance of one or more of the retrieved results, e.g. "close, but not exactly what I wanted," "definitely not," etc. As the user provides feedback to the engine, better results are prioritized for the user to review.
In one embodiment, a method for document searching is presented in Fig. 25. In step 2500, a search query is received. The search query may be any type of query, including case-sensitive queries, Boolean queries, approximate match queries, structured queries, etc. In step 2502, documents based on the search query are retrieved. Additionally, in step 2504 the documents are output, and in step 2506 user-entered labels for at least some of the documents are received, the labels being indicative of a relevance of the document to the search query. For example, the user may indicate whether a particular result returned from the query is relevant or not. Also, in step 2508 a classifier is trained based on the search query and the user-entered labels, and in step 2510 a document classification technique is performed on the documents using the classifier for reclassifying the documents. Further, in step 2512 identifiers of at least some of the documents are output based on the classification thereof. The identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. The reclassified documents may also be output, with those documents having the highest confidence being output first.
The document classification technique may include any type of process, e.g. a transductive process, a support vector machine process, a maximum entropy discrimination process, etc. Any inductive or transductive technique described above may be used. In a preferred approach, the classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the search query, and the documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and the trained classifier may be used to classify the documents. In addition, a data point label prior probability for the search query and documents may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
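A minimal sketch of the feedback loop described above, assuming a hypothetical `rerank` helper that substitutes term overlap for the retrained classifier; only the prioritization of results after user-entered labels is illustrated:

```python
def rerank(results, feedback, query_terms):
    """Re-order search results after user relevance feedback. Documents
    sharing terms with the query and with positively labeled results score
    higher; overlap with negatively labeled results scores lower. A toy
    stand-in for retraining the classifier on the query plus user labels."""
    positive, negative = set(), set()
    for doc_id, relevant in feedback.items():
        (positive if relevant else negative).update(results[doc_id])

    def score(terms):
        t = set(terms)
        return (len(t & set(query_terms))
                + len(t & positive)
                - len(t & negative))

    return sorted(results, key=lambda d: score(results[d]), reverse=True)

results = {
    "d1": ["patent", "classifier", "transduction"],
    "d2": ["patent", "kitchen", "recipe"],
    "d3": ["classifier", "training", "labels"],
}
# The user marked d1 relevant and d2 irrelevant.
order = rerank(results, feedback={"d1": True, "d2": False}, query_terms=["classifier"])
print(order)  # ['d1', 'd3', 'd2'] -- the rejected d2 drops to last
```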
A further embodiment of the present invention may be used for improving ICR/OCR and speech recognition. For example, many speech recognition programs and systems require the operator to repeat a number of words to train the system. The present invention can initially monitor the voice of a user for a preset period of time to gather "unclassified" content, e.g., by listening in to phone conversations. As a result, when the user starts training the recognition system, the system employs transductive learning, utilizing the monitored speech to assist in building a memory model.
In yet another embodiment, a method for verifying an association of an invoice with an entity is presented in Fig. 26. In step 2600, a classifier is trained based on an invoice format associated with a first entity. The invoice format may refer to the physical layout of markings on the invoice, characteristics such as keywords, invoice number, client name, etc. on the invoice, or both. In addition, in step 2602 a plurality of invoices labeled as being associated with at least one of the first entity and other entities are accessed, and in step 2604 a document classification technique is performed on the invoices using the classifier. For example, any inductive or transductive technique described above may be used as the document classification technique, e.g. a transductive process, a support vector machine process, a maximum entropy discrimination process, etc. Also, in step 2606 an identifier of at least one of the invoices having a high probability of not being associated with the first entity is output.
Further, the classifier may be any type of classifier, for example a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the invoices, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and the trained classifier may be used to classify the invoices. Also, a data point label prior probability for the seed document and invoices may be received, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
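The verification step of Fig. 26 can be sketched as scoring each labeled invoice against the first entity's format and flagging low scorers. The keyword-fraction "probability" below is a hypothetical stand-in for the trained classifier's output in step 2604; the entity names and keywords are invented for illustration:

```python
def flag_mismatches(invoices, entity_keywords, min_prob=0.5):
    """Flag invoices unlikely to belong to the first entity. The fraction
    of the entity's format keywords present in the invoice is a toy proxy
    for the classifier's association probability (step 2606)."""
    flagged = []
    for inv_id, terms in invoices.items():
        prob = sum(1 for k in entity_keywords if k in terms) / len(entity_keywords)
        if prob < min_prob:
            flagged.append(inv_id)
    return flagged

entity_keywords = ["ACME", "NET30", "PO"]
invoices = {
    "inv-001": ["ACME", "NET30", "PO", "total"],  # matches the entity's format
    "inv-002": ["Globex", "wire", "total"],       # likely another entity's invoice
}
print(flag_mismatches(invoices, entity_keywords))  # ['inv-002']
```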
One of the benefits afforded by the embodiments depicted herein is the stability of the transductive algorithm. This stability is achieved by scaling the cost factors and adjusting the label prior probability. For example, in one embodiment a transductive classifier is trained through iterative calculation using at least one cost factor, the labeled data points, and the unlabeled data points as training examples. For each iteration of the calculations, the unlabeled data point cost factor is adjusted as a function of an expected label value. Additionally, for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
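A toy sketch of the stabilized loop, with a one-dimensional threshold rule standing in for the MED training step; the function and all numeric values are illustrative assumptions, showing only how cost scaling and label-prior adjustment interact across iterations:

```python
def transductive_iterations(labeled, unlabeled, base_cost=1.0, iters=5):
    """Each pass (1) scales the unlabeled cost factors by |expected label|
    and (2) nudges each label prior toward the current class-membership
    estimate. A midpoint threshold replaces the MED training step."""
    priors = [0.5] * len(unlabeled)           # P(y=+1), maximally uncertain
    expected = [2 * p - 1 for p in priors]    # E[y] in [-1, +1]
    costs = [0.0] * len(unlabeled)
    for _ in range(iters):
        # (1) Cost scaling: undecided points (E[y] ~ 0) get near-zero cost.
        costs = [base_cost * abs(ey) for ey in expected]
        # Stand-in "classifier": midpoint between the labeled class means.
        pos = [x for x, y in labeled if y > 0]
        neg = [x for x, y in labeled if y < 0]
        boundary = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        member = [1.0 if x > boundary else 0.0 for x in unlabeled]
        # (2) Label-prior adjustment toward the class-membership estimate.
        priors = [0.5 * p + 0.5 * m for p, m in zip(priors, member)]
        expected = [2 * p - 1 for p in priors]
    return expected, costs

labeled = [(0.0, -1), (10.0, +1)]
exp, costs = transductive_iterations(labeled, unlabeled=[1.0, 9.0])
print([round(e, 2) for e in exp])  # [-0.97, 0.97]
print(costs)                       # [0.9375, 0.9375]
```

The expected labels drift smoothly toward -1 and +1 rather than flipping in a single step, which is the stabilizing effect the combined cost scaling and prior adjustment are designed to produce.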
The workstation may have resident thereon an operating system such as the Microsoft Windows® Operating System (OS), a MAC OS, or UNIX operating system. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned. A preferred embodiment may be written using JAVA, XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology. Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.
The above application uses transductive learning to overcome the problem of very sparse data sets which plague inductive face-recognition systems. This aspect of transductive learning is not limited to this application and may be used to solve other machine-learning problems that arise from sparse data.
Those skilled in the art could devise variations that are within the scope and spirit of the various embodiments of the invention disclosed herein. Further, the various features of the embodiments disclosed herein can be used alone, or in varying combinations with each other and are not intended to be limited to the specific combination described herein. Thus, the scope of the claims is not to be limited by the illustrated embodiments.

Claims

WHAT IS CLAIMED IS:
1. In a computer-based system, a method for classification of data comprising:
receiving labeled data points, each of said labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category;
receiving unlabeled data points;
receiving at least one predetermined cost factor of the labeled data points and unlabeled data points;
training a transductive classifier using Maximum Entropy Discrimination (MED) through iterative calculation using said at least one cost factor and the labeled data points and the unlabeled data points as training examples, wherein for each iteration of the calculations the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point label prior probability is adjusted according to an estimate of a data point class membership probability;
applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points; and
outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process.
2. The method of claim 1 wherein said function is the absolute value of the expected label of a data point.
3. The method of claim 1 further comprising the step of receiving prior probability information of labeled and unlabeled data points.
4. The method of claim 3 wherein said transductive classifier learns using prior probability information of the labeled and unlabeled data.
5. The method of claim 1 comprising the further step of determining the decision function with minimal KL divergence using a Gaussian prior for the decision function parameters given the included and excluded training examples utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
6. The method of claim 1 comprising the further step of determining the decision function with minimal KL divergence using a multinomial prior distribution for the decision function parameters.
7. The method of claim 1 wherein the iterative step of training a transductive classifier is repeated until the convergence of data values is reached.
8. The method of claim 7 wherein convergence is reached when the change of the decision function of the transductive classifier falls below a predetermined threshold value.
9. The method of claim 7 wherein convergence is reached when the change of the determined expected label value falls below a predetermined threshold value.
10. The method of claim 1 wherein the label of the included training example has a value of +1 and the label of the excluded training example has a value of -1.
11. The method of claim 1 wherein the label of the included example is mapped to a first numeric value and the label of the excluded example to a second numeric value.
12. The method of claim 1 further comprising: storing the labeled data points in a memory of a computer; storing the unlabeled data points in a memory of a computer; storing the input data points in a memory of a computer; and storing the at least one predetermined cost factor of the labeled data points and unlabeled data points in a memory of a computer.
13. A method for classification of data comprising: providing computer executable program code to be deployed to and executed on a computer system, the program code comprising instructions for:
accessing stored labeled data points in a memory of a computer, each of said labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category;
accessing unlabeled data points from a memory of a computer;
accessing at least one predetermined cost factor of the labeled data points and unlabeled data points from a memory of a computer;
training a Maximum Entropy Discrimination (MED) transductive classifier through iterative calculation using said at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples, wherein for each iteration of the calculation the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point prior probability is adjusted according to an estimate of a data point class membership probability;
applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points; and
outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process.
14. The method of claim 13 wherein said function is the absolute value of the expected label of a data point.
15. The method of claim 13 further comprising the step of accessing prior probability information of labeled and unlabeled data points stored in a memory of a computer.
16. The method of claim 15 wherein for each iteration, the prior probability information is adjusted according to an estimate of a data point class membership probability.
17. The method of claim 13 further comprising instructions for determining the decision function with minimal KL divergence to the prior distribution of the decision function parameters given the included and excluded training examples utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
18. The method of claim 13 wherein the iterative step of training a transductive classifier is repeated until convergence of data values is reached.
19. The method of claim 18 wherein convergence is reached when the change of the decision function of the transductive classification falls below a predetermined threshold value.
20. The method of claim 18 wherein convergence is reached when the change of the determined expected label value falls below a predetermined threshold value.
21. The method of claim 13 wherein the label of the included training example has a value of +1 and the label of the excluded training example has a value of -1.
22. The method of claim 13 wherein the label of the included example is mapped to a first numeric value and the label of the excluded example to a second numeric value.
23. A data processing apparatus comprising:
at least one memory for storing: (i) labeled data points, wherein each of said labeled data points has at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; (ii) unlabeled data points; and (iii) at least one predetermined cost factor of the labeled data points and unlabeled data points; and
a transductive classifier trainer to iteratively teach the transductive classifier using transductive Maximum Entropy Discrimination (MED) using said at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples, wherein at each iteration of the MED calculation the cost factor of the unlabeled data point is adjusted as a function of an expected label value and a data point label prior probability is adjusted according to an estimate of a data point class membership probability;
wherein a classifier trained by the transductive classifier trainer is used to classify at least one of the unlabeled data points, the labeled data points, and input data points;
wherein a classification of the classified data points, or derivative thereof, is output to at least one of a user, another system, and another process.
24. The apparatus of claim 23 wherein said function is the absolute value of the expected label of a data point.
25. The apparatus of claim 23 wherein said memory also stores prior probability information of labeled and unlabeled data points.
26. The apparatus of claim 25 wherein at each iteration of the MED calculation, the prior probability information is adjusted according to an estimate of a data point class membership probability.
27. The apparatus of claim 23 further comprising a processor for determining the decision function with minimal KL divergence to the prior distribution of the decision function parameters given the included and excluded training examples utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
28. The apparatus of claim 23 further comprising a means for determining the convergence of data values, and terminating calculations upon determination of convergence.
29. The apparatus of claim 28 wherein convergence is reached when the change of the decision function of the transductive classifier calculation falls below a predetermined threshold value.
30. The apparatus of claim 28 wherein convergence is reached when the change of the determined expected label values falls below a predetermined threshold value.
31. The apparatus of claim 23 wherein the label of the included training example has a value of +1 and the label of the excluded training example has a value of -1.
32. The apparatus of claim 23 wherein the label of the included example is mapped to a first numeric value and the label of the excluded example to a second numeric value.
33. An article of manufacture comprising a program storage medium readable by a computer, the medium tangibly embodying one or more programs of instructions executable by a computer to perform a method of data classification comprising:
receiving labeled data points, each of said labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category;
receiving unlabeled data points;
receiving at least one predetermined cost factor of the labeled data points and unlabeled data points;
training a transductive classifier with iterative Maximum Entropy Discrimination (MED) calculation using said at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples, wherein at each iteration of the MED calculation the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point prior probability is adjusted according to an estimate of a data point class membership probability;
applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points; and
outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process.
34. The article of manufacture of claim 33 wherein said function is the absolute value of the expected label of a data point.
35. The article of manufacture of claim 33 further comprising the step of storing prior probability information of labeled and unlabeled data points in a memory of a computer.
36. The article of manufacture of claim 35 wherein at each iteration of the MED calculation, the prior probability information is adjusted according to an estimate of a data point class membership probability.
37. The article of manufacture of claim 33 comprising the further step of determining the decision function with minimal KL divergence to the prior distribution of the decision function parameters given the included and excluded training examples utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
38. The article of manufacture of claim 33 wherein the iterative step of training a transductive classifier is repeated until the convergence of data values is reached.
39. The article of manufacture of claim 38 wherein convergence is reached when the change of the decision function of the transductive classification falls below a predetermined threshold value.
40. The article of manufacture of claim 38 wherein convergence is reached when the change of the determined expected label value falls below a predetermined threshold value.
41. The article of manufacture of claim 33 wherein the label of the included training example has a value of +1 and the label of the excluded training example has a value of -1.
42. The article of manufacture of claim 33 wherein the label of the included example is mapped to a first numeric value and the label of the excluded example to a second numeric value.
43. In a computer-based system, a method for classification of unlabeled data comprising: receiving labeled data points, each of said labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; receiving labeled and unlabeled data points; receiving prior label probability information of labeled data points and unlabeled data points; receiving at least one predetermined cost factor of the labeled data points and unlabeled data points; determining the expected labels for each labeled and unlabeled data point according to the label prior probability of the data point; repeating the following substeps until substantial convergence of data values: • generating a scaled cost value for each unlabeled data point proportional to the absolute value of the data point's expected label;
• training a classifier by determining the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included training and excluded training examples utilizing the labeled as well as the unlabeled data as training examples according to their expected label;
• determining the classification scores of the labeled and unlabeled data points using the trained classifier;
• calibrating the output of the trained classifier to class membership probability;
• updating the label prior probabilities of the unlabeled data points according to the determined class membership probabilities; • determining the label and margin probability distributions using
Maximum Entropy Discrimination (MED) using the updated label prior probabilities and the previously determined classification scores; • computing new expected labels using the previously determined label probability distribution; and • updating expected labels for each data point by interpolating the new expected labels with the expected label of previous iteration; and outputting a classification of the input data points, or derivative thereof, to at least one of a user, another system, and another process.
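The iterative procedure of claim 43 can be illustrated with a toy self-training loop. This is a deliberately simplified sketch, not the claimed MED method: a weighted least-squares fit stands in for the KL-divergence minimization, a sigmoid stands in for the classifier-output calibration, the data are one-dimensional, and all names and constants are invented for illustration.

```python
import math

def train_weighted(points, labels, costs):
    # Weighted least-squares fit of a 1-D decision function
    # f(x) = w*x + b; stands in for the MED/KL-divergence step.
    sw = sum(costs)
    mx = sum(c * x for c, x in zip(costs, points)) / sw
    my = sum(c * y for c, y in zip(costs, labels)) / sw
    num = sum(c * (x - mx) * (y - my) for c, x, y in zip(costs, points, labels))
    den = sum(c * (x - mx) ** 2 for c, x in zip(costs, points)) or 1e-12
    w = num / den
    return w, my - w * mx

def transduce(labeled, unlabeled, cost=1.0, iters=20, mix=0.5):
    xs = [x for x, _ in labeled] + list(unlabeled)
    # Expected labels: known (+1/-1) for labeled points,
    # 0 (maximally uncertain) for unlabeled points.
    ey = [float(y) for _, y in labeled] + [0.0] * len(unlabeled)
    n_lab = len(labeled)
    for _ in range(iters):
        # Cost scaled by |expected label|: uncertain points contribute little.
        costs = [cost] * n_lab + [cost * abs(e) + 1e-6 for e in ey[n_lab:]]
        w, b = train_weighted(xs, ey, costs)
        # Calibrate scores to [-1, 1] via a sigmoid, then interpolate
        # the new expected labels with those of the previous iteration.
        new_ey = [2.0 / (1.0 + math.exp(-(w * x + b))) - 1.0 for x in xs]
        ey = ey[:n_lab] + [mix * n + (1 - mix) * o
                           for n, o in zip(new_ey[n_lab:], ey[n_lab:])]
    return [(x, 1 if e >= 0 else -1) for x, e in zip(xs, ey)]
```

On a symmetric toy set such as labeled points at -2 and +2, the unlabeled points at -1.5 and +1.5 settle onto the nearer class.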
44. The method of claim 43 wherein convergence is reached when the change of the decision function falls below a predetermined threshold value.
45. The method of claim 43 wherein convergence is reached when the change of the determined expected label value falls below a predetermined threshold value.
46. The method of claim 43 wherein the label of the included training example has a value of +1 and the label of the excluded training example has a value of -1.
47. A method for classifying documents, comprising: receiving at least one labeled seed document having a known confidence level of label assignment; receiving unlabeled documents; receiving at least one predetermined cost factor; training a transductive classifier through iterative calculation using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value; after at least some of the iterations, storing confidence scores for the unlabeled documents; and outputting identifiers of the unlabeled documents having the highest confidence scores to at least one of a user, another system, and another process.
48. The method of claim 47, wherein the at least one seed document has a list of keywords.
49. The method of claim 47, wherein confidence scores are stored after each of the iterations, wherein an identifier of the unlabeled document having the highest confidence score after each iteration is output.
50. The method of claim 47, further comprising receiving a data point label prior probability for the labeled and unlabeled documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
51. A method for analyzing documents associated with legal discovery, comprising: receiving documents associated with a legal matter; performing a document classification technique on the documents; and outputting identifiers of at least some of the documents based on the classification thereof.
52. The method of claim 51, wherein the document classification technique includes a transductive process.
53. The method of claim 52, further comprising training a transductive classifier through iterative calculation using at least one predetermined cost factor, at least one seed document, and the documents associated with the legal matter, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and using the trained classifier to classify the received documents.
54. The method of claim 53, further comprising receiving a data point label prior probability for the labeled and unlabeled documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
55. The method of claim 51, wherein the document classification technique includes a support vector machine process.
56. The method of claim 51, wherein the document classification technique includes a maximum entropy discrimination process.
57. The method of claim 51, further comprising outputting a representation of links between the documents.
58. A method for cleaning up data, comprising: receiving a plurality of labeled data items; selecting subsets of the data items for each of a plurality of categories; setting an uncertainty for the data items in each subset to about zero; setting an uncertainty for the data items not in the subsets to a predefined value that is not about zero; training a transductive classifier through iterative calculation using the uncertainties, the data items in the subsets, and the data items not in the subsets as training examples; applying the trained classifier to each of the labeled data items to classify each of the data items; and outputting a classification of the input data items, or derivative thereof, to at least one of a user, another system, and another process.
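The subset selection of claims 58-59 might look like the following sketch, where a random trusted subset per category receives uncertainty ~0 and everything else keeps a nonzero uncertainty for re-examination. The subset fraction, the 0.5 default uncertainty, and the dictionary layout are illustrative assumptions:

```python
import random

def assign_uncertainties(items_by_category, subset_frac=0.2, default=0.5, seed=0):
    # Pick a random trusted subset per category (uncertainty ~0);
    # all remaining items keep a nonzero uncertainty and are left
    # for the transductive classifier to reconsider.
    rng = random.Random(seed)
    uncertainty = {}
    for cat, items in items_by_category.items():
        k = max(1, int(len(items) * subset_frac))
        trusted = set(rng.sample(items, k))
        for it in items:
            uncertainty[it] = 0.0 if it in trusted else default
    return uncertainty
```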
59. The method of claim 58, wherein the subsets are selected at random.
60. The method of claim 58, wherein the subsets are selected and verified by a user.
61. The method of claim 58, further comprising changing the label of at least some of the data items based on the classification.
62. The method of claim 58, wherein identifiers of data items having a confidence level below a predefined threshold after classification thereof are output to a user.
63. A method for verifying an association of an invoice with an entity, comprising: training a classifier based on an invoice format associated with a first entity; accessing a plurality of invoices labeled as being associated with at least one of the first entity and other entities; performing a document classification technique on the invoices using the classifier; and outputting an identifier of at least one of the invoices having a high probability of not being associated with the first entity.
64. The method of claim 63, wherein the document classification technique includes a transductive process.
65. The method of claim 64, wherein the classifier is a transductive classifier, and further comprising training the transductive classifier through iterative calculation using at least one predetermined cost factor, at least one seed document, and the invoices, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and using the trained classifier to classify the invoices.
66. The method of claim 65, further comprising receiving a data point label prior probability for the seed document and invoices, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
67. The method of claim 63, wherein the document classification technique includes a support vector machine process.
68. The method of claim 63, wherein the document classification technique includes a maximum entropy discrimination process.
69. A method for managing medical records, comprising: training a classifier based on a medical diagnosis; accessing a plurality of medical records; performing a document classification technique on the medical records using the classifier; and outputting an identifier of at least one of the medical records having a low probability of being associated with the medical diagnosis.
70. The method of claim 69, wherein the document classification technique includes a transductive process.
71. The method of claim 70, wherein the classifier is a transductive classifier, and further comprising training the transductive classifier through iterative calculation using at least one predetermined cost factor, at least one seed document, and the medical records, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and using the trained classifier to classify the medical records.
72. The method of claim 71, further comprising receiving a data point label prior probability for the seed document and medical records, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
73. The method of claim 69, wherein the document classification technique includes a support vector machine process.
74. The method of claim 69, wherein the document classification technique includes a maximum entropy discrimination process.
75. A method for face recognition, comprising: receiving at least one labeled seed image of a face, the seed image having a known confidence level; receiving unlabeled images; receiving at least one predetermined cost factor; training a transductive classifier through iterative calculation using the at least one predetermined cost factor, the at least one seed image, and the unlabeled images, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value; after at least some of the iterations, storing confidence scores for the unlabeled images; and outputting identifiers of the unlabeled images having the highest confidence scores to at least one of a user, another system, and another process.
76. The method of claim 75, wherein the at least one seed image has a label indicative of whether the image is included in a designated category.
77. The method of claim 75, wherein confidence scores are stored after each of the iterations, wherein an identifier of the unlabeled images having the highest confidence score after each iteration is output.
78. The method of claim 75, further comprising receiving a data point label prior probability for the labeled and unlabeled images, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
79. The method of claim 75, further comprising receiving a third unlabeled image of a face, comparing the third unlabeled image to at least some of the images having the highest confidence scores, and outputting an identifier of the third unlabeled image if a confidence that the face in the third unlabeled image is the same as the face in the seed image is above a threshold.
80. A method for analyzing prior art documents, comprising: training a classifier based on a search query; accessing a plurality of prior art documents; performing a document classification technique on at least some of the prior art documents using the classifier; and outputting identifiers of at least some of the prior art documents based on the classification thereof.
82. The method of claim 81, wherein the classifier is a transductive classifier, and further comprising training the transductive classifier through iterative calculation using at least one predetermined cost factor, at least one seed document, and the prior art documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and using the trained classifier to classify the prior art documents.
83. The method of claim 82, further comprising receiving a data point label prior probability for the seed document and prior art documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
84. The method of claim 80, wherein the search query includes at least a portion of a patent disclosure.
85. The method of claim 80, wherein the search query includes at least a portion of a claim from a patent or patent application.
86. The method of claim 80, wherein the search query includes at least a portion of an abstract of a patent or patent application.
87. The method of claim 80, wherein the search query includes at least a portion of a summary from a patent or patent application.
88. The method of claim 80, wherein the document classification technique includes a support vector machine process.
89. The method of claim 80, wherein the document classification technique includes a maximum entropy discrimination process.
90. The method of claim 80, wherein the prior art documents are publications of a patent office.
91. The method of claim 80, further comprising outputting a representation of links between the documents.
92. The method of claim 80, further comprising outputting a relevance score of at least some of the prior art documents based on the classification thereof.
93. A method for adapting a patent classification to a shift in document content comprising:
receiving at least one labeled seed document;
receiving unlabeled documents; training a transductive classifier using the at least one seed document and the unlabeled documents; classifying the unlabeled documents having a confidence level above a predefined threshold into a plurality of existing categories using the classifier;
classifying the unlabeled documents having a confidence level below the predefined threshold into at least one new category using the classifier; reclassifying at least some of the categorized documents into the existing categories and the at least one new category using the classifier; and outputting identifiers of the categorized documents to at least one of a user, another system, and another process.
94. The method of claim 93, wherein the classifier is a transductive classifier, and further comprising training the transductive classifier through iterative calculation using at least one predetermined cost factor, the at least one seed document, and the documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and using the trained classifier to classify the documents.
95. The method of claim 94, further comprising receiving a data point label prior probability for the seed document and documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
96. The method of claim 93, wherein the document classification technique includes a support vector machine process.
97. The method of claim 93, wherein the document classification technique includes a maximum entropy discrimination process.
98. The method of claim 93, wherein the unlabeled documents are patent applications.
99. The method of claim 93, wherein the at least one seed document is selected from a group consisting of a patent and a patent application.
100. A method for matching documents to claims, comprising: training a classifier based on at least one claim of a patent or patent application; accessing a plurality of documents; performing a document classification technique on at least some of the documents using the classifier; and outputting identifiers of at least some of the documents based on the classification thereof.
101. The method of claim 100, further comprising outputting a relevance score of at least some of the documents based on the classification thereof.
102. The method of claim 100, wherein the documents are prior art documents.
103. The method of claim 100, wherein the documents describe products.
104. A method for classifying a patent or patent application, comprising: training a classifier based on a plurality of documents known to be in a particular patent classification; receiving at least a portion of a patent or patent application; performing a document classification technique on the at least the portion of the patent or patent application using the classifier; and outputting a classification of the patent or patent application, wherein the document classification technique is a yes/no classification technique.
105. The method of claim 104, wherein the documents are selected from a group consisting of patents and patent applications.
106. The method of claim 105, wherein the at least a portion of the patent or patent application includes at least a portion of a claim from a patent or patent application.
107. The method of claim 105, wherein the at least a portion of the patent or patent application includes at least a portion of an abstract of a patent or patent application.
108. The method of claim 105, wherein the at least a portion of the patent or patent application includes at least a portion of a summary from a patent or patent application.
109. A method for classifying a patent or patent application, comprising: performing a document classification technique on at least the portion of a patent or patent application using a classifier that was trained based on at least one document associated with a particular patent classification, wherein the document classification technique is a yes/no classification technique; and outputting a classification of the patent or patent application.
110. The method of claim 109, further comprising repeating the method using a different classifier that was trained based on a plurality of documents known to be in a second patent classification.
111. The method of claim 109, wherein the at least a portion of the patent or patent application includes at least a portion of a claim from a patent or patent application.
112. The method of claim 109, wherein the at least a portion of the patent or patent application includes at least a portion of an abstract of a patent or patent application.
113. The method of claim 109, wherein the at least a portion of the patent or patent application includes at least a portion of a summary from a patent or patent application.
114. A method for adapting to a shift in document content, comprising: receiving at least one labeled seed document; receiving unlabeled documents; receiving at least one predetermined cost factor; training a transductive classifier using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents; classifying the unlabeled documents having a confidence level above a predefined threshold into a plurality of categories using the classifier; reclassifying at least some of the categorized documents into the categories using the classifier; and outputting identifiers of the categorized documents to at least one of a user, another system, and another process.
115. The method of claim 114, further comprising moving an unlabeled document having a confidence level below the predefined threshold into one or more new categories.
116. The method of claim 114, and further comprising training the transductive classifier through iterative calculation using at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and using the trained classifier to classify the unlabeled documents.
117. The method of claim 116, further comprising receiving a data point label prior probability for the seed document and unlabeled documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
118. The method of claim 114, wherein the unlabeled documents are customer complaints, and further comprising linking product changes with customer complaints.
119. The method of claim 114, wherein the unlabeled documents are invoices.
120. A method for separating documents, comprising: receiving labeled data; receiving a sequence of unlabeled documents; adapting probabilistic classification rules using transduction based on the labeled data and the unlabeled documents; updating weights used for document separation according to the probabilistic classification rules; determining locations of separations in the sequence of documents; outputting indicators of the determined locations of the separations in the sequence to at least one of a user, another system, and another process; and flagging the documents with codes, the codes correlating to the indicators.
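The final steps of claim 120 — turning the adapted classification weights into separation locations and flags — reduce, in the simplest reading, to thresholding a per-page probability that a page begins a new document. A sketch (the threshold value and the FIRST/CONT codes are invented placeholders for whatever probabilistic rules transduction produced):

```python
def separation_points(first_page_probs, threshold=0.5):
    # Given per-page probabilities that a page starts a new document,
    # return the indices where the stream should be cut, plus a code
    # flag per page correlating with those cut indicators.
    cuts = [i for i, p in enumerate(first_page_probs) if i > 0 and p >= threshold]
    codes = ["FIRST" if (i == 0 or p >= threshold) else "CONT"
             for i, p in enumerate(first_page_probs)]
    return cuts, codes
```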
121. A method for document searching, comprising: receiving a search query; retrieving documents based on the search query; outputting the documents; receiving user-entered labels for at least some of the documents, the labels being indicative of a relevance of the document to the search query; training a classifier based on the search query and the user-entered labels; performing a document classification technique on the documents using the classifier for reclassifying the documents; and outputting identifiers of at least some of the documents based on the classification thereof.
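The relevance-feedback cycle of claim 121 can be sketched with a simple term-weighting reranker rather than a full transductive classifier. Everything here (the +-0.5 weight nudges, the bag-of-words scoring, the data layout) is an illustrative assumption, not the claimed method:

```python
def rerank(query_terms, docs, feedback):
    # Relevance-feedback sketch: user labels (+1 relevant, -1 not)
    # shift term weights; documents are then rescored and reordered.
    weights = {t: 1.0 for t in query_terms}
    for doc_id, label in feedback.items():
        for t in docs[doc_id].split():
            weights[t] = weights.get(t, 0.0) + 0.5 * label
    def score(d):
        return sum(weights.get(t, 0.0) for t in set(d.split()))
    return sorted(docs, key=lambda i: score(docs[i]), reverse=True)
```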
122. The method of claim 121, wherein the document classification technique includes a transductive process.
123. The method of claim 122, wherein the classifier is a transductive classifier, and further comprising training the transductive classifier through iterative calculation using at least one predetermined cost factor, the search query, and the documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and using the trained classifier to classify the documents.
124. The method of claim 123, further comprising receiving a data point label prior probability for the search query and documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
125. The method of claim 121, wherein the document classification technique includes a support vector machine process.
126. The method of claim 121, wherein the document classification technique includes a maximum entropy discrimination process.
127. The method of claim 121, wherein the reclassified documents are output, those documents having a highest confidence being output first.
EP07809394.5A 2006-07-12 2007-06-07 Methods and systems for transductive data classification and data classification methods using machine learning techniques Pending EP1924926A4 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US83031106P 2006-07-12 2006-07-12
US11/752,634 US7761391B2 (en) 2006-07-12 2007-05-23 Methods and systems for improved transductive maximum entropy discrimination classification
US11/752,691 US20080086432A1 (en) 2006-07-12 2007-05-23 Data classification methods using machine learning techniques
US11/752,673 US7958067B2 (en) 2006-07-12 2007-05-23 Data classification methods using machine learning techniques
US11/752,719 US7937345B2 (en) 2006-07-12 2007-05-23 Data classification methods using machine learning techniques
PCT/US2007/013484 WO2008008142A2 (en) 2006-07-12 2007-06-07 Machine learning techniques and transductive data classification

Publications (2)

Publication Number Publication Date
EP1924926A2 true EP1924926A2 (en) 2008-05-28
EP1924926A4 EP1924926A4 (en) 2016-08-17

Family

ID=38923733

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07809394.5A Pending EP1924926A4 (en) 2006-07-12 2007-06-07 Methods and systems for transductive data classification and data classification methods using machine learning techniques

Country Status (3)

Country Link
EP (1) EP1924926A4 (en)
JP (1) JP5364578B2 (en)
WO (1) WO2008008142A2 (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9769354B2 (en) 2005-03-24 2017-09-19 Kofax, Inc. Systems and methods of processing scanned data
US9137417B2 (en) 2005-03-24 2015-09-15 Kofax, Inc. Systems and methods for processing video data
US8885229B1 (en) 2013-05-03 2014-11-11 Kofax, Inc. Systems and methods for detecting and classifying objects in video captured using mobile devices
US7958067B2 (en) 2006-07-12 2011-06-07 Kofax, Inc. Data classification methods using machine learning techniques
US7937345B2 (en) 2006-07-12 2011-05-03 Kofax, Inc. Data classification methods using machine learning techniques
US8190868B2 (en) 2006-08-07 2012-05-29 Webroot Inc. Malware management through kernel detection
US10007882B2 (en) * 2008-06-24 2018-06-26 Sharon Belenzon System, method and apparatus to determine associations among digital documents
US9576272B2 (en) 2009-02-10 2017-02-21 Kofax, Inc. Systems, methods and computer program products for determining document validity
US9767354B2 (en) 2009-02-10 2017-09-19 Kofax, Inc. Global geographic information retrieval, validation, and normalization
US8774516B2 (en) 2009-02-10 2014-07-08 Kofax, Inc. Systems, methods and computer program products for determining document validity
US9349046B2 (en) 2009-02-10 2016-05-24 Kofax, Inc. Smart optical input/output (I/O) extension for context-dependent workflows
US8958605B2 (en) 2009-02-10 2015-02-17 Kofax, Inc. Systems, methods and computer program products for determining document validity
US8438386B2 (en) * 2009-04-21 2013-05-07 Webroot Inc. System and method for developing a risk profile for an internet service
US11489857B2 (en) 2009-04-21 2022-11-01 Webroot Inc. System and method for developing a risk profile for an internet resource
US9483794B2 (en) 2012-01-12 2016-11-01 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US9058580B1 (en) 2012-01-12 2015-06-16 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US8989515B2 (en) 2012-01-12 2015-03-24 Kofax, Inc. Systems and methods for mobile image capture and processing
US10146795B2 (en) 2012-01-12 2018-12-04 Kofax, Inc. Systems and methods for mobile image capture and processing
US9058515B1 (en) 2012-01-12 2015-06-16 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US9311531B2 (en) 2013-03-13 2016-04-12 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US9355312B2 (en) 2013-03-13 2016-05-31 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US9208536B2 (en) 2013-09-27 2015-12-08 Kofax, Inc. Systems and methods for three dimensional geometric reconstruction of captured image data
US20140316841A1 (en) 2013-04-23 2014-10-23 Kofax, Inc. Location-based workflows and services
US20160210426A1 (en) * 2013-08-30 2016-07-21 3M Innovative Properties Company Method of classifying medical documents
US9386235B2 (en) 2013-11-15 2016-07-05 Kofax, Inc. Systems and methods for generating composite images of long documents using mobile video data
US9760788B2 (en) 2014-10-30 2017-09-12 Kofax, Inc. Mobile document detection and orientation based on reference object characteristics
KR102315574B1 (en) * 2014-12-03 2021-10-20 삼성전자주식회사 Apparatus and method for classification of data, apparatus and method for segmentation of region of interest
CN104700099B (en) * 2015-03-31 2017-08-11 百度在线网络技术(北京)有限公司 The method and apparatus for recognizing traffic sign
US10242285B2 (en) 2015-07-20 2019-03-26 Kofax, Inc. Iterative recognition-guided thresholding and data extraction
US11550688B2 (en) 2015-10-29 2023-01-10 Micro Focus Llc User interaction logic classification
US10339193B1 (en) * 2015-11-24 2019-07-02 Google Llc Business change detection from street level imagery
US9779296B1 (en) 2016-04-01 2017-10-03 Kofax, Inc. Content-based detection and three dimensional geometric reconstruction of objects in image and video data
JP6973733B2 (en) * 2017-11-07 2021-12-01 株式会社アイ・アール・ディー Patent information processing equipment, patent information processing methods and programs
US11062176B2 (en) 2017-11-30 2021-07-13 Kofax, Inc. Object detection and image cropping using a multi-detector approach
JP7024515B2 (en) 2018-03-09 2022-02-24 富士通株式会社 Learning programs, learning methods and learning devices
JP7079483B2 (en) * 2018-06-18 2022-06-02 国立研究開発法人産業技術総合研究所 Information processing methods, systems and programs
US20210342744A1 (en) * 2018-09-28 2021-11-04 Element Al Inc. Recommendation method and system and method and system for improving a machine learning system
EP3864524A1 (en) 2018-10-08 2021-08-18 Artic Alliance Europe OY Method and system to perform text-based search among plurality of documents
KR102033136B1 (en) * 2019-04-03 2019-10-16 주식회사 루닛 Method for machine learning based on semi-supervised learning and apparatus thereof
WO2020231188A1 (en) * 2019-05-13 2020-11-19 삼성전자주식회사 Classification result verifying method and classification result learning method which use verification neural network, and computing device for performing methods
CN113240025B (en) * 2021-05-19 2022-08-12 电子科技大学 Image classification method based on Bayesian neural network weight constraint
JP2023144562A (en) 2022-03-28 2023-10-11 富士通株式会社 Machine learning program, data processing program, information processing device, machine learning method and data processing method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002095534A2 (en) * 2001-05-18 2002-11-28 Biowulf Technologies, Llc Methods for feature selection in a learning machine
US7376635B1 (en) * 2000-07-21 2008-05-20 Ford Global Technologies, Llc Theme-based system and method for classifying documents
US7702526B2 (en) 2002-01-24 2010-04-20 George Mason Intellectual Properties, Inc. Assessment of episodes of illness
US7184929B2 (en) * 2004-01-28 2007-02-27 Microsoft Corporation Exponential priors for maximum entropy models
US7492943B2 (en) * 2004-10-29 2009-02-17 George Mason Intellectual Properties, Inc. Open set recognition using transduction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2008008142A2 *

Also Published As

Publication number Publication date
EP1924926A4 (en) 2016-08-17
WO2008008142A2 (en) 2008-01-17
JP5364578B2 (en) 2013-12-11
JP2009543254A (en) 2009-12-03
WO2008008142A3 (en) 2008-12-04

Similar Documents

Publication Publication Date Title
US7937345B2 (en) Data classification methods using machine learning techniques
WO2008008142A2 (en) Machine learning techniques and transductive data classification
US7761391B2 (en) Methods and systems for improved transductive maximum entropy discrimination classification
US7958067B2 (en) Data classification methods using machine learning techniques
US20080086432A1 (en) Data classification methods using machine learning techniques
US11528290B2 (en) Systems and methods for machine learning-based digital content clustering, digital content threat detection, and digital content threat remediation in machine learning-based digital threat mitigation platform
US6192360B1 (en) Methods and apparatus for classifying text and for building a text classifier
JP4490876B2 (en) Content classification method, content classification device, content classification program, and recording medium on which content classification program is recorded
Hu et al. Rank-based decomposable losses in machine learning: A survey
Gao et al. A maximal figure-of-merit (MFoM)-learning approach to robust classifier design for text categorization
Perez et al. Bug or not bug? That is the question
Villa-Blanco et al. Feature subset selection for data and feature streams: a review
Nashaat et al. Semi-supervised ensemble learning for dealing with inaccurate and incomplete supervision
Trivedi et al. A modified content-based evolutionary approach to identify unsolicited emails
Cornuejols et al. Statistical computational learning
Ibrahim et al. Towards out-of-distribution adversarial robustness
WO2002048911A1 (en) A system and method for multi-class multi-label hierachical categorization
Lemhadri et al. RbX: Region-based explanations of prediction models
Mácha et al. Deeptoppush: Simple and scalable method for accuracy at the top
Han et al. Customized classification learning based on query projections
CN107180264A (en) Transductive classification method for documents and data
CN101449264B (en) Method and system for transductive data classification and data classification methods using machine learning techniques
Allen Constructing and classifying email networks from raw forensic images
Barddal Feature analysis in evolving data streams: Issues and algorithms
O'Neill An evaluation of selection strategies for active learning with regression

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20080215

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: KOFAX, INC.

R17D Deferred search report published (corrected)

Effective date: 20081204

RIN1 Information on inventor provided before grant (corrected)

Inventor name: HARRIS, CHRISTOPHER K.

Inventor name: SARAH, ANTHONY

Inventor name: SCHMIDTLER, MAURITIUS A.R.

Inventor name: CARUSO, NICOLA

Inventor name: BORREY, ROLAND

DAX Request for extension of the european patent (deleted)
RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 15/18 20060101AFI20160316BHEP

Ipc: G06N 99/00 20100101ALI20160316BHEP

A4 Supplementary search report drawn up and despatched

Effective date: 20160718

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 15/18 20060101AFI20160712BHEP

Ipc: G06N 99/00 20100101ALI20160712BHEP

Ipc: G06K 9/62 20060101ALN20160712BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180216

RIC1 Information provided on ipc code assigned before grant

Ipc: G06N 99/00 20190101ALI20160712BHEP

Ipc: G06K 9/62 20060101ALN20160712BHEP

Ipc: G06F 15/18 20060101AFI20160712BHEP
