EP1924926A2 - Methods and systems for transductive data classification and data classification methods using machine learning techniques - Google Patents
Methods and systems for transductive data classification and data classification methods using machine learning techniques
- Publication number
- EP1924926A2 (application EP07809394A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- documents
- label
- unlabeled
- classifier
- data points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Definitions
- the present invention relates generally to methods and apparatus for data classification. More particularly, the present invention provides improved transductive machine learning methods. The present invention also relates to novel applications using machine learning techniques.
- a method for classification of data includes receiving labeled data points, each of the labeled data points
- a method for classification of data according to another embodiment of the present invention includes providing computer executable program code to be deployed to and executed on a computer system.
- the program code comprises instructions for: accessing stored labeled data points in a memory of a computer, each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; accessing unlabeled data points from a memory of a computer; accessing at least one predetermined cost factor of the labeled data points and unlabeled data points from a memory of a computer; training a Maximum Entropy Discrimination (MED) transductive classifier through iterative calculation using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples, wherein for each iteration of the calculation the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point prior probability is adjusted according to an estimate of a data point class membership probability.
- a data processing apparatus includes: at least one memory for storing: (i) labeled data points wherein each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; (ii) unlabeled data points; and (iii) at least one predetermined cost factor of the labeled data points and unlabeled data points; and a transductive classifier trainer to iteratively teach the transductive classifier using transductive Maximum Entropy Discrimination (MED) using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples wherein at each iteration of the MED calculation the cost factor of the unlabeled data point is adjusted as a function of an expected label value and a data point label prior probability is adjusted according to an estimate of a data point class membership probability; wherein a classifier trained by the transductive classifier trainer is used to classify
- An article of manufacture comprises a program storage medium readable by a computer, the medium tangibly embodying one or more programs of instructions executable by a computer to perform a method of data classification comprising: receiving labeled data points, each of the labeled data points having at least one label indicating whether the data point is a training example for data points for being included in a designated category or a training example for data points being excluded from a designated category; receiving unlabeled data points; receiving at least one predetermined cost factor of the labeled data points and unlabeled data points; training a transductive classifier with iterative Maximum Entropy Discrimination (MED) calculation using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples wherein at each iteration of the MED calculation the unlabeled data point cost factor is adjusted as a function of an expected label value and a data point prior probability is adjusted according to an estimate of a data point class membership probability; applying the trained classifier to classify at
- a method for classification of unlabeled data includes receiving labeled data points, each of the labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; receiving labeled and unlabeled data points; receiving prior label probability information of labeled data points and unlabeled data points; receiving at least one predetermined cost factor of the labeled data points and unlabeled data points; determining the expected labels for each labeled and unlabeled data point according to the label prior probability of the data point; and repeating the following substeps until substantial convergence of data values:
- a classification of the input data points, or derivative thereof, is output to at least one of a user, another system, and another process.
- a method for classifying documents includes receiving at least one labeled seed document having a known confidence level of label assignment; receiving unlabeled documents; receiving at least one predetermined cost factor; training a transductive classifier through iterative calculation using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value; after at least some of the iterations, storing confidence scores for the unlabeled documents; and outputting identifiers of the unlabeled documents having the highest confidence scores to at least one of a user, another system, and another process.
- a method for analyzing documents associated with legal discovery includes receiving documents associated with a legal matter; performing a document classification technique on the documents; and outputting identifiers of at least some of the documents based on the classification thereof.
- a method for cleaning up data includes receiving a plurality of labeled data items; selecting subsets of the data items for each of a plurality of categories; setting an uncertainty for the data items in each subset to about zero; setting an uncertainty for the data items not in the subsets to a predefined value that is not about zero; training a transductive classifier through iterative calculation using the uncertainties, the data items in the subsets, and the data items not in the subsets as training examples; applying the trained classifier to each of the labeled data items to classify each of the data items; and outputting a classification of the input data items, or derivative thereof, to at least one of a user, another system, and another process.
- a method for verifying an association of an invoice with an entity includes training a classifier based on an invoice format associated with a first entity; accessing a plurality of invoices labeled as being associated with at least one of the first entity and other entities; performing a document classification technique on the invoices using the classifier; and outputting an identifier of at least one of the invoices having a high probability of not being associated with the first entity.
- a method for managing medical records includes training a classifier based on a medical diagnosis; accessing a plurality of medical records; performing a document classification technique on the medical records using the classifier; and outputting an identifier of at least one of the medical records having a low probability of being associated with the medical diagnosis.
- a method for face recognition includes receiving at least one labeled seed image of a face, the seed image having a known confidence level; receiving unlabeled images; receiving at least one predetermined cost factor; training a transductive classifier through iterative calculation using the at least one predetermined cost factor, the at least one seed image, and the unlabeled images, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value; after at least some of the iterations, storing confidence scores for the unlabeled images; and outputting identifiers of the unlabeled images having the highest confidence scores to at least one of a user, another system, and another process.
- a method for analyzing prior art documents includes training a classifier based on a search query; accessing a plurality of prior art documents; performing a document classification technique on at least some of the prior art documents using the classifier; and outputting identifiers of at least some of the prior art documents based on the classification thereof.
- a method for adapting a patent classification to a shift in document content includes receiving at least one labeled seed document; receiving unlabeled documents; training a transductive classifier using the at least one seed document and the unlabeled documents; classifying the unlabeled documents having a confidence level above a predefined threshold into a plurality of existing categories using the classifier; classifying the unlabeled documents having a confidence level below the predefined threshold into at least one new category using the classifier; reclassifying at least some of the categorized documents into the existing categories and the at least one new category using the classifier; and outputting identifiers of the categorized documents to at least one of a user, another system, and another process.
- a method for matching documents to claims includes training a classifier based on at least one claim of a patent or patent application; accessing a plurality of documents; performing a document classification technique on at least some of the documents using the classifier; and outputting identifiers of at least some of the documents based on the classification thereof.
- a method for classifying a patent or patent application includes training a classifier based on a plurality of documents known to be in a particular patent classification; receiving at least a portion of a patent or patent application; performing a document classification technique on the at least the portion of the patent or patent application using the classifier; and outputting a classification of the patent or patent application, wherein the document classification technique is a yes/no classification technique.
- a method for classifying a patent or patent application includes performing a document classification technique on at least the portion of a patent or patent application using a classifier that was trained based on at least one document associated with a particular patent classification, wherein the document classification technique is a yes/no classification technique; and outputting a classification of the patent or patent application.
- a method for adapting to a shift in document content includes receiving at least one labeled seed document; receiving unlabeled documents; receiving at least one predetermined cost factor; training a transductive classifier using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents; classifying the unlabeled documents having a confidence level above a predefined threshold into a plurality of categories using the classifier; reclassifying at least some of the categorized documents into the categories using the classifier; and outputting identifiers of the categorized documents to at least one of a user, another system, and another process.
- a method for separating documents includes receiving labeled data; receiving a sequence of unlabeled documents; adapting probabilistic classification rules using transduction based on the labeled data and the unlabeled documents; updating weights used for document separation according to the probabilistic classification rules; determining locations of separations in the sequence of documents; outputting indicators of the determined locations of the separations in the sequence to at least one of a user, another system, and another process; and flagging the documents with codes, the codes correlating to the indicators.
- a method for document searching includes receiving a search query; retrieving documents based on the search query; outputting the documents; receiving user-entered labels for at least some of the documents, the labels being indicative of a relevance of the document to the search query; training a classifier based on the search query and the user-entered labels; performing a document classification technique on the documents using the classifier for reclassifying the documents; and outputting identifiers of at least some of the documents based on the classification thereof.
- Fig. 1 is a depiction of a chart plotting the expected label as a function of the classification score as obtained by employing MED discriminative learning applied to label induction.
- Fig. 2 is a depiction of a series of plots showing calculated iterations of the decision function obtained by transductive MED learning.
- Fig. 3 is a depiction of a series of plots showing calculated iterations of the decision function obtained by the improved transductive MED learning of one embodiment of the present invention.
- Fig. 4 illustrates a control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using a scaled cost factor.
- Fig. 5 illustrates a control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using user defined prior probability information.
- Fig. 6 illustrates a detailed control flow diagram for the classification of unlabeled data in accordance with one embodiment of the invention using Maximum Entropy Discrimination with scaled cost factors and prior probability information.
- Fig. 7 is a network diagram illustrating a network architecture in which the various embodiments described herein may be implemented.
- Fig. 8 is a system diagram of a representative hardware environment associated with a user device.
- Fig. 9 illustrates a block diagram representation of the apparatus of one embodiment of the present invention.
- Fig. 10 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 11 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 12 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 13 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 14 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 15 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 16 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 17 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 18 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 19 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 20 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 21 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 22 illustrates a control flow diagram showing the method of one embodiment of the present invention applied to a first document separating system.
- Fig. 23 illustrates a control flow diagram showing the method of one embodiment of the present invention applied to a second separating system.
- Fig. 24 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 25 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 26 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 27 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 28 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- Fig. 29 illustrates, in a flowchart, a classification process performed in accordance with one embodiment.
- The interest in and need for classification of textual data has been particularly strong, and several methods of classification have been employed. A discussion of classification methods for textual data follows:
- computers are called upon to classify (or recognize) objects to an ever increasing extent.
- computers may use optical character recognition to classify handwritten or scanned numbers and letters, pattern recognition to classify an image, such as a face, a fingerprint, a fighter plane, etc., or speech recognition to classify a sound, a voice, etc.
- Text classification may be used to organize textual information objects into a hierarchy of predetermined classes or categories for example. In this way, finding (or navigating to) textual information objects related to a particular subject matter is simplified. Text classification may be used to route appropriate textual information objects to appropriate people or locations. In this way, an information service can route textual information objects covering diverse subject matters (e.g., business, sports, the stock market, football, a particular company, a particular football team) to people having diverse interests.
- Text classification may be used to filter textual information objects so that a person is not annoyed by unwanted textual content (such as unwanted and unsolicited e-mail, also referred to as junk e-mail, or "spam").
- a rule-based system may be used to effect such types of classification. Basically, rule-based systems use production rules of the form:
- the conditions may include whether the textual information includes certain words or phrases, has a certain syntax, or has certain attributes. For example, if the textual content has the word “close”, the phrase “nasdaq” and a number, then it is classified as "stock market” text.
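- As an illustration only (the function name and rule syntax below are hypothetical and not taken from the patent), such a production rule can be sketched in Python as a simple predicate over the text:

```python
import re

def is_stock_market_text(text: str) -> bool:
    """Hypothetical production rule: IF the text contains the word "close",
    the phrase "nasdaq", and a number, THEN classify it as stock market text."""
    lowered = text.lower()
    return ("close" in lowered
            and "nasdaq" in lowered
            and re.search(r"\d", lowered) is not None)

# The rule fires for a typical market summary sentence.
print(is_stock_market_text("At the close, the Nasdaq was up 1.5%"))  # True
```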
- Over the last decade or so, other types of classifiers have been used increasingly. Although these classifiers do not use static, predefined logic, as do rule-based classifiers, they have outperformed rule-based classifiers in many applications. Such classifiers typically include a learning element and a performance element. Such classifiers may include neural networks, Bayesian networks, and support vector machines. Although each of these classifiers is known, each is briefly introduced below for the reader's convenience.
- classifiers having learning and performance elements outperform rule-based classifiers, in many applications.
- these classifiers may include neural networks, Bayesian networks, and support vector machines.
- a neural network is basically a multilayered, hierarchical arrangement of identical processing elements, also referred to as neurons.
- Each neuron can have one or more inputs but only one output.
- Each neuron input is weighted by a coefficient.
- the output of a neuron is typically a function of the sum of its weighted inputs and a bias value.
- This function also referred to as an activation function, is typically a sigmoid function. That is, the activation function may be S-shaped, monotonically increasing and asymptotically approaching fixed values (e.g., +1, 0, -1) as its input(s) respectively approaches positive or negative infinity.
- the sigmoid function and the individual neural weight and bias values determine the response or "excitability" of the neuron to input signals.
- the output of a neuron in one layer may be distributed as an input to one or more neurons in a next layer.
- a typical neural network may include three (3) distinct layers; namely, an input layer, an intermediate neuron layer, and an output neuron layer. Note that the nodes of the input layer are not neurons. Rather, the nodes of the input layer have only one input and basically provide the input, unprocessed, to the inputs of the next layer.
- the input layer could have 300 nodes (i.e., one for each pixel of the input) and the output layer could have 10 neurons (i.e., one for each of the ten digits).
- the use of neural networks generally involves two (2) successive steps. First, the neural network is initialized and trained on known inputs having known output values (or classifications). Once the neural network is trained, it can then be used to classify unknown inputs.
- the neural network may be initialized by setting the weights and biases of the neurons to random values, typically generated from a Gaussian distribution.
- the neural network is then trained using a succession of inputs having known outputs (or classes).
- the values of the neural weights and biases are adjusted (e.g., in accordance with the known back-propagation technique) such that the output of the neural network of each individual training pattern approaches or matches the known output.
- a gradient descent in weight space is used to minimize the output error. In this way, learning using successive training inputs converges towards a locally optimal solution for the weights and biases. That is, the weights and biases are adjusted to minimize an error.
- the system is not typically trained to the point where it converges to an optimal solution. Otherwise, the system would be "over trained” such that it would be too specialized to the training data and might not be good at classifying inputs which differ, in some way, from those in the training set. Thus, at various times during its training, the system is tested on a set of validation data. Training is halted when the system's performance on the validation set no longer improves.
- the neural network can be used to classify unknown inputs in accordance with the weights and biases determined during training. If the neural network can classify the unknown input with confidence, one of the outputs of the neurons in the output layer will be much higher than the others.
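- A minimal sketch of the forward pass just described (weighted sums, a bias, a sigmoid activation, and a winner-take-all readout of the output layer); the layer sizes and random weights below are illustrative assumptions, not the patent's:

```python
import numpy as np

def sigmoid(z):
    # S-shaped activation, asymptotically approaching 0 and 1.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """One forward pass through a small feedforward network: each neuron layer
    computes a sigmoid of the weighted sum of its inputs plus a bias."""
    activation = x
    for W, b in zip(weights, biases):
        activation = sigmoid(W @ activation + b)
    return activation

rng = np.random.default_rng(0)
# Toy, untrained network: 300 inputs (e.g. one per pixel), 32 hidden neurons,
# 10 output neurons (one per digit); weights are random, as after initialization.
weights = [rng.normal(size=(32, 300)), rng.normal(size=(10, 32))]
biases = [rng.normal(size=32), rng.normal(size=10)]

outputs = forward(rng.normal(size=300), weights, biases)
print("predicted class:", int(np.argmax(outputs)))  # highest output neuron wins
```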
- Bayesian networks use hypotheses as intermediaries between data (e.g., input feature vectors) and predictions (e.g., classifications).
- the probability of each hypothesis, given the data ("P(hypo|data)"), is estimated.
- a prediction is made from the hypotheses using posterior probabilities of the hypotheses to weight the individual predictions of each of the hypotheses.
- H_i is the i-th hypothesis.
- a most probable hypothesis H_i that maximizes the probability of H_i given D is referred to as a maximum a posteriori hypothesis (or "H_MAP") and may be expressed as follows:
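- The expression itself is not reproduced in this excerpt; the standard Bayes form, consistent with the description of the numerator terms below, is:

$$ H_{MAP} = \arg\max_i P(H_i \mid D) = \arg\max_i \frac{P(D \mid H_i)\,P(H_i)}{P(D)} = \arg\max_i P(D \mid H_i)\,P(H_i). $$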
- the first term of the numerator represents the probability that the data would have been observed given the hypothesis i.
- the second term represents the prior probability assigned to the given hypothesis i.
- a Bayesian network includes variables and directed edges between the variables, thereby defining a directed acyclic graph (or "DAG"). Each variable can assume any of a finite number of mutually exclusive states. For each variable A having parent variables B_1, ..., B_n, there is an attached probability table P(A | B_1, ..., B_n).
- a variable “MML” may represent a "moisture of my lawn” and may have states “wet” and “dry”.
- the MML variable may have "rain” and “my sprinkler on” parent variables each having "Yes” and “No” states.
- Another variable, “MNL” may represent a "moisture of my neighbor's lawn” and may have states “wet” and “dry”.
- the MNL variable may share the "rain” parent variable. In this example, a prediction may be whether my lawn is "wet” or "dry”.
- This prediction may depend on the hypotheses (i) if it rains, my lawn will be wet with probability (x_1) and (ii) if my sprinkler was on, my lawn will be wet with probability (x_2).
- the probability that it has rained or that my sprinkler was on may depend on other variables. For example, if my neighbor's lawn is wet and they don't have a sprinkler, it is more likely that it has rained.
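- A small sketch of the lawn example (the probability tables below are made-up toy numbers, not the patent's): each variable carries a conditional probability table given its parents, and a prediction is obtained by summing over the parents' states.

```python
from itertools import product

# Hypothetical conditional probability tables for the lawn example.
p_rain = {"yes": 0.2, "no": 0.8}
p_sprinkler = {"yes": 0.3, "no": 0.7}
# P(MML = "wet" | rain, sprinkler): my lawn's moisture given its two parents.
p_mml_wet = {
    ("yes", "yes"): 0.99, ("yes", "no"): 0.90,
    ("no", "yes"): 0.85, ("no", "no"): 0.05,
}

# P(MML = "wet") = sum over the parents' states of P(wet | parents) * P(parents).
prob_wet = sum(
    p_mml_wet[(rain, sprinkler)] * p_rain[rain] * p_sprinkler[sprinkler]
    for rain, sprinkler in product(p_rain, p_sprinkler)
)
print(f"P(my lawn is wet) = {prob_wet:.3f}")
```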
- conditional probability tables in Bayesian networks may be trained, as was the case with neural networks.
- the learning process may be shortened.
- prior probabilities for the conditional probabilities are usually unknown, in which case a uniform prior is used.
- One embodiment of the present invention may perform at least one (1) of two (2) basic functions, namely generating parameters for a classifier, and classifying objects, such as textual information objects.
- parameters are generated for a classifier based on a set of training examples.
- a set of feature vectors may be generated from a set of training examples. The features of the set of feature vectors may be reduced.
- the parameters to be generated may include a defined monotonic (e.g., sigmoid) function and a weight vector.
- the weight vector may be determined by means of SVM training (or by another, known, technique).
- the monotonic (e.g., sigmoid) function may be defined by means of an optimization method.
- The text classifier may include a weight vector and a defined monotonic (e.g., sigmoid) function. Basically, the output of the text classifier of the present invention may be expressed as:
- O_c is a classification output for category c,
- w_c is a weight vector parameter associated with category c,
- x is a (reduced) feature vector based on the unknown textual information object, and
- A and B are adjustable parameters of a monotonic (e.g., sigmoid) function.
- the calculation of the output from expression (2) is quicker than the calculation of the output from expression (1).
- the classifier may (i) convert a textual information object to a feature vector, and (ii) reduce the feature vector to a reduced feature vector having fewer elements.
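- A minimal sketch of such an output under the common assumption that the monotonic function is a two-parameter sigmoid applied to the score w_c · x; the function name and the values of A and B below are placeholders, not the patent's expressions (1) or (2):

```python
import numpy as np

def classify(x, w_c, A=-2.0, B=0.0):
    """Classifier output for category c: a monotonic (sigmoid) function of the
    score w_c . x. A and B are the adjustable parameters; the values used here
    are illustrative placeholders, not the patent's."""
    score = float(np.dot(w_c, x))
    return 1.0 / (1.0 + np.exp(A * score + B))

x = np.array([0.0, 1.0, 3.0, 0.0])      # reduced feature vector of the object
w_c = np.array([0.5, -0.1, 0.8, 0.0])   # weight vector for category c
print(f"O_c = {classify(x, w_c):.3f}")   # output near 1 suggests membership in c
```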
- Inductive machine learning is used to ascribe properties or relations to types based on tokens (i.e., on one or a small number of observations or experiences); or to formulate laws based on limited observations of recurring patterns. Inductive machine learning involves reasoning from observed training cases to create general rules, which are then applied to the test cases.
- Transductive machine learning is a powerful method that does not suffer from these disadvantages.
- Transductive machine techniques may be capable of learning from a very small set of labeled training examples, automatically adapting to drifting classification concepts, and automatically correcting the labeled training examples. These advantages make transductive machine learning an interesting and valuable method for a large variety of commercial applications.
- Transduction learns patterns in data. It extends the concept of inductive learning by learning not only from labeled data but also from unlabeled data. This enables transduction to learn patterns that are not or only partly captured in the labeled data. As a result transduction can, in contrast to rule based systems or systems based on inductive learning, adapt to dynamically changing environments. This capability enables transduction to be utilized for document discovery, data cleanup, and addressing drifting classification concepts, among other things.
- Support Vector Machines are one commonly employed method of text classification; this method approaches the problem of the large number of solutions, and the resulting generalization problem, by deploying constraints on the possible solutions utilizing concepts of regularization theory. For example, a binary SVM classifier selects, from all hyperplanes that separate the training data correctly, as solution the hyperplane that maximizes the margin.
- the constraint on the training data memorizes the data, whereas the regularization ensures appropriate generalization.
- Inductive classification learns from training examples that have known labels, i.e. every training example's class membership is known. Where inductive classification learns from known labels, transductive classification determines the classification rules from labeled as well as unlabeled data.
- An example of transductive SVM classification is shown in table 1.
- Require Data matrix X of labeled training examples and their labels Y .
- Require Data matrix X' of the unlabeled training examples.
- Require A list of all possible label assignments of the unlabeled training examples
- Table 1 shows the principle of a transductive classification with Support Vector Machines: The solution is given by the hyperplane that yields the maximum margin over all possible label assignments of the unlabeled data. The possible label assignments grow exponentially in the number of unlabeled data, and for practically applicable solutions, the algorithm in Table 1 must be approximated. An example of such an approximation is described in T. Joachims, Transductive inference for text classification using support vector machines, Technical report, Universitaet Dortmund, LS VIII, 1999 (Joachims).
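- The enumeration in Table 1 can be sketched naively as below (illustrative only: scikit-learn's LinearSVC is used as a stand-in maximum-margin solver, which the patent does not prescribe). The loop over every labeling is what grows exponentially and motivates approximations such as Joachims':

```python
from itertools import product

import numpy as np
from sklearn.svm import LinearSVC  # stand-in maximum-margin solver (assumption)

def transductive_svm_bruteforce(X, y, X_unlabeled):
    """Naive sketch of Table 1: try every label assignment of the unlabeled
    examples and keep the one whose separating hyperplane has the largest
    margin (i.e. the smallest weight-vector norm)."""
    best = None
    for labels in product([-1, 1], repeat=len(X_unlabeled)):
        X_all = np.vstack([X, X_unlabeled])
        y_all = np.concatenate([y, labels])
        svm = LinearSVC(C=1.0).fit(X_all, y_all)
        margin = 1.0 / np.linalg.norm(svm.coef_)   # larger margin is better
        if best is None or margin > best[0]:
            best = (margin, labels, svm)
    return best

X = np.array([[-1.0, 0.0], [1.0, 0.0]])        # two labeled training examples
y = np.array([-1, 1])
X_u = np.array([[-0.4, 0.1], [0.5, -0.2]])     # unlabeled examples
margin, labels, svm = transductive_svm_bruteforce(X, y, X_u)
print("chosen labels for the unlabeled points:", labels)
```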
- a label expectation of zero can be obtained by a fixed class prior probability equal to 1/2 or a class prior probability that is a random variable with a uniform prior distribution, i.e. an unknown class prior probability. Accordingly, in applications with known class prior probabilities that are not equal to 1/2 the algorithm could be improved by incorporating this additional information.
- Inductive MED classification assumes a prior distribution over the parameters of the decision function, a prior distribution over the bias term, and a prior distribution over margins. It selects as a final distribution over these parameters the one that is closest to the prior distributions and yields an expected decision function that classifies the data points correctly.
- the problem is formulated as follows: Find the distribution over hyperplane parameters $p(\Theta)$, the bias $p(b)$, and the data points' classification margins $p(\gamma)$ whose combined probability distribution has a minimal Kullback-Leibler divergence KL to the combined respective prior distributions $p_0$, i.e.
- Transductive MED classification Require Data matrix X of labeled and unlabeled training examples.
- the label prior distribution is a δ function, thus, effectively fixing the label to be either +1 or -1.
- the label induction step determines the label probability distribution given a fixed probability distribution for the hyperplane parameters. Using the margin and label priors introduced above yields the following objective function for the label induction step (see Table 2)
- unlabeled data points outside the margin, i.e. $|s| > 1$
- data points close to the margin, i.e. $|s| \approx 1$, yield the highest absolute expected label values
- the M step of the transductive classification algorithm of Jaakkola determines the probability distributions for the hyperplane parameters, the bias term, and margins of the data points that are closest to the respective prior distribution under the constraints
- $\forall t:\; s_t \langle y_t \rangle - \langle \gamma_t \rangle \ge 0, \qquad (5)$
- where $s_t$ is the $t$-th data point's classification score, $\langle y_t \rangle$ its expected label, and $\langle \gamma_t \rangle$ its expected margin.
- the expected label for unlabeled data lies in the interval (-1, +1) and is estimated in the label induction step.
- unlabeled data have to fulfill tighter classification constraints than labeled data since the classification score is scaled by the expected label.
- unlabeled data close to the separating hyperplane have the most stringent classification constraints, since their score as well as the absolute value of their expected label are small.
- the M step's full objective function given the prior distributions mentioned above is
- the first term is derived from the Gaussian hyperplane parameters prior distribution
- the second term is the margin prior regularization term
- the last term is the bias prior regularization term derived from a Gaussian prior with zero mean and a given variance.
- the prior distribution over the bias term can be interpreted as a prior distribution over class prior probabilities. Accordingly, the regularization term that corresponds to the bias prior distribution constrains the weight of the positive to negative examples. According to Eq. 6, the contribution of the bias term is minimized in case the collective pull of the positive examples on the hyperplane equals the collective pull of the negative examples.
- the collective constraint on the Lagrange Multipliers owing to the bias prior is weighted by the expected label of the data points and is, therefore, less restrictive for unlabeled data than for labeled data. Thus, unlabeled data have the ability to influence the final solution more strongly than the labeled data.
- unlabeled data have to fulfill stricter classification constraints than the labeled data and their cumulative weight to the solution is less constrained than for labeled data.
- unlabeled data with an expected label close to zero that lie within the margin of the current M step influence the solution the most.
- the resulting net effect of formulating the E and M step this way is illustrated by applying this algorithm to the dataset shown in Fig. 2.
- the dataset includes two labeled examples, a negative example (x) at x-position -1 and a positive example (+) at +1, and six unlabeled examples (o) between -1 and +1 along the x- axis.
- the cross (x) denotes a labeled negative example, the plus sign (+) a labeled positive example, and the circles (o) unlabeled data.
- the different plots show separating hyperplanes determined at various iterations of the M step.
- the one unlabeled data point with a negative x-value is closer than any other unlabeled data to this separating hyperplane.
- the M step suffers from a kind of short-sightedness, where the unlabeled data point closest to the current separating hyperplane determines the final position of the plane the most and the data points further away are not very important.
- One preferred approach of the present invention employs transductive classification using the framework of Maximum Entropy Discrimination (MED). It should be understood that various embodiments of the present invention, while applicable for classification, may also be applicable to other MED learning problems using transduction, including, but not limited to, transductive MED regression and graphical models.
- the final solution is the expectation of all possible solutions according to the probability distribution that is closest to the assumed prior probability distribution under the constraint that the expected solution describes the training data correctly.
- the prior probability distribution over solutions maps to a regularization term, i.e. by choosing a specific prior distribution one has selected a specific regularization term.
- Discriminative estimation as applied by Support Vector Machines is effective in learning from few examples. The method and apparatus of one embodiment of the present invention has this in common with Support Vector Machines and does not attempt to estimate more than is necessary for solving the classification problem at hand.
- the method and apparatus of one embodiment of the present invention using Maximum Entropy Discrimination bridges the gap between pure discriminative, e.g. Support Vector Machine learning, and generative model estimation.
- the method of one embodiment of the present invention as shown in Table 3 is an improved transductive MED classification algorithm that does not have the instability problem of the method discussed in Jaakkola, referenced herein. Differences include, but are not limited to, that in one embodiment of the present invention every data point has its own cost factor proportional to its absolute label expectation value $|\langle y \rangle|$.
- each data point's label prior probability is updated after each M step according to the estimated class membership probability as a function of the data point's distance to the decision function.
- unlabeled data have small cost factors, yielding an expected label as a function of the classification score that is very flat (see Fig. 1) and, accordingly, to some extent all unlabeled data are allowed to pull on the hyperplane, albeit only with small weight.
- the prior distribution over decision function parameters incorporates important prior knowledge of the specific classification problem at hand.
- Other prior distributions of decision function parameters important for classification problems are, for example, a multinomial distribution, a Poisson distribution, a Cauchy distribution (Breit-Wigner), a Maxwell-Boltzmann distribution, or a Bose-Einstein distribution.
- the prior distribution over the threshold b of the decision function is given by a Gaussian distribution with mean $\mu_b$ and variance $\sigma_b^2$.
- $s_t$ is the $t$-th data point's classification score determined in the previous M step and $P_0(y_t)$ the data point's binary label prior probability.
- the section herein entitled M STEP describes the algorithm to solve the M step objective function. Also, the section herein entitled E STEP describes the E step algorithm.
- the step EstimateClassProbability in line 5 of Table 3 uses the training data to determine the calibration parameters to turn classification scores into class membership probabilities, i.e. the probability of the class given the score
- Relevant methods for estimating the score calibration to probabilities are described in J. Platt, Probabilistic outputs for support vector machines and comparison to regularized likelihood methods, pages 61-74, 2000 (Platt) and B. Zadrozny and C. Elkan, Transforming classifier scores into accurate multi-class probability estimates, 2002 (Zadrozny).
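- A minimal sketch of such a calibration in the spirit of Platt scaling (an illustration, not the patent's EstimateClassProbability step): a one-dimensional logistic regression fits the sigmoid that maps classification scores to class membership probabilities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_score_calibration(scores, labels):
    """Fit P(class | score) = 1 / (1 + exp(A*score + B)) on training scores;
    the fitted coefficients play the role of the calibration parameters."""
    calib = LogisticRegression()
    calib.fit(np.asarray(scores).reshape(-1, 1), labels)
    return calib

scores = [-2.1, -1.3, -0.2, 0.4, 1.1, 2.5]   # classification scores (toy data)
labels = [0, 0, 0, 1, 1, 1]                  # known class memberships
calib = fit_score_calibration(scores, labels)
print(calib.predict_proba([[0.1]])[0, 1])    # class membership probability at score 0.1
```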
- the cross (x) denotes a labeled negative example, the plus sign (+) a labeled positive example, and the circles (o) unlabeled data.
- the different plots show separating hyperplanes determined at various iterations of the M step.
- the 20-th iteration shows the final solution selected by the improved transductive MED classifier.
- Fig. 3 shows the improved transductive MED classification algorithm applied to the toy dataset introduced above.
- the method 100 begins at step 102 and at step 104 accesses stored data 106.
- the data is stored at a memory location and includes labeled data, unlabeled data and at least one predetermined cost factor.
- the data 106 includes data points having assigned labels.
- the assigned labels identify whether a labeled data point is intended to be included within a particular category or excluded from a particular category.
- step 108 determines the label prior probabilities of the data point using the label information of data point. Then, at step 110 the expected labels of the data point are determined according to the label prior probability.
- step 112 includes iterative training of the transductive MED classifier by scaling the cost factors of the unlabeled data points. In each iteration of the calculation the unlabeled data points' cost factors are scaled. As such, the MED classifier learns through repeated iterations of calculations.
- the trained classifier then accesses input data 114 at step 116. The trained classifier can then complete the step of classifying the input data at step 118, and the method terminates at step 120.
- the unlabeled data of 106 and the input data 114 may be derived from a single source.
- the input data/unlabeled data can be used in the iterative process of 112 which is then used to classify at 118.
- the input data 114 may include a feedback mechanism to supply the input data to the stored data at 106 such that the MED classifier of 112 can dynamically learn from new data that is input.
- a control flow diagram is illustrated showing another method of classification of unlabeled data of one embodiment of the present invention including user defined prior probability information.
- the method 200 begins at step 202 and at step 204 accesses stored data 206.
- the data 206 includes labeled data, unlabeled data, a predetermined cost factor, and prior probability information provided by a user.
- the labeled data of 206 includes data points having assigned labels. The assigned labels identify whether the labeled data point is intended to be included within a particular category or excluded from a particular category.
- expected labels are calculated from the data of 206.
- the expected labels are then used in step 210 along with labeled data, unlabeled data and cost factors to conduct iterative training of a transductive MED classifier.
- the iterative calculations of 210 scale the cost factors of the unlabeled data at each calculation. The calculations continue until the classifier is properly trained.
- the trained classifier then accesses input data at 214 from input data 212.
- the trained classifier can then complete the step of classifying input data at step 216.
- the input data and the unlabeled data may derive from a single source and may be put into the system at both 206 and 212.
- the input data 212 can influence the training at 210 such that the process may dynamically change over time with continuing input data.
- a monitor may determine whether or not the system has reached convergence. Convergence may be determined when the change of the hyperplane between each iteration of the MED calculation falls below a predetermined threshold value. In an alternative embodiment of the present invention, the threshold value can be determined when the change of the determined expected label falls below a predetermined threshold value. If convergence is reached, then the iterative training process may cease. Referring particularly to Fig. 6, illustrated is a more detailed control flow diagram of the iterative training process of at least one embodiment of the method of the present invention.
- the process 300 commences at step 302 and at step 304 data is accessed from data 306 and may include labeled data, unlabeled data, at least one predetermined cost factor, and prior probability information.
- the labeled data points of 306 include a label identifying whether the data point is a training example for data points to be included in the designated category or a training example for data points to be excluded from a designated category.
- the prior probability information of 306 includes the probability information of labeled data sets and unlabeled data sets.
- step 308 expected labels are determined from the data from the prior probability information of 306.
- in step 310 the cost factor is scaled for each unlabeled data point in proportion to the absolute value of the expected label of the data point.
- An MED classifier is then trained in step 312 by determining the decision function that maximizes the margin between the included training and excluded training examples utilizing the labeled as well as the unlabeled data as training examples according to their expected labels.
- step 314 classification scores are determined using the trained classifier of 312.
- classification scores are calibrated to class membership probability.
- label prior probability information is updated according to the class membership probability.
- An MED calculation is performed in step 320 to determine label and margin probability distributions, wherein the previously determined classification scores are used in the MED calculation.
- in step 322 new expected labels are computed, and the expected labels are updated in step 324 using the computations from step 322.
- step 326 the method determines whether convergence has been achieved. If so, the method terminates at step 328. If convergence is not reached, another iteration of the method is completed starting with step 310. Iterations are repeated until convergence is reached thus resulting in an iterative training of the MED classifier. Convergence may be reached when change of the decision function between each iteration of the MED calculation falls below a predetermined value. In an alternative embodiment of the present invention, convergence may be reached when the change of the determined expected label value falls below a predetermined threshold value.
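- The loop of Fig. 6 can be sketched as below. This is an illustrative approximation only: a standard linear SVM with per-example sample weights stands in for the MED M step (an assumption; a full implementation would solve the MED objective), score calibration uses a simple logistic fit, and all names and parameter values are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC  # stand-in for the MED M step (assumption)

def train_transductive(X_lab, y_lab, X_unl, base_cost=1.0, max_iter=50, tol=1e-3):
    """Sketch of the Fig. 6 loop: scale each unlabeled example's cost factor by
    |expected label|, retrain, calibrate scores into class membership
    probabilities, update the expected labels, and stop on convergence."""
    y_exp = np.zeros(len(X_unl))               # expected labels start neutral
    X_all = np.vstack([X_lab, X_unl])
    clf = None
    for _ in range(max_iter):
        # Cost factors: fixed for labeled data, proportional to |<y>| for unlabeled.
        weights = np.concatenate([np.full(len(X_lab), base_cost),
                                  base_cost * np.abs(y_exp) + 1e-3])
        y_all = np.concatenate([y_lab, np.where(y_exp >= 0, 1, -1)])
        clf = SVC(kernel="linear", C=1.0).fit(X_all, y_all, sample_weight=weights)

        # Calibrate classification scores to class membership probabilities.
        calib = LogisticRegression().fit(
            clf.decision_function(X_lab).reshape(-1, 1), y_lab)
        p_pos = calib.predict_proba(
            clf.decision_function(X_unl).reshape(-1, 1))[:, 1]

        y_new = 2.0 * p_pos - 1.0              # expected label in (-1, +1)
        if np.max(np.abs(y_new - y_exp)) < tol:
            break                              # expected labels have converged
        y_exp = y_new
    return clf, y_exp
```

- In this sketch the convergence test monitors the change of the expected labels; an equivalent test on the change of the decision function, as described above, could be used instead.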
- Fig. 7 illustrates a network architecture 700, in accordance with one embodiment.
- a plurality of remote networks 702 are provided including a first remote network 704 and a second remote network 706.
- a gateway 707 may be coupled between the remote networks 702 and a proximate network 708.
- the networks 704, 706 may each take any form including, but not limited to a LAN, a WAN such as the Internet, PSTN, internal telephone network, etc.
- the gateway 707 serves as an entrance point from the remote networks 702 to the proximate network 708.
- the gateway 707 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 707, and a switch, which furnishes the actual path in and out of the gateway 707 for a given packet.
- At least one data server 714 is coupled to the proximate network 708, and is accessible from the remote networks 702 via the gateway 707. It should be noted that the data server(s) 714 may include any type of computing device/groupware. Coupled to each data server 714 is a plurality of user devices 716. Such user devices 716 may include a desktop computer, laptop computer, hand-held computer, printer or any other type of logic. It should be noted that a user device 717 may also be directly coupled to any of the networks, in one embodiment.
- a facsimile machine 720 or series of facsimile machines 720 may be coupled to one or more of the networks 704, 706, 708.
- databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 704, 706, 708.
- a network element may refer to any component of a network.
- Fig. 8 shows a representative hardware environment associated with a user device 716 of Fig. 7, in accordance with one embodiment.
- The figure illustrates a typical hardware configuration of a workstation having a central processing unit 810, such as a microprocessor, and a number of other units interconnected via a system bus 812.
- the workstation shown in Fig. 8 includes a Random Access Memory (RAM) 814, a Read Only Memory (ROM), an I/O adapter 818 for connecting peripheral devices such as disk storage units 820 to the bus 812, a user interface adapter 822 for connecting a keyboard 824, a mouse 826, a speaker 828, a microphone 832, and/or other user interface devices such as a touch screen and a digital camera (not shown) to the bus 812, a communication adapter 834 for connecting the workstation to a communication network 835 (e.g., a data processing network), and a display adapter 836 for connecting the bus 812 to a display device 838.
- One embodiment of the present invention comprises a memory device 814 for storing labeled data 416.
- the labeled data points 416 each include a label indicating whether the data point is a training example for data points being included in the designated category or a training example for data points being excluded from a designated category.
- Memory 814 also stores unlabeled data 418, prior probability data 420 and the cost factor data 422.
- the processor 810 accesses the data from the memory 814 and, using transductive MED calculations, trains a binary classifier to enable it to classify unlabeled data.
- the processor 810 uses iterative transductive calculation by using the cost factor and training examples from labeled and unlabeled data and scaling that cost factor as a function of the expected label value, thus affecting the cost factor data 422, which is then re-input into the processor 810.
- the cost factor 422 changes with each iteration of the MED classification by the processor 810. Once the processor 810 adequately trains an MED classifier, the processor can then construct the classifier to classify the unlabeled data into classified data 424.
- Transductive SVM and MED formulations of the prior art lead to an exponential growth of possible label assignments, and approximations have to be developed for practical applications.
- a different formulation of the transductive MED classification is introduced that does not suffer from an exponential growth of possible label assignments and allows a general closed form solution.
- $\Theta \cdot X$ is the dot product between the separating hyperplane's weight vector and the data point's feature vector.
- this embodiment of the present invention finds a separating hyperplane that is a compromise of being closest to the chosen prior distribution, separating the labeled data correctly, and having no unlabeled data between the margins.
- the advantage is that no prior distribution over labels has to be introduced, thus, avoiding the problem of exponentially growing label assignments.
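- In notation consistent with Eq. 5, the constraints of this formulation can be sketched as follows (a paraphrase of the description above, not the patent's exact equations):

$$ s_t\, y_t \;\ge\; \langle \gamma_t \rangle \ \text{for labeled data}, \qquad \lvert s_t \rvert \;\ge\; \langle \gamma_t \rangle \ \text{for unlabeled data}, \quad \text{where } s_t = \Theta \cdot X_t + b . $$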
- using the prior distributions given in Eqs. 7, 8, and 9 for the hyperplane parameters, the bias, and the margins yields the following partition function
- G3 = G1 - 2 G2
- The objective function can be solved by applying similar techniques as in the case of known labels, as discussed in the section herein entitled M Step. The difference is that the matrix G3 in the quadratic form of the maximum margin term now has off-diagonal terms.
- MED can be applied to solve classification of data and, in general, any kind of discriminant function and prior distribution, as well as regression and graphical models (T. Jebara, Machine Learning: Discriminative and Generative, Kluwer Academic Publishers) (Jebara).
- the applications of the embodiments of the present invention can be formulated as pure inductive learning problems with known labels, or as transductive learning problems with labeled as well as unlabeled training examples.
- the improvements to the transductive MED classification algorithm described in Table 3 are applicable as well to general transductive MED classification, transductive MED regression, and transductive MED learning of graphical models.
- the word "classification" may include regression or graphical models.
- ∀t: 0 ≤ λt ≤ c.
- the bias equals the expected bias, yielding
- the gap can also be measured as a way to determine numerical convergence.
- the method of this alternate embodiment differs in that only one example can be optimized at a time. Therefore, the training heuristic is to alternate between the examples in I0 and all of the examples every other time.
- s_t is the t-th data point's classification score determined in the previous M step.
- the Lagrange multipliers λt are determined by maximizing the objective function.
- Eq. 35 cannot be solved analytically, but has to be solved numerically, e.g. by applying a linear search for each unlabeled example's Lagrange multiplier that satisfies Eq. 35.
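- Since Eq. 35 has to be solved numerically, a one-dimensional search of the kind mentioned above can be sketched as follows. This is only an illustrative bisection under the assumption that the derivative of the objective with respect to a single unlabeled example's Lagrange multiplier is available as a callable and changes sign exactly once on the search interval; all names are hypothetical.

```python
def solve_multiplier(gradient, lo=0.0, hi=1.0, tol=1e-6):
    """Bisect for the multiplier value at which `gradient` crosses zero.

    Assumptions for this sketch only: `gradient` is positive at `lo`,
    negative at `hi`, and monotone in between."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gradient(mid) > 0:
            lo = mid  # zero crossing lies to the right of mid
        else:
            hi = mid  # zero crossing lies at or to the left of mid
    return 0.5 * (lo + hi)
```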
- labeled data points are received at step 1002, where each of the labeled data points has at least one label which indicates whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
- unlabeled data points are received at step 1004, as well as at least one predetermined cost factor of the labeled data points and unlabeled data points.
- the data points may contain any medium, e.g. words, images, sounds, etc. Prior probability information of labeled and unlabeled data points may also be received.
- the label of the included training example may be mapped to a first numeric value, e.g. +1, etc.
- the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
- the labeled data points, unlabeled data points, input data points, and at least one predetermined cost factor of the labeled data points and unlabeled data points may be stored in a memory of a computer.
- a transductive MED classifier is trained through iterative calculation using said at least one cost factor and the labeled data points and the unlabeled data points as training examples. For each iteration of the calculations, the unlabeled data point cost factor is adjusted as a function of an expected label value, e.g. the absolute value of the expected label of a data point.
- the transductive classifier may learn using prior probability information of the labeled and unlabeled data, which further improves stability.
- the iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
- the trained classifier is applied to classify at least one of the unlabeled data points, the labeled data points, and input data points.
- Input data points may be received before or after the classifier is trained, or may not be received at all.
- the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data points as learning examples according to their expected label. Alternatively, the decision function may be determined with minimal KL divergence using a multinomial distribution for the decision function parameters.
- a classification of the classified data points, or a derivative thereof is output to at least one of a user, another system, and another process.
- the system may be remote or local.
- Examples of the derivative of the classification may be, but are not limited to, the classified data points themselves, a representation or identifier of the classified data points or host file/document, etc.
- computer executable program code is deployed to and executed on a computer system.
- This program code comprises instructions for accessing stored labeled data points in a memory of a computer, where each of said labeled data points has at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
- the computer code comprises instructions for accessing unlabeled data points from a memory of a computer as well as accessing at least one predetermined cost factor of the labeled data points and unlabeled data points from a memory of a computer. Prior probability information of labeled and unlabeled data points stored in a memory of a computer may also be accessed.
- the label of the included training example may be mapped to a first numeric value, e.g. +1, etc.
- the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
- the program code comprises instructions for training a transductive classifier through iterative calculation, using the at least one stored cost factor and the stored labeled data points and stored unlabeled data points as training examples. Also, for each iteration of the calculation, the unlabeled data point cost factor is adjusted as a function of the expected label value of the data point, e.g. the absolute value of the expected label of a data point. Also, for each iteration, the prior probability information may be adjusted according to an estimate of a data point class membership probability. The iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
- the program code comprises instructions for applying the trained classifier to classify at least one of the unlabeled data points, the labeled data points, and input data points, as well as instructions for outputting a classification of the classified data points, or derivative thereof, to at least one of a user, another system, and another process.
- the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
- a data processing apparatus comprises at least one memory for storing: (i) labeled data points, wherein each of said labeled data points has at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category; (ii) unlabeled data points; and (iii) at least one predetermined cost factor of the labeled data points and unlabeled data points.
- the memory may also store prior probability information of labeled and unlabeled data points.
- the label of the included training example may be mapped to a first numeric value, e.g. +1, etc.
- the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
- the data processing apparatus comprises a transductive classifier trainer to iteratively teach the transductive classifier using transductive Maximum Entropy Discrimination (MED) using the at least one stored cost factor and stored labeled data points and stored unlabeled data points as training examples.
- the cost factor of the unlabeled data point is adjusted as a function of the expected label value of the data point, e.g. the absolute value of the expected label of a data point, etc.
- the prior probability information may be adjusted according to an estimate of a data point class membership probability.
- the apparatus may further comprise a means for determining the convergence of data values, e.g. when the change of the decision function of the transductive classifier calculation falls below a predetermined threshold value, when the change of the determined expected label values falls below a predetermined threshold value, etc., and terminating calculations upon determination of convergence.
- a trained classifier is used to classify at least one of the unlabeled data points, the labeled data points, and input data points.
- the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined by a processor utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
- a classification of the classified data points, or derivative thereof is output to at least one of a user, another system, and another process.
- an article of manufacture comprises a program storage medium readable by a computer, where the medium tangibly embodies one or more programs of instructions executable by a computer to perform a method of data classification.
- labeled data points are received, where each of the labeled data points has at least one label which indicates whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
- unlabeled data points are received, as well as at least one predetermined cost factor of the labeled data points and unlabeled data points.
- Prior probability information of labeled and unlabeled data points may also be stored in a memory of a computer.
- the label of the included training example may be mapped to a first numeric value, e.g. +1, etc.
- the label of the excluded training example may be mapped to a second numeric value, e.g. -1, etc.
- a transductive classifier is trained with iterative Maximum Entropy Discrimination (MED) calculation using the at least one stored cost factor and the stored labeled data points and the unlabeled data points as training examples.
- the unlabeled data point cost factor is adjusted as a function of an expected label value of the data point, e.g. the absolute value of the expected label of a data point, etc.
- the prior probability information may be adjusted according to an estimate of a data point class membership probability.
- the iterative step of training a transductive classifier may be repeated until the convergence of data values is reached, e.g. when the change of the decision function of the transductive classifier falls below a predetermined threshold value, when the change of the determined expected label value falls below a predetermined threshold value, etc.
- input data points are accessed from the memory of a computer, and the trained classifier is applied to classify at least one of the unlabeled data points, the labeled data points, and input data points.
- the decision function that minimizes the KL divergence to the prior probability distribution of the decision function parameters given the included and excluded training examples may be determined utilizing the labeled as well as the unlabeled data as learning examples according to their expected label.
- a classification of the classified data points, or a derivative thereof, is output to at least one of a user, another system, and another process.
- a method for classification of unlabeled data in a computer-based system is presented.
- labeled data points are received, each of said labeled data points having at least one label indicating whether the data point is a training example for data points being included in a designated category or a training example for data points being excluded from a designated category.
- labeled and unlabeled data points are received, as is prior label probability information of labeled data points and unlabeled data points. Further, at least one predetermined cost factor of the labeled data points and unlabeled data points is received.
- the expected labels for each labeled and unlabeled data point are determined according to the label prior probability of the data point. The following substeps are repeated until substantial convergence of data values:
- a classification of the input data points, or derivative thereof is output to at least one of a user, another system, and another process.
- Convergence may be reached when the change of the decision function falls below a predetermined threshold value. Additionally, convergence may also be reached when the change of the determined expected label value falls below a predetermined threshold value.
- the label of the included training example may have any value, for example, a value of +1, and the label of the excluded training example may have any value, for example, a value of -1.
- a method for classifying documents is presented in Fig. 11.
- at least one seed document having a known confidence level is received in step 1100, as well as unlabeled documents and at least one predetermined cost factor.
- the seed document and other items may be received from a memory of a computer, from a user, from a network connection, etc., and may be received after a request from the system performing the method.
- the at least one seed document may have a label indicative of whether the document is included in a designated category, may contain a list of keywords, or have any other attribute that may assist in classifying documents.
- a transductive classifier is trained through iterative calculation using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value.
- a data point label prior probability for the labeled and unlabeled documents may also be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
- in step 1104, confidence scores are stored for the unlabeled documents, and identifiers of the unlabeled documents having the highest confidence scores are output in step 1106 to at least one of a user, another system, and another process.
- the identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
- confidence scores may be stored after each of the iterations, wherein an identifier of the unlabeled document having the highest confidence score after each iteration is output.
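- A minimal sketch, under assumed names, of the bookkeeping in steps 1104 and 1106: the confidence score of every unlabeled document is stored after each iteration, and the identifier with the highest score per iteration is collected for output. Here `score_documents` is a hypothetical callable standing in for one label-induction iteration of the transductive classifier.

```python
from typing import Callable, Dict, List, Tuple


def run_and_report(score_documents: Callable[[int], Dict[str, float]],
                   n_iterations: int) -> Tuple[List[Dict[str, float]], List[str]]:
    history = []            # confidence scores stored after each iteration
    top_per_iteration = []  # highest-confidence identifier per iteration
    for iteration in range(n_iterations):
        scores = score_documents(iteration)
        history.append(scores)
        top_per_iteration.append(max(scores, key=scores.get))
    return history, top_per_iteration
```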
- One embodiment of the present invention is capable of discovering patterns that link the initial document to the remaining documents.
- the task of discovery is one area where this pattern discovery proves particularly valuable. For instance, in pre-trial legal discovery, a large number of documents has to be researched with regard to possible connections to the lawsuit at hand. The ultimate goal is to find the "smoking gun."
- a common task for inventors, patent examiners, and patent lawyers is to evaluate the novelty of a technology through a prior art search. In particular, the task is to search all published patents and other publications and to find documents within this set that might be related to the specific technology being examined for novelty.
- the task of discovery involves finding a document or a set of documents within a set of data. Given an initial document or concept, a user may want to discover documents that are related to the initial document or concept. However, the notion of relationship between the initial document or concept and the target documents, i.e. the documents that are to be discovered, is only well understood after the discovery has taken place. By learning from labeled and unlabeled documents, concepts, etc., the present invention can learn patterns and relationships between the initial document or documents and the target documents. In another embodiment of the present invention, a method for analyzing documents associated with legal discovery is presented in Fig. 12. In use, documents associated with a legal matter are received in step 1200.
- Such documents may include electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. Additionally, a document classification technique is performed on the documents in step 1202. Further, identifiers of at least some of the documents are output in step 1204 based on the classification thereof. As an option, a representation of links between the documents may also be output.
- the document classification technique may include any type of process, e.g. a transductive process, etc.
- a transductive classifier is trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the documents associated with the legal matter.
- the cost factor is preferably adjusted as a function of an expected label value, and the trained classifier is used to classify the received documents.
- This process may further comprise receiving a data point label prior probability for the labeled and unlabeled documents, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
- the document classification technique may include one or more of a support vector machine process and a maximum entropy discrimination process.
- a classifier is trained based on a search query in step 1300.
- a plurality of prior art documents are accessed in step 1302.
- Such prior art may include any information that has been made available to the public in any form before a given date.
- Such prior art may also or alternatively include any information that has not been made available to the public in any form before a given date.
- Illustrative prior art documents may be any type of documents, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, portions of a website, etc.
- a document classification technique is performed on at least some of the prior art documents in step 1304 using the classifier, and identifiers of at least some of the prior art documents are output in step 1306 based on the classification thereof.
- the document classification technique may include one or more of any process, including a support vector machine process, a maximum entropy discrimination process, or any inductive or transductive technique described above. Also or alternatively, a representation of links between the documents may also be output. In yet another embodiment, a relevance score of at least some of the prior art documents is output based on the classification thereof.
- the search query may include at least a portion of a patent disclosure.
- Illustrative patent disclosures include a disclosure created by an inventor summarizing the invention, a provisional patent application, a nonprovisional patent application, a foreign patent or patent application, etc.
- the search query includes at least a portion of a claim from a patent or patent application.
- the search query includes at least a portion of an abstract of a patent or patent application.
- the search query includes at least a portion of a summary from a patent or patent application.
- Fig. 27 illustrates a method for matching documents to claims.
- a classifier is trained based on at least one claim of a patent or patent application. Thus, one or more claims, or a portion thereof, may be used to train the classifier.
- a plurality of documents are accessed. Such documents may include prior art documents, documents describing potentially infringing or anticipating products, etc.
- a document classification technique is performed on at least some of the documents using the classifier.
- identifiers of at least some of the documents are output based on the classification thereof.
- a relevance score of at least some of the documents may also be output based on the classification thereof.
- An embodiment of the present invention may be used for the classification of patent applications.
- patents and patent applications are currently classified by subject matter using the United States Patent Classification (USPC) system.
- This task is currently performed manually, and therefore is very expensive and time consuming.
- Such manual classification is also subject to human errors. Compounding the complexity of such a task is that the patent or patent application may be classified into multiple classes.
- Fig. 28 depicts a method for classifying a patent application according to one embodiment.
- a classifier is trained based on a plurality of documents known to be in a particular patent classification. Such documents may typically be patents and patent applications (or portions thereof), but could also be summary sheets describing target subject matter of the particular patent classification.
- in step 2802, at least a portion of a patent or patent application is received. The portion may include the claims, summary, abstract, specification, title, etc.
- a document classification technique is performed on the at least the portion of the patent or patent application using the classifier.
- a classification of the patent or patent application is output. As an option, a user may manually verify the classification of some or all of the patent applications.
- The document classification technique is preferably a yes/no classification technique. In other words, if the probability that the document is in the proper class is above a threshold, the decision is yes: the document belongs in this class. If the probability that the document is in the proper class is below the threshold, the decision is no: the document does not belong in this class.
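- The yes/no decision described above reduces to a simple threshold test on the class-membership probability; the following Python sketch assumes such a probability is available from the classifier and the threshold value is purely illustrative.

```python
def belongs_to_class(membership_probability: float, threshold: float = 0.5) -> bool:
    # yes if the membership probability clears the threshold, no otherwise
    return membership_probability >= threshold
```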
- Fig. 29 depicts yet another method for classifying a patent application.
- a document classification technique is performed on at least the portion of a patent or patent application using a classifier that was trained based on at least one document associated with a particular patent classification.
- the document classification technique is preferably a yes/no classification technique.
- a classification of the patent or patent application is output.
- the respective method may be repeated using a different classifier that was trained based on a plurality of documents known to be in a different patent classification.
- classification of a patent should be based on the claims.
- one approach uses the Description of a patent to train, and classify an application based on its Claims.
- Another approach uses the Description and Claims to train, and classify based on the Abstract.
- whatever portion of a patent or application is used to train, that same type of content is used when classifying; i.e., if the system is trained on claims, the classification is based on claims.
- the document classification technique may include any type of process, e.g. a transductive process, etc.
- the classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the prior art documents, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and the trained classifier may be used to classify the prior art documents.
- a data point label prior probability for the seed document and prior art documents may also be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
- the seed document may be any document, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, a website, a patent disclosure, etc.
- Fig. 14 describes one embodiment of the present invention.
- a set of data is read. The discovery of documents within this set that are relevant to the user is desired.
- an initial seed document or documents are labeled.
- the documents may be any type of documents, e.g. publications of a patent office, data retrieved from a database, a collection of prior art, a website, etc. It is also possible to seed the transduction process with a string of different key words or a document provided by the user.
- a transductive classifier is trained using the labeled data as well as the set of unlabeled data in the given set. At each label induction step during the iterative transduction process, the confidence scores determined during label induction are stored.
- the documents that achieved high confidence scores at the label induction steps are displayed in step 1408 for the user. These documents with high confidence scores represent documents relevant to the user for purposes of discovery.
- the display may be in chronological order of the label induction steps starting with the initial seed document to the final set of documents discovered at the last label induction step.
- the cleanup and classification technique may include any type of process, e.g. a transductive process, etc.
- any inductive or transductive technique described above may be used.
- the keys of the entries in the database are utilized as labels associated with some confidence level according to the expected cleanliness of the database.
- the labels together with the associated confidence level, i.e. the expected labels, are then used to train a transductive classifier that corrects the labels (keys) in order to achieve a more consistent organization of the data in the database. For example, invoices have to be first classified according to the company or person that originated the invoice in order to enable automatic data extraction, e.g.
- training examples are needed to set up an automatic classification system.
- training examples provided by the customer often contain misclassified documents or other noise, e.g. fax cover sheets, that have to be identified and removed prior to training the automatic classification system in order to obtain accurate classification.
- the Patent Office undergoes a continuous reclassification process, in which they (1) evaluate an existing branch of their taxonomy for confusion, (2) restructure that taxonomy to evenly distribute overly congested nodes, and (3) reclassify existing patents into the new structure.
- the transductive learning methods presented herein may be used by the Patent Office, and the companies to which this work is outsourced, to re-evaluate their taxonomy and to assist them in (1) building a new taxonomy for a given main classification, and (2) reclassifying existing patents. Transduction learns from labeled and unlabeled data, whereby the transition from labeled to unlabeled data is fluent.
- a method for cleaning up data is presented in Fig. 15.
- a plurality of labeled data items are received in step 1500, and subsets of the data items for each of a plurality of categories are selected in step 1502. Additionally, an uncertainty for the data items in each subset is set in step 1504 to about zero, and an uncertainty for the data items not in the subsets is set in step 1506 to a predefined value that is not about zero.
- a transductive classifier is trained in step 1508 through iterative calculation using the uncertainties, the data items in the subsets, and the data items not in the subsets as training examples, and the trained classifier is applied to each of the labeled data items in step 1510 to classify each of the data items.
- a classification of the input data items, or derivative thereof is output in step 1512 to at least one of a user, another system, and another process.
- the subsets may be selected at random, or may be selected and verified by a user.
- the label of at least some of the data items may be changed based on the classification.
- identifiers of data items having a confidence level below a predefined threshold after classification thereof may be output to a user.
- the identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
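- The uncertainty assignment of steps 1502 through 1506 can be sketched as follows; this is a hedged illustration with assumed names, in which a small randomly chosen subset per category is treated as verified (uncertainty of about zero) and every other item receives a predefined, non-zero noise level.

```python
import random


def assign_uncertainties(items_by_category, verified_per_category=10, noise_level=0.2):
    """items_by_category: {category: list of item ids}; returns {item id: uncertainty}."""
    uncertainties = {}
    for category, items in items_by_category.items():
        verified = set(random.sample(items, min(verified_per_category, len(items))))
        for item in items:
            # about zero for the verified subset, predefined value for the rest
            uncertainties[item] = 0.0 if item in verified else noise_level
    return uncertainties
```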
- two choices to start a cleanup process are presented to the user at step 1600.
- One choice is fully automatic cleanup at step 1602, where for each concept or category a specified number of documents are randomly selected and assumed to be correctly organized.
- alternatively, a number of documents can be flagged for manual review and verification that one or more label assignments for each concept or category are being correctly organized.
- An estimate of the noise level in the data is received at step 1606.
- the transductive classifier is trained in step 1610 using the verified (manually verified or randomly selected) data and the unverified data from step 1608. Once training is finished, the documents are reorganized according to the new labels. Documents with confidence levels in their label assignments below a specified threshold are displayed to the user for manual review in step 1612. Documents with confidence levels in their label assignments above the specified threshold are automatically corrected according to the transductive label assignments in step 1614.
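- The split in steps 1612 and 1614 is essentially a threshold on the post-training confidence of each label assignment; a minimal sketch with assumed names is shown below.

```python
def split_for_review(confidences, threshold):
    """confidences: {document id: confidence of its new label assignment}."""
    manual_review = [doc for doc, c in confidences.items() if c < threshold]   # step 1612
    auto_correct = [doc for doc, c in confidences.items() if c >= threshold]   # step 1614
    return manual_review, auto_correct
```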
- a method for managing medical records is presented in Fig. 17.
- a classifier is trained based on a medical diagnosis in step 1700, and a plurality of medical records is accessed in step 1702.
- a document classification technique is performed on the medical records in step 1704 using the classifier, and an identifier of at least one of the medical records having a low probability of being associated with the medical diagnosis is output in step 1706.
- the document classification technique may include any type of process, e.g. a transductive process, etc., and may include one or more of any inductive or transductive technique described above, including a support vector machine process, a maximum entropy discrimination process, etc.
- the classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the medical records, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and the trained classifier may be used to classify the medical records.
- a data point label prior probability for the seed document and medical records may also be received, wherein for each
- Another embodiment of the present invention accounts for dynamic, shifting classification concepts. For example, in forms processing applications documents are classified using the layout information and/or the content information of the documents to classify the documents for further processing.
- transductive classification adapts to these changes automatically, yielding the same or comparable classification accuracy despite the drifting classification concepts. This is in contrast to rule based systems or inductive classification methods that, without manual adjustments, will start to suffer in classification accuracy owing to the concept drift.
- invoice processing traditionally involves inductive learning or rule-based systems that utilize the invoice layout. Under these traditional systems, if a change in the layout occurs, the systems have to be manually reconfigured by either labeling new training data or by determining new rules.
- transduction makes the manual reconfiguration unnecessary by automatically adapting to the small changes in layout of the invoices.
- transductive classification may be applied to the analysis of customer complaints in order to monitor the changing nature of such complaints. For example, a company can automatically link product changes with customer complaints.
- Transduction may also be used in the classification of news articles. For example, news articles on the war on terror, starting with articles about the terrorist attacks on September 11, 2001, over the war in Afghanistan, to news stories about the situation in today's Iraq, can be automatically identified using transduction.
- the classification of organisms can change over time through evolution by creating new species of organisms and other species becoming extinct.
- This and other principles of a classification schema or taxonomy can be dynamic, with classification concepts shifting or changing over time.
- Fig. 18 shows an embodiment of the invention using transduction given drifting classification concepts.
- Document set Di enters the system at time ti, as shown in step 1802.
- a transductive classifier Ci is trained using labeled data and the unlabeled data accumulated so far, and in step 1806 the documents in set Di are classified. If the manual mode is used, documents with a confidence level below a user supplied threshold, as determined in step 1808, are presented to the user for manual review in step 1810.
- a document with a confidence level below the threshold triggers the creation of a new category that is added to the system, and the document is then assigned to the new category.
- Documents with a confidence level above the chosen threshold are classified into the current categories 1 to N in steps 1820A-B. All documents that have been classified prior to time ti into the current categories are reclassified by the classifier Ci in step 1822, and all documents that are no longer classified into their previously assigned categories are moved to new categories in steps 1824 and 1826.
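- The per-time-step flow of Fig. 18 can be sketched as follows; the classifier interface (`classify` returning a category and a confidence) and all other names are assumptions for illustration, and the low-confidence branch is shown only as routing to a review queue rather than distinguishing the manual and automatic modes.

```python
def process_time_step(classifier, incoming, docs, filed, threshold, review_queue):
    """incoming: ids of newly arrived documents; docs: {id: content};
    filed: {id: category} for documents classified in earlier time steps."""
    for doc_id in incoming:
        category, confidence = classifier.classify(docs[doc_id])  # assumed API
        if confidence < threshold:
            review_queue.append(doc_id)      # manual review or creation of a new category
        else:
            filed[doc_id] = category
    # reclassify everything filed so far with the newly trained classifier
    for doc_id in list(filed):
        category, _ = classifier.classify(docs[doc_id])
        if category != filed[doc_id]:
            filed[doc_id] = category         # move to the newly assigned category
```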
- a method for adapting to a shift in document content is presented in Fig. 19.
- Document content may include, but is not limited to, graphical content, textual content, layout, numbering, etc.
- Examples of shift may include temporal shift, style shift (where 2 or more people work on one or more documents), shift in process applied, shift in layout, etc.
- in step 1900, at least one labeled seed document is received, as well as unlabeled documents and at least one predetermined cost factor.
- the documents may include, but are not limited to, customer complaints, invoices, form documents, receipts, etc.
- a transductive classifier is trained in step 1902 using the at least one predetermined cost factor, the at least one seed document, and the unlabeled documents.
- step 1904 the unlabeled documents having a confidence level above a predefined threshold are classified into a plurality of categories using the classifier, and at least some of the categorized documents are reclassified in step 1906 into the categories using the classifier.
- identifiers of the categorized documents are output in step 1908 to at least one of a user, another system, and another process.
- the identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc. Further, product changes may be linked with customer complaints, etc.
- an unlabeled document having a confidence level below the predefined threshold may be moved into one or more new categories.
- the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the at least one seed document, and the unlabeled documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and using the trained classifier to classify the unlabeled documents.
- a data point label prior probability for the seed document and unlabeled documents may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
- a method for adapting a patent classification to a shift in document content is presented in Fig. 20.
- in step 2000, at least one labeled seed document is received, as well as unlabeled documents.
- the unlabeled documents may include any types of documents, e.g. patent applications, legal filings, information disclosure forms, document amendments, etc.
- the seed document(s) may include patent(s), patent application(s), etc.
- a transductive classifier is trained in step 2002 using the at least one seed document and the unlabeled documents, and the unlabeled documents having a confidence level above a predefined threshold are classified into a plurality of existing categories using the classifier.
- the classifier may be any type of classifier, e.g. a transductive classifier, etc.
- the document classification technique may be any technique, e.g. a support vector machine process, a maximum entropy discrimination process, etc.
- any inductive or transductive technique described above may be used.
- the unlabeled documents having a confidence level below the predefined threshold are classified into at least one new category using the classifier, and at least some of the categorized documents are reclassified in step 2006 into the existing categories and the at least one new category using the classifier.
- identifiers of the categorized documents are output in step 2008 to at least one of a user, another system, and another process.
- the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the search query, and the documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and the trained classifier may be used to classify the documents.
- a data point label prior probability for the search query and documents may be received, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
- Yet another embodiment of the present invention accounts for document drift in the field of document separation.
- One example of document separation involves the processing of mortgage documents.
- Loan folders consisting of a sequence of different loan documents, e.g. loan applications, approvals, requests, amounts, etc., are scanned, and the different documents within the sequence of images have to be determined before further processing.
- the documents used are not static but can change over time. For example, tax forms used within a loan folder can change over time owing to legislation changes.
- Document separation solves the problem of finding document or subdocument boundaries in a sequence of images.
- Common examples that produce a sequence of images are digital scanners or Multi Functional Peripherals (MFPs).
- transduction can be utilized in Document separation in order to handle the drift of documents and their boundaries over time.
- Static separation systems like rule based systems or systems based on inductive learning solutions cannot adapt automatically to drifting separation concepts.
- the performance of these static separation systems degrades over time whenever a drift occurs.
- In order to keep the performance at its initial level, one either has to manually adapt the rules (in the case of a rule based system), or has to manually label new documents and relearn the system (in the case of an inductive learning solution). Either way is time- and cost-intensive.
- Applying transduction to Document separation allows the development of a system that automatically adapts to the drift in the separation concepts.
- a method for separating documents is presented in Fig. 21.
- labeled data are received, and in step 2102 a sequence of unlabeled documents is received.
- Such data and documents may include legal discovery documents, office actions, web page data, attorney-client correspondence, etc.
- probabilistic classification rules are adapted using transduction based on the labeled data and the unlabeled documents, and in step 2106 weights used for document separation are updated according to the probabilistic classification rules.
- locations of separations in the sequence of documents are determined, and in step 2110 indicators of the determined locations of the separations in the sequence are output to at least one of a user, another system, and another process.
- the indicators may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
- the documents are flagged with codes, the codes correlating to the indicators.
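- A simplified sketch of the separation decision in steps 2106 and 2108: assuming the transductively adapted rules yield, for every page, a score that the page begins a new document, separation locations are the pages whose score exceeds a threshold. The scoring model itself is not shown, and the threshold value is illustrative.

```python
def find_separations(page_scores, threshold=0.5):
    """page_scores[i]: score that page i starts a new document (page 0 trivially does)."""
    return [i for i, score in enumerate(page_scores) if i > 0 and score >= threshold]
```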
- Fig. 22 shows an implementation of the classification method and apparatus of the present invention used in association with document separation.
- Automatic document separation is used for reducing the manual effort involved in separating and identifying documents after digital scanning.
- One such document separation method combines classification rules to automatically separate sequences of pages by using inference algorithms to deduce the most likely separation from all of the available information, using the classification methods described therein.
- the classification method of transductive MED of the present invention is employed in document separation. More particularly, document pages 2200 are inserted into a digital scanner 2202 or MFP and are converted into a sequence of digital images 2204.
- the document pages may be pages from any type of document, e.g.
- the sequence of digital images is input at step 2206 to dynamically adapt probabilistic classification rules using transduction.
- Step 2206 utilizes the sequence of images 2204 as unlabeled data and labeled data 2208.
- the weights in the probabilistic network are updated and used for automatic document separation according to dynamically adapted classification rules.
- the output step 2212 dynamically adapts the automatic insertion of separation images, such that the sequence of digitized pages 2214 is interleaved with images of separator sheets 2216; step 2212 automatically inserts the separator sheet images into the image sequence.
- the software generated separator pages 2216 may also indicate the type of document that immediately follows or precedes the separator page 2216.
- the system described here automatically adapts to drifting separation concepts of the documents that occur over time without suffering from a decline in separation accuracy as would static systems like rule based or inductive machine learning based solutions.
- drifting separation or classification concepts in form processing applications are, as mentioned earlier, changes to documents owing to new legislation.
- the system as shown in Fig. 22 may be modified to a system as shown in Fig. 23, where the pages 2300 are inserted into a digital scanner 2302 or MFP and converted into a sequence of digital images 2304.
- the sequence of digital images is input at step 2306 to dynamically adapt probabilistic classification rules using transduction.
- Step 2306 utilizes the sequence of images 2304, as unlabeled data and labeled data 2308.
- Step 2310 updates weights in the probabilistic network used for automatic document separation according to dynamically adapted classification rules employed.
- in step 2312, instead of inserting separator sheet images as described above with reference to Fig. 22, the system dynamically adapts the automated insertion of separation information and flags the document images 2314 with a coded description.
- the document page images can be input into an image processing database 2316, and the documents can be accessed by the software identifiers.
- Yet another embodiment of the present invention is able to perform face recognition using transduction.
- the use of transduction has many advantages, for example the need for only a relatively small number of training examples, the ability to use unlabeled examples in training, etc.
- transductive face recognition may be implemented for criminal detection.
- the Department of Homeland Security must ensure that terrorists are not allowed onto commercial airliners.
- Part of an airport's screening process may be to take a picture of each passenger at the airport security checkpoint and attempt to recognize that person.
- the system could initially be trained using a small number of examples from the limited photographs available of possible terrorists. There may also be more unlabeled photographs of the same terrorist available in other law-enforcement databases that may also be used in training.
- a transductive trainer would take advantage of not only the initially sparse data to create a functional face-recognition system but would also use unlabeled examples from other sources to increase performance. After processing the photograph taken at the airport security checkpoint, the transductive system would be able to recognize the person in question more accurately than a comparable inductive system.
- a method for face recognition is presented in Fig. 24.
- in step 2400, at least one labeled seed image of a face is received, the seed image having a known confidence level.
- the at least one seed image may have a label indicative of whether the image is included in a designated category. Additionally, in step 2400 unlabeled images are received, as well as at least one predetermined cost factor.
- a transductive classifier is trained through iterative calculation using the at least one predetermined cost factor, the at least one seed image, and the unlabeled images, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value.
- in step 2404, confidence scores are stored for the unlabeled images.
- identifiers of the unlabeled images having the highest confidence scores are output to at least one of a user, another system, and another process.
- the identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
- confidence scores may be stored after each of the iterations, wherein an identifier of the unlabeled image having the highest confidence score after each iteration is output. Additionally, a data point label prior probability for the labeled and unlabeled images may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
- a third unlabeled image of a face, e.g., from the above airport security example, may be received, the third unlabeled image may be compared to at least some of the images having the highest confidence scores, and an identifier of the third unlabeled image may be output if a confidence exists that the face in the third unlabeled image is the same as the face in the seed image.
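- A minimal sketch of the comparison step just described, assuming face images have already been reduced to feature vectors: the new image is compared against the highest-confidence gallery images by cosine similarity, and an identifier is returned only if the best match clears a confidence threshold. The feature extraction, the similarity measure, and the threshold value are assumptions for illustration.

```python
import numpy as np


def match_face(new_vec, gallery, threshold=0.8):
    """gallery: {identifier: feature vector of a high-confidence image}."""
    best_id, best_sim = None, -1.0
    for identifier, vec in gallery.items():
        sim = float(np.dot(new_vec, vec) /
                    (np.linalg.norm(new_vec) * np.linalg.norm(vec)))
        if sim > best_sim:
            best_id, best_sim = identifier, sim
    return best_id if best_sim >= threshold else None
```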
- Yet another embodiment of the present invention enables a user to improve their search results by providing feedback to the document discovery system.
- An embodiment of the present invention enables the user to review the suggested results from the search engine and inform the engine of the relevance of one or more of the retrieved results, e.g. "close, but not exactly what I wanted," "definitely not," etc. As the user provides feedback to the engine, better results are prioritized for the user to review.
- a method for document searching is presented in Fig. 25.
- a search query is received.
- the search query may be any type of query, including case- sensitive queries, Boolean queries, approximate match queries, structured queries, etc.
- documents based on the search query are retrieved.
- the documents are output, and in step 2506 user-entered labels for at least some of the documents are received, the labels being indicative of a relevance of the document to the search query. For example, the user may indicate whether a particular result returned from the query is relevant or not.
- a classifier is trained based on the search query and the user-entered labels
- a document classification technique is performed on the documents using the classifier for reclassifying the documents.
- identifiers of at least some of the documents are output based on the classification thereof.
- the identifiers may be electronic copies of the documents themselves, portions thereof, titles thereof, names thereof, file names thereof, pointers to the documents, etc.
- the reclassified documents may also be output, with those documents having a highest confidence being output first.
- the document classification technique may include any type of process, e.g. a transductive process, a support vector machine process, a maximum entropy discrimination process, etc. Any inductive or transductive technique described above may be used.
- the classifier may be a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, the search query, and the documents, wherein for each iteration of the calculations the cost factor may be adjusted as a function of an expected label value, and the trained classifier may be used to classify the documents.
- a data point label prior probability for the search query and documents may be received, wherein for each iteration of the calculations the data point label prior probability may be adjusted according to an estimate of a data point class membership probability.
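- The feedback loop of Fig. 25 can be illustrated with a small scikit-learn sketch that is only a stand-in for the classifiers described above: the query together with the user-labeled results retrains a linear model, and every retrieved document is re-ranked by its decision score. It assumes the feedback contains at least one "not relevant" label so that two classes are present; all names are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC


def rerank_with_feedback(query, documents, user_labels):
    """documents: list of texts; user_labels: {document index: +1 relevant / -1 not relevant}."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform([query] + documents)   # row 0 is the query itself
    train_rows = [0] + [i + 1 for i in user_labels]     # the query acts as a positive seed
    y = [1] + [user_labels[i] for i in user_labels]
    clf = LinearSVC().fit(X[train_rows], y)
    scores = clf.decision_function(X[1:])                # score every retrieved document
    return sorted(range(len(documents)), key=lambda i: -scores[i])
```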
- a further embodiment of the present invention may be used for improving ICR/OCR and speech recognition.
- speech recognition programs and systems require the operator to repeat a number of words to train the system.
- the present invention can initially monitor the voice of a user for a preset period of time to gather "unclassified" content, e.g., by listening in on phone conversations. As a result, when the user starts training the recognition system, the system uses transductive learning on the monitored speech to assist in building a memory model.
- a method for verifying an association of an invoice with an entity is presented in Fig. 26.
- a classifier is trained based on an invoice format associated with a first entity.
- the invoice format may refer to either or both of the physical layout of markings on the invoice and characteristics such as keywords, invoice number, client name, etc. on the invoice.
- a plurality of invoices labeled as being associated with at least one of the first entity and other entities are accessed, and in step 2604 a document classification technique is performed on the invoices using the classifier.
- any inductive or transductive technique described above may be used as a document classification technique.
- the document classification technique may include a transductive process, support vector machine process, a maximum entropy discrimination process, etc.
- an identifier of at least one of the invoices having a high probability of not being associated with the first entity is output.
- the classifier may be any type of classifier, for example, a transductive classifier, and the transductive classifier may be trained through iterative calculation using at least one predetermined cost factor, at least one seed document, and the invoices, wherein for each iteration of the calculations the cost factor is adjusted as a function of an expected label value, and using the trained classifier to classify the invoices.
- a data point label prior probability for the seed document and invoices may be received, wherein for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
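- As a minimal illustration of the output step of this method, the flagging of suspect invoices can be reduced to a threshold on the classifier's per-invoice probability of belonging to the first entity; the probability source, names, and threshold below are assumptions.

```python
def flag_suspect_invoices(invoice_probabilities, min_probability=0.5):
    """invoice_probabilities: {invoice id: probability of belonging to the first entity}."""
    # invoices with a low probability of association are the ones to report
    return [inv for inv, p in invoice_probabilities.items() if p < min_probability]
```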
- a transductive classifier is trained through iterative calculation using at least one cost factor, the labeled data points, and the unlabeled data points as training examples. For each iteration of the calculations, the unlabeled data point cost factor is adjusted as a function of an expected label value. Additionally, for each iteration of the calculations the data point label prior probability is adjusted according to an estimate of a data point class membership probability.
- the workstation may have resident thereon an operating system such as the Microsoft Windows® Operating System (OS), a MAC OS, or UNIX operating system. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned.
- a preferred embodiment may be written using JAVA, XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology.
- Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.
- This embodiment uses transductive learning to overcome the problem of very sparse data sets, which plague inductive face-recognition systems.
- This aspect of transductive learning is not limited to this application and may be used to solve other machine-learning problems that arise from sparse data.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
- Sorting Of Articles (AREA)
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US83031106P | 2006-07-12 | 2006-07-12 | |
US11/752,691 US20080086432A1 (en) | 2006-07-12 | 2007-05-23 | Data classification methods using machine learning techniques |
US11/752,673 US7958067B2 (en) | 2006-07-12 | 2007-05-23 | Data classification methods using machine learning techniques |
US11/752,634 US7761391B2 (en) | 2006-07-12 | 2007-05-23 | Methods and systems for improved transductive maximum entropy discrimination classification |
US11/752,719 US7937345B2 (en) | 2006-07-12 | 2007-05-23 | Data classification methods using machine learning techniques |
PCT/US2007/013484 WO2008008142A2 (en) | 2006-07-12 | 2007-06-07 | Machine learning techniques and transductive data classification |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1924926A2 true EP1924926A2 (de) | 2008-05-28 |
EP1924926A4 EP1924926A4 (de) | 2016-08-17 |
Family
ID=38923733
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP07809394.5A Ceased EP1924926A4 (de) | 2006-07-12 | 2007-06-07 | Verfahren und systeme zur transduktiven datenklassifizierung und datenklassifizierungsverfahren unter verwendung maschineller lerntechniken |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1924926A4 (de) |
JP (1) | JP5364578B2 (de) |
WO (1) | WO2008008142A2 (de) |
Families Citing this family (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9769354B2 (en) | 2005-03-24 | 2017-09-19 | Kofax, Inc. | Systems and methods of processing scanned data |
US9137417B2 (en) | 2005-03-24 | 2015-09-15 | Kofax, Inc. | Systems and methods for processing video data |
US7937345B2 (en) | 2006-07-12 | 2011-05-03 | Kofax, Inc. | Data classification methods using machine learning techniques |
US7958067B2 (en) | 2006-07-12 | 2011-06-07 | Kofax, Inc. | Data classification methods using machine learning techniques |
US8190868B2 (en) | 2006-08-07 | 2012-05-29 | Webroot Inc. | Malware management through kernel detection |
CN102160066A (zh) * | 2008-06-24 | 2011-08-17 | 沙伦·贝伦宗 | Search engine and method particularly suitable for patent literature |
US9349046B2 (en) | 2009-02-10 | 2016-05-24 | Kofax, Inc. | Smart optical input/output (I/O) extension for context-dependent workflows |
US9767354B2 (en) | 2009-02-10 | 2017-09-19 | Kofax, Inc. | Global geographic information retrieval, validation, and normalization |
US9576272B2 (en) | 2009-02-10 | 2017-02-21 | Kofax, Inc. | Systems, methods and computer program products for determining document validity |
US8774516B2 (en) | 2009-02-10 | 2014-07-08 | Kofax, Inc. | Systems, methods and computer program products for determining document validity |
US8958605B2 (en) | 2009-02-10 | 2015-02-17 | Kofax, Inc. | Systems, methods and computer program products for determining document validity |
US8438386B2 (en) * | 2009-04-21 | 2013-05-07 | Webroot Inc. | System and method for developing a risk profile for an internet service |
US11489857B2 (en) | 2009-04-21 | 2022-11-01 | Webroot Inc. | System and method for developing a risk profile for an internet resource |
US10146795B2 (en) | 2012-01-12 | 2018-12-04 | Kofax, Inc. | Systems and methods for mobile image capture and processing |
US9058580B1 (en) | 2012-01-12 | 2015-06-16 | Kofax, Inc. | Systems and methods for identification document processing and business workflow integration |
US9058515B1 (en) | 2012-01-12 | 2015-06-16 | Kofax, Inc. | Systems and methods for identification document processing and business workflow integration |
US9165188B2 (en) | 2012-01-12 | 2015-10-20 | Kofax, Inc. | Systems and methods for mobile image capture and processing |
US9483794B2 (en) | 2012-01-12 | 2016-11-01 | Kofax, Inc. | Systems and methods for identification document processing and business workflow integration |
US9208536B2 (en) | 2013-09-27 | 2015-12-08 | Kofax, Inc. | Systems and methods for three dimensional geometric reconstruction of captured image data |
US9355312B2 (en) | 2013-03-13 | 2016-05-31 | Kofax, Inc. | Systems and methods for classifying objects in digital images captured using mobile devices |
EP2973226A4 (de) | 2013-03-13 | 2016-06-29 | Kofax Inc | Klassifizierung von objekten auf mit mobilvorrichtungen aufgenommenen digitalbildern |
US20140316841A1 (en) | 2013-04-23 | 2014-10-23 | Kofax, Inc. | Location-based workflows and services |
EP2992481A4 (de) | 2013-05-03 | 2017-02-22 | Kofax, Inc. | Systeme und verfahren zur detektion und klassifizierung von objekten in mithilfe von mobilen vorrichtungen aufgenommenen videos |
WO2015031449A1 (en) * | 2013-08-30 | 2015-03-05 | 3M Innovative Properties Company | Method of classifying medical documents |
US9386235B2 (en) | 2013-11-15 | 2016-07-05 | Kofax, Inc. | Systems and methods for generating composite images of long documents using mobile video data |
US9760788B2 (en) | 2014-10-30 | 2017-09-12 | Kofax, Inc. | Mobile document detection and orientation based on reference object characteristics |
KR102315574B1 (ko) * | 2014-12-03 | 2021-10-20 | 삼성전자주식회사 | Method and apparatus for data classification and method and apparatus for region-of-interest segmentation |
CN104700099B (zh) * | 2015-03-31 | 2017-08-11 | 百度在线网络技术(北京)有限公司 | Method and device for recognizing traffic signs |
US10242285B2 (en) | 2015-07-20 | 2019-03-26 | Kofax, Inc. | Iterative recognition-guided thresholding and data extraction |
US11550688B2 (en) | 2015-10-29 | 2023-01-10 | Micro Focus Llc | User interaction logic classification |
US10339193B1 (en) * | 2015-11-24 | 2019-07-02 | Google Llc | Business change detection from street level imagery |
US9779296B1 (en) | 2016-04-01 | 2017-10-03 | Kofax, Inc. | Content-based detection and three dimensional geometric reconstruction of objects in image and video data |
JP6973733B2 (ja) * | 2017-11-07 | 2021-12-01 | 株式会社アイ・アール・ディー | Patent information processing device, patent information processing method, and program |
US11062176B2 (en) | 2017-11-30 | 2021-07-13 | Kofax, Inc. | Object detection and image cropping using a multi-detector approach |
JP7024515B2 (ja) | 2018-03-09 | 2022-02-24 | 富士通株式会社 | Learning program, learning method, and learning device |
JP7079483B2 (ja) * | 2018-06-18 | 2022-06-02 | 国立研究開発法人産業技術総合研究所 | Information processing method, system, and program |
WO2020065611A1 (en) * | 2018-09-28 | 2020-04-02 | Element Ai Inc. | Recommendation method and system and method and system for improving a machine learning system |
US11880396B2 (en) | 2018-10-08 | 2024-01-23 | Arctic Alliance Europe Oy | Method and system to perform text-based search among plurality of documents |
KR102033136B1 (ko) * | 2019-04-03 | 2019-10-16 | 주식회사 루닛 | Machine learning method based on semi-supervised learning and apparatus therefor |
WO2020231188A1 (ko) * | 2019-05-13 | 2020-11-19 | 삼성전자주식회사 | Method for verifying a classification result using a verification neural network, method for training on the classification result, and computing device performing the methods |
CN113240025B (zh) * | 2021-05-19 | 2022-08-12 | 电子科技大学 | Image classification method based on Bayesian neural network weight constraints |
JP2023144562A (ja) | 2022-03-28 | 2023-10-11 | 富士通株式会社 | Machine learning program, data processing program, information processing device, machine learning method, and data processing method |
WO2024113266A1 (en) * | 2022-11-30 | 2024-06-06 | Paypal, Inc. | Use of a training framework of a multi-class model to train a multi-label model |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7376635B1 (en) * | 2000-07-21 | 2008-05-20 | Ford Global Technologies, Llc | Theme-based system and method for classifying documents |
AU2002305652A1 (en) * | 2001-05-18 | 2002-12-03 | Biowulf Technologies, Llc | Methods for feature selection in a learning machine |
US7702526B2 (en) | 2002-01-24 | 2010-04-20 | George Mason Intellectual Properties, Inc. | Assessment of episodes of illness |
US7184929B2 (en) * | 2004-01-28 | 2007-02-27 | Microsoft Corporation | Exponential priors for maximum entropy models |
US7492943B2 (en) * | 2004-10-29 | 2009-02-17 | George Mason Intellectual Properties, Inc. | Open set recognition using transduction |
- 2007
- 2007-06-07 WO PCT/US2007/013484 patent/WO2008008142A2/en active Application Filing
- 2007-06-07 JP JP2009519439A patent/JP5364578B2/ja active Active
- 2007-06-07 EP EP07809394.5A patent/EP1924926A4/de not_active Ceased
Non-Patent Citations (1)
Title |
---|
See references of WO2008008142A2 * |
Also Published As
Publication number | Publication date |
---|---|
JP5364578B2 (ja) | 2013-12-11 |
WO2008008142A2 (en) | 2008-01-17 |
JP2009543254A (ja) | 2009-12-03 |
WO2008008142A3 (en) | 2008-12-04 |
EP1924926A4 (de) | 2016-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7937345B2 (en) | Data classification methods using machine learning techniques | |
WO2008008142A2 (en) | Machine learning techniques and transductive data classification | |
US7761391B2 (en) | Methods and systems for improved transductive maximum entropy discrimination classification | |
US7958067B2 (en) | Data classification methods using machine learning techniques | |
US20080086432A1 (en) | Data classification methods using machine learning techniques | |
US11330009B2 (en) | Systems and methods for machine learning-based digital content clustering, digital content threat detection, and digital content threat remediation in machine learning task-oriented digital threat mitigation platform | |
JP4490876B2 (ja) | Content classification method, content classification device, content classification program, and recording medium on which the content classification program is recorded |
Hu et al. | Rank-based decomposable losses in machine learning: A survey | |
Hu et al. | Sum of ranked range loss for supervised learning | |
Gao et al. | A maximal figure-of-merit (MFoM)-learning approach to robust classifier design for text categorization | |
CN108304568B (zh) | Real estate public expectation big data processing method and system |
Nashaat et al. | Semi-supervised ensemble learning for dealing with inaccurate and incomplete supervision | |
WO2002048911A1 (en) | A system and method for multi-class multi-label hierachical categorization | |
Nazir | A critique of imbalanced data learning approaches for big data analytics | |
Mácha et al. | Deeptoppush: Simple and scalable method for accuracy at the top | |
Lemhadri et al. | RbX: Region-based explanations of prediction models | |
Han et al. | Customized classification learning based on query projections | |
O'Neill | An evaluation of selection strategies for active learning with regression | |
Shiplu et al. | A Robust Ensemble Machine Learning Model with Advanced Voting Techniques for Comment Classification | |
CN107180264A (zh) | Transductive classification method for documents and data |
CN101449264B (zh) | Method and system for transductive data classification, and data classification method using machine learning methods |
Allen | Constructing and classifying email networks from raw forensic images | |
An et al. | Interaction Identification and Clique Screening for Classification with Ultra-high Dimensional Discrete Features | |
Shrestha et al. | Analysis of Machine Learning Approach for Spamming Electronic Mail Detection | |
Bchir et al. | Verbal offense detection in social network comments using novel fusion approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| 17P | Request for examination filed | Effective date: 20080215 |
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR |
| AX | Request for extension of the european patent | Extension state: AL BA HR MK RS |
| RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: KOFAX, INC. |
| R17D | Deferred search report published (corrected) | Effective date: 20081204 |
| RIN1 | Information on inventor provided before grant (corrected) | Inventor name: HARRIS, CHRISTOPHER K.; Inventor name: SARAH, ANTHONY; Inventor name: SCHMIDTLER, MAURITIUS A.R.; Inventor name: CARUSO, NICOLA; Inventor name: BORREY, ROLAND |
| DAX | Request for extension of the european patent (deleted) | |
| RIC1 | Information provided on ipc code assigned before grant | Ipc: G06F 15/18 20060101AFI20160316BHEP; Ipc: G06N 99/00 20100101ALI20160316BHEP |
| A4 | Supplementary search report drawn up and despatched | Effective date: 20160718 |
| RIC1 | Information provided on ipc code assigned before grant | Ipc: G06F 15/18 20060101AFI20160712BHEP; Ipc: G06N 99/00 20100101ALI20160712BHEP; Ipc: G06K 9/62 20060101ALN20160712BHEP |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20180216 |
| RIC1 | Information provided on ipc code assigned before grant | Ipc: G06N 99/00 20100101ALI20160712BHEP; Ipc: G06F 15/18 20060101AFI20160712BHEP; Ipc: G06K 9/62 20060101ALN20160712BHEP |
| RIC1 | Information provided on ipc code assigned before grant | Ipc: G06N 99/00 20190101ALI20160712BHEP; Ipc: G06K 9/62 20060101ALN20160712BHEP; Ipc: G06F 15/18 20060101AFI20160712BHEP |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| REG | Reference to a national code | Ref country code: DE; Ref legal event code: R003 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
| 18R | Application refused | Effective date: 20240213 |