WO1998055958A1 - Reducing handwriting recognizer errors using decision trees - Google Patents

Reducing handwriting recognizer errors using decision trees

Info

Publication number
WO1998055958A1
Authority
WO
WIPO (PCT)
Prior art keywords
recognizer
code point
chirograph
code
chirographs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US1998/011642
Other languages
English (en)
French (fr)
Inventor
Gregory N. Hullender
John R. Bennett
Patrick M. Haluptzok
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Corp
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to JP50292899A priority Critical patent/JP4233612B2/ja
Priority to AU78194/98A priority patent/AU7819498A/en
Publication of WO1998055958A1 publication Critical patent/WO1998055958A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/32 Digital ink
    • G06V30/333 Preprocessing; Feature extraction
    • G06V30/347 Sampling; Contour coding; Stroke extraction
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/32 Digital ink
    • G06V30/36 Matching; Classification
    • G06V30/373 Matching; Classification using a special pattern or subpattern alphabet

Definitions

  • the invention relates generally to the input of user information into computer systems, and more particularly to the recognition of handwritten characters input by a user.
  • One of the biggest problems in handwriting recognition technology is reducing the error rate.
  • One frequent type of error results when a user electronically enters a handwritten character, known as a chirograph, that closely matches two or more possible characters in a set to which the computer is trying to match the chirograph, i.e., a set of possible code points.
  • Characters which cause the most errors are typically those which are identical to one another except for a single difference that humans can discern, but contemporary recognizers cannot. For example, certain Japanese symbols are substantially identical to one another but for a single, subtle difference.
  • Another object is to provide a method and system of the above kind that can be automatically trained using sample data. Yet another object is to provide a method and mechanism of the above kind that is fast, reliable, cost-efficient, flexible and extensible.
  • the present invention provides a method and mechanism for recognizing chirographs input into a computer system.
  • a primary recognizer is provided for converting chirographs to code points, and secondary recognizers (e.g., CART trees) are developed and trained to differentiate chirographs which produce selected code points. Each such secondary recognizer is associated with each selected code point.
  • the chirograph is provided to the primary recognizer whereby a code point corresponding thereto is received.
  • a determination is made as to whether the code point corresponds to one of the selected code points having a secondary recognizer associated therewith. If not, the code point provided by the primary recognizer is returned. If so, the chirograph is passed to the secondary recognizer, and a code point is returned from the secondary recognizer.
  • FIGURE 1 is a block diagram representing a computer system into which the present invention may be incorporated;
  • FIG. 2 is a block diagram representing functional components for training a primary handwriting recognizer according to one aspect of the invention
  • FIG. 3 is a block diagram representing functional components for sorting chirographs as recognized by a primary recognizer into code point-based files to develop a secondary recognition system according to the present invention
  • FIG. 4 represents the contents of an exemplary file sorted by the primary recognizer in FIG. 3;
  • FIG. 5 is a flow diagram representing the general steps taken to sort the chirographs
  • FIG. 6 is a block diagram representing functional components for generating the secondary recognition system from the files of FIG. 3;
  • FIGS. 7 - 9 comprise a flow diagram representing the general steps taken to construct and train the secondary recognition system;
  • FIG. 10 is a block diagram representing functional components for optimizing the recognition mechanism of the present invention.
  • FIGS. 11 - 13 comprise a flow diagram representing the general steps taken to optimize the recognition mechanism of the present invention
  • FIG. 14 is a block diagram representing functional components for using the recognition mechanism of the present invention to recognize a chirograph
  • FIG. 15 is a flow diagram representing the general steps taken when using the recognition mechanism of the present invention to recognize a chirograph.
  • the computer system 20 includes a processor 22 operatively connected to storage 24, the storage including random access memory (RAM) 26 and non-volatile storage 28 such as a hard disk-drive, optical drive or the like.
  • the non-volatile storage can be used in conjunction with the RAM to provide a relatively large amount of virtual memory via well-known swapping techniques.
  • the processor 22 also connects through I/O circuitry 32 to one or more input devices 30, such as a keyboard and pointing device such as a mouse, and a pen-tablet, touch device or other means of getting electronic ink.
  • the system 20 also includes at least one local output device 34 connected to the I/O circuitry 32 for communicating information, such as via a graphical user interface, to the user of the system 20.
  • An operating system is loaded in the storage 24.
  • those chirographs which often confuse a recognizer are provided to a secondary recognition process.
  • a conventional (primary) recognizer outputs a code point. Instead of directly returning the code point, however, the code point is first examined to determine if it corresponds to a confusion set, i.e., one of two (or more) code points indicative of chirographs which are often confused for each other.
  • the code point originally returned by the primary recognizer is returned by the mechanism.
  • a secondary recognizer specifically developed to distinguish that particular confusion set, is given the chirograph.
  • the secondary recognizer analyzes the chirograph using more directed tests than performed by the primary recognizer, and returns one of the two (or more) code points based on the results of the tests. Note that such often-confused chirographs are not limited to sets of two; a given chirograph may be confused with two or more other chirographs.
  • the primary recognizer can be trained to recognize shape classes that represent code points (or subsets of codepoints) that look alike. When provided with a chirograph, the primary recognizer thus returns at least one shape class index. The secondary recognizer then determines from the shape class index which code point the chirograph represents.
  • a shape class index is a more general concept, i.e., a code point is a particular type of shape class index. However, for purposes of simplicity, the invention will be described with respect to a primary recognizer that returns code points, except where otherwise noted.
  • a first aspect involves the development of the improved recognition mechanism of the present invention using handwriting sample data taken from a number (preferably a large number such as thousands) of users.
  • a second aspect involves the use of a recognition mechanism, developed according to the first aspect of the invention, to convert a chirograph into a code point.
  • the first aspect, the development of the recognition mechanism, is ordinarily performed in a development environment on a relatively high-powered computer system, which may be connected via a network connection or the like to large databases of sample data.
  • the second aspect, the use of the recognition mechanism, is typically performed on a hand-held (palm-top) computing device or the like.
  • Such a device preferably runs under the Windows CE operating system loaded in the storage 24, and includes a touch-sensitive liquid crystal display screen for inputting handwritten characters (chirographs).
  • Other preferred systems include tablet-based desktop personal computers running under the Windows 95 or Windows NT operating systems.
  • a first training set 40 of sample characters is used by a construction/training process 42 to develop and train a primary recognizer 44 (FIG. 3).
  • a training set is a file including chirographs stored in conjunction with their actual, correct code points, i.e., the code points identifying the character that the user intended to write.
  • the primary recognizer 44 is preferably one which uses a K-nearest-neighbor (KNN) approach.
  • the recognizer 44 actually may return a (probability-ranked) list of alternative code points in response to a chirograph input thereto; however, for purposes of simplicity, the present invention will be described with reference to a single returned code point unless otherwise noted.
  • the primary recognizer can also be of the type that returns any type of shape index, with shape codes used to train the primary recognizer. As can be appreciated by those skilled in the art, this technique will work equally well for any recognizer or pattern matching technique that returns discrete proposals.
  • the primary recognizer 44 is used to begin constructing the secondary recognition mechanisms described above.
  • a sorting process 47 sorts chirographs according to whatever code point (or other shape index) the primary recognizer 44 returns for that chirograph, whereby the way in which the chirographs are sorted with respect to their actual code points ultimately reveals the chirographs that the primary recognizer 44 tends to confuse.
  • a second training set 48 containing sample chirographs, each stored along with its actual code point, is provided to the primary recognizer 44.
  • Using its trained recognizer data 46, the primary recognizer 44 returns a code point to the sorting process 47, which sorts the chirographs and actual code points into various files 50₁ - 50ₙ. Note that if other types of shape indexes are being used, the chirographs are similarly sorted into files for each shape index based on the shape index returned by the primary recognizer 44.
  • the sorting process 47 first creates a separate file for each code point that is to be supported by the recognition mechanism.
  • the first chirograph in the second training set 48 is selected at step 502, and sent to the primary recognizer 44 at step 504.
  • a code point (which may, in fact, be incorrect) is returned by the primary recognizer 44 to the sorting process 47, and at step 508 written to the file that is associated with the returned code point, along with the actual code point known from the training set 48.
  • steps 510 - 512 repeat the sorting process 47 with a subsequent chirograph until all chirographs in the training set 48 have been sorted in this manner. Note that the process is the same for shape indexes other than code points.
  • FIG. 4 shows the contents of one such file 50₁, with two confused chirographs therein having different actual code points Nx and Ny.
  • the sorting process 47 wrote each chirograph and its actual code point into the file 50₁. Note that if the primary recognizer 44 made no mistakes, all of the files would contain only (chirograph, actual code point) pairs which matched the code point identifying the file. However, no primary recognizer has ever been found to have such accuracy when provided with suitably large training sets.
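  • As an illustration only, the following is a minimal Python sketch of this sorting process (FIG. 5); the names, including primary_recognizer.recognize, are hypothetical stand-ins for whatever interface the trained primary recognizer 44 exposes.

```python
from collections import defaultdict

def sort_training_set(training_set, primary_recognizer):
    """Sort (chirograph, actual_code_point) pairs into per-code-point files,
    keyed by the code point the primary recognizer *returns* (which may be
    wrong), mirroring files 50-1 through 50-n of FIG. 3."""
    files = defaultdict(list)
    for chirograph, actual_code_point in training_set:
        returned = primary_recognizer.recognize(chirograph)      # steps 504 - 506
        files[returned].append((chirograph, actual_code_point))  # step 508
    return files
```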
  • CART trees are binary decision trees described in the text entitled Classification and Regression Trees, Breiman, Friedman, Olshen and Stone, Chapman and Hall (1984), herein incorporated by reference in its entirety.
  • one CART tree in a set 54₁ - 54ₙ will be developed and trained for each code point (or shape index) supported.
  • FIGS. 7 - 9 generally describe how each CART tree is developed.
  • a list of questions which are believed to be relevant in distinguishing confusion pairs is assembled. Such questions are frequently based on handwriting strokes, such as, "how many total strokes in the chirograph?", "what is the length of the first stroke?" and/or "what is the angle of the third stroke with respect to the first stroke?".
  • the questions may be tailored to the stroke count in the chirograph which is known to the system. As will become apparent, the order of the questions is not important.
  • the primary recognizer may have provided some featurization information which the construction process can leverage in addition to its own featurization of the ink.
  • the CART-building process 52 applies all of the questions to all of the samples (in each of the files 50₁ - 50ₙ) in order to determine and rank which questions best resolve the primary recognizer's confusion for a given file.
  • a preliminary test is performed by scanning the sample data at step 700 to determine if all of the actual code points in the given file are the same (and match the file). If so, the data in the sample is pure, whereby secondary recognition will not improve the overall recognition. Accordingly, the CART-building process 52 terminates for such a sample file.
  • otherwise, at least some of the chirographs will have actual code points that do not directly match the code point (and thus the corresponding file) determined by the primary recognizer.
  • the first question in the list is obtained and the first sample chirograph is selected, and the question is applied to the sample, producing a result.
  • the question may inquire as to the horizontal length of the first stroke, and result in a value of nine (highest x-coordinate minus lowest x-coordinate equals 9) for the first sample.
  • the resulting value is saved in conjunction with the actual code point for that sample, e.g., (value, actual code point), at step 706, and at steps 708 - 710, the process is repeated on the next sample in the selected file.
  • for each subsequent sample, step 706 again saves whatever value results, along with the actual code point for that sample.
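  • A hypothetical sketch of steps 702 - 710 follows, assuming each question is represented as a function that maps a chirograph to a numeric value; the function and variable names are illustrative, not the patent's.

```python
def apply_questions(samples, questions):
    """samples: list of (chirograph, actual_code_point) from one sorted file.
    questions: list of callables, e.g. 'horizontal length of the first stroke'.
    Returns {question_index: [(value, actual_code_point), ...]}."""
    results = {}
    for q_index, question in enumerate(questions):
        pairs = []
        for chirograph, actual_code_point in samples:
            value = question(chirograph)              # e.g. 9 for a nine-unit-wide stroke
            pairs.append((value, actual_code_point))  # saved as in step 706
        results[q_index] = pairs
    return results
```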
  • the steps of FIG. 8 are executed, in general to find out which of the values divides (splits) the chirographs in the file along the lines of their associated actual code points. It should be noted that it is possible, although generally impractical, to test every conceivable value with each question in a brute-force approach to determine the best split. For example, every length from 1 to 1000 may be tested for the length question, and so on with other wide ranges of values for the other questions. Instead, however, only the actual results obtained by the steps of FIG. 7 are used for this purpose, substantially speeding up the split-testing process of FIG. 8. Moreover, while each unique result can be applied as a binary question against each of the samples to determine the split, a better way is to use the already existing result data to determine the best split.
  • step 720 sorts the results obtained for the given question (in FIG. 7) into an ordered range of (value, actual code point) pairs.
  • the shortest length may have been forty and the longest one hundred.
  • each of the code points having a value equal to forty is moved into one (e.g., left) subset, and all code points having other values are placed into the other (e.g., right) subset.
  • the quality of the split is evaluated according to some split criterion.
  • a preferred way to determine the quality of the split is to test for homogeneity of the sets using the Gini diversity index.
  • the Gini diversity index uses a sum of the squares method for the homogeneity (h) using the quantities of the code points in each of the left and right sets, i.e.,
  • H(Q1, V1) = (h_Left)(cp1_Left + cp2_Left) + (h_Right)(cp1_Right + cp2_Right)
  • Step 726 tests the quality of the split against any previous results, if any, and if better, step 727 saves the homogeneity result H(Q1, V1) as the best quality split. Note that step 727 saves the best split over all the questions so far, including possibly the present question, whereby step 726 compares each subsequent split against the result from the best (question, value) previously determined.
  • Steps 728 - 730 cause the split for the next value to be tested and compared again, this time using the next value in the range, e.g., forty-one (41).
  • the sample is now effectively split with code points in the left subset being those having values less than or equal to forty-one. Note that the code points associated with forty previously moved to the left subset remain there, since these are also less than forty-one.
  • at step 724, the next homogeneity H(Q1, V2) is computed, compared at step 726 (against the value for forty, which was the best so far), and, if an improvement, saved as the best value, along with the identity of its corresponding question, at step 727.
  • once the best value (i.e., the value providing the most homogeneous split) for this question has been determined, the next question is selected (steps 732 - 734), and the process is repeated on the samples in the file using this next question.
  • the best (question, value) pair will continue to be saved for comparison against splits of other questions and values, and so on, until the overall best single (question, value) pair is known.
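  • The split search of FIG. 8 might look as follows under one common reading of the homogeneity measure above, with h taken as the sum of squared code-point proportions and each side weighted by its sample count; this is an illustrative sketch, not the patented implementation, and all names are assumptions.

```python
from collections import Counter

def homogeneity(code_points):
    """Gini-style homogeneity: sum of squared code-point proportions."""
    n = len(code_points)
    if n == 0:
        return 0.0
    return sum((count / n) ** 2 for count in Counter(code_points).values())

def best_split_for_question(pairs):
    """pairs: [(value, actual_code_point), ...] produced for one question.
    Returns (best_value, best_score); the split is 'value <= best_value'."""
    pairs = sorted(pairs)                         # step 720: order by value
    best_value, best_score = None, float("-inf")
    for threshold, _ in pairs:                    # steps 722, 728 - 730
        left = [cp for v, cp in pairs if v <= threshold]
        right = [cp for v, cp in pairs if v > threshold]
        score = homogeneity(left) * len(left) + homogeneity(right) * len(right)
        if score > best_score:                    # steps 726 - 727
            best_value, best_score = threshold, score
    return best_value, best_score
```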
  • the sample set (file 50₁) is then split at step 742 into two subsets using this best question/value pair. Then, as represented by step 744, the process is iteratively repeated on each of these two subsets to find the next best question and value pair for most homogeneously splitting each of the subsets. The process is repeated recursively (i.e., the process returns to step 700 of FIG. 7 to optimally split each of the two subsets), branching into more and more homogeneous subsets until a point is reached at which the homogeneity is no longer improved.
  • the recursive operation at lower and lower levels establishes the best question/value pairs at each branch and level to further refine the distinction between the confusion pairs.
  • a CART tree is built from these question and value pairs at the various levels.
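  • Reusing the homogeneity and best_split_for_question helpers from the previous sketch, a recursive build along the lines of steps 700 - 744 could be written as below; the dictionary-based tree layout and the min_gain parameter are illustrative choices, not the patent's.

```python
from collections import Counter

def build_cart(samples, questions, min_gain=0.0):
    """samples: [(chirograph, actual_code_point), ...]; questions: callables."""
    code_points = [cp for _, cp in samples]
    if len(set(code_points)) <= 1:                   # pure node (step 700)
        return {"leaf": code_points[0]}
    parent_score = homogeneity(code_points) * len(code_points)
    best = None                                      # (score, question index, value)
    for q_idx, question in enumerate(questions):
        pairs = [(question(ch), cp) for ch, cp in samples]
        value, score = best_split_for_question(pairs)
        if best is None or score > best[0]:
            best = (score, q_idx, value)
    score, q_idx, value = best
    if score - parent_score <= min_gain:             # homogeneity no longer improves
        return {"leaf": Counter(code_points).most_common(1)[0][0]}
    q = questions[q_idx]
    left = [(ch, cp) for ch, cp in samples if q(ch) <= value]
    right = [(ch, cp) for ch, cp in samples if q(ch) > value]
    return {"question": q_idx, "value": value,       # best (question, value) pair
            "left": build_cart(left, questions, min_gain),
            "right": build_cart(right, questions, min_gain)}
```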
  • the CART trees tend to be imperfect, especially at the lower levels. Moreover, the CART trees may be large, requiring a lot of storage that is not generally available in hand-held computing devices. Accordingly, at step 748 a new set of samples is applied to the CART to test which of its embedded questions are making the correct decisions. Those questions which are determined to be ineffective at resolving the confusion pairs are removed (pruned) from the tree at step 750. This leaves a more manageable CART in terms of size while not adversely affecting the recognition accuracy.
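  • One possible, purely hypothetical realization of the pruning at steps 748 - 750 is reduced-error pruning over the dictionary trees built above: run the new sample set through the tree and collapse any subtree whose embedded questions do not answer those samples better than the subtree's own majority code point would.

```python
from collections import Counter

def classify(tree, chirograph, questions):
    """Walk the dictionary tree produced by build_cart down to a leaf code point."""
    while "leaf" not in tree:
        q = questions[tree["question"]]
        tree = tree["left"] if q(chirograph) <= tree["value"] else tree["right"]
    return tree["leaf"]

def prune(tree, samples, questions):
    """Collapse subtrees that are ineffective on the fresh pruning samples."""
    if "leaf" in tree or not samples:
        return tree
    q = questions[tree["question"]]
    left_samples = [(ch, cp) for ch, cp in samples if q(ch) <= tree["value"]]
    right_samples = [(ch, cp) for ch, cp in samples if q(ch) > tree["value"]]
    tree["left"] = prune(tree["left"], left_samples, questions)
    tree["right"] = prune(tree["right"], right_samples, questions)
    subtree_hits = sum(classify(tree, ch, questions) == cp for ch, cp in samples)
    majority, majority_hits = Counter(cp for _, cp in samples).most_common(1)[0]
    if subtree_hits <= majority_hits:                # questions here are ineffective
        return {"leaf": majority}
    return tree
```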
  • if a CART tree does not improve the recognition by some threshold amount (which may be even a very slight improvement), there is no reason to keep it, since a CART tree costs storage space. Similarly, even though CART trees are extremely fast, secondary recognition using a CART tree adds to the total recognition time, again adding cost.
  • FIGS. 10 and 11 - 13 represent one process for optimizing the recognition mechanism by discarding unneeded CART trees.
  • a first chirograph from a third training set 56 (FIG. 10) is selected.
  • the chirograph is sent to the primary recognizer 44.
  • if the code point returned by the primary recognizer 44 matches the actual, correct code point, a primary recognizer match count 62 for the CART tree (i.e., this file) is incremented at step 906.
  • the appropriate CART tree, corresponding to the code point returned from the primary recognizer 44, is selected.
  • at step 920, the same chirograph is now provided to the CART tree, whereby a decision is made by the CART tree and a code point returned therefor.
  • at step 922, if the code point returned by the CART tree is the same as the actual, correct code point, a CART match count 66 for this CART tree is incremented at step 924. Steps 926 - 928 repeat the process until all chirographs in the third training set 56 are tested.
  • FIG. 13 compares the primary and CART match counts for each CART tree to determine if the CART tree improved the recognition. More particularly, the first supported code point (there is one file for each) is chosen at step 940, and the CART match count 66 for this CART tree is compared against the primary recognizer match count 62. If the CART match count is less than or equal to the primary match count, the CART tree is discarded at step 944 since it did not improve the recognition mechanism. Otherwise the CART tree for this code point is kept. Steps 946 - 948 repeat the comparison until all supported code points have been tested. Note that if desired, step 942 can be a more complex test so as to discard any CART tree that does not improve the recognition process by some threshold amount.
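  • A rough sketch of the bookkeeping of FIGS. 11 - 13, reusing the classify helper from the pruning sketch; the recognize method, the cart_trees dictionary and the questions list are assumptions for illustration only.

```python
from collections import Counter

def prune_unhelpful_trees(third_training_set, primary_recognizer, cart_trees, questions):
    """Keep a CART tree only if it beats the primary recognizer on the third set."""
    primary_matches = Counter()   # per-code-point count 62
    cart_matches = Counter()      # per-code-point count 66
    for chirograph, actual_code_point in third_training_set:
        returned = primary_recognizer.recognize(chirograph)           # step 902
        if returned not in cart_trees:
            continue
        if returned == actual_code_point:
            primary_matches[returned] += 1                            # step 906
        if classify(cart_trees[returned], chirograph, questions) == actual_code_point:
            cart_matches[returned] += 1                               # steps 920 - 924
    return {cp: tree for cp, tree in cart_trees.items()
            if cart_matches[cp] > primary_matches[cp]}                # steps 942 - 944
```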
  • the CART trees (which may number several hundred) only add about 18 kilobytes to a one megabyte primary recognizer, so any memory savings resulting from discarding a CART tree that only rarely improves recognition is probably not worth a reduction in recognition accuracy.
  • the combined primary and secondary recognition mechanism of the present invention has been thoroughly tested, and for certain confusion pairs has a 99.7 percent accuracy rate. The 0.3 percent error rate is believed to result from characters too poorly written even for humans to discern, and in fact is comparable to the recognition error rate of humans. Note that the present invention is highly flexible and extensible.
  • the recognition mechanism may be used in a relatively low powered system, e.g., a hand-held personal computing device.
  • FIGS. 14 - 15 show how the system is used to recognize a character.
  • the system receives a chirograph 80 (FIG. 14) from a user at step 1100 in a known manner, such as via pen input on a touch-sensitive screen.
  • the recognition mechanism of the present invention submits the chirograph to the primary recognizer 44 and receives a code point (or shape index) 82 therefrom (step 1102).
  • the code point 82 is used (by a lookup process 84 or the like) to determine if the code point has a CART tree associated therewith. If not, the primary recognizer's returned code point 82 is returned by the recognition mechanism at step 1108 as the returned code point 88.
  • if a CART tree is associated with the code point 82, the appropriate CART tree in the set of available CART trees 72 is selected and the chirograph 80 is submitted thereto at step 1106.
  • a shape index code that is not by itself a code point has a secondary recognizer (CART tree) associated therewith, even if only a minimal one that converts the shape index to a code point.
  • the code point returned by the selected CART tree is returned at step 1108 as the returned code point 88.
  • the recognition mechanism repeats until the user is done writing, as detected by step 1110.
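  • In code form, the runtime path of FIG. 15 reduces to a few lines; this is again only a sketch, with recognize and classify standing in for whatever interfaces the primary recognizer and the CART trees actually expose.

```python
def recognize_chirograph(chirograph, primary_recognizer, cart_trees, questions):
    code_point = primary_recognizer.recognize(chirograph)   # step 1102
    cart_tree = cart_trees.get(code_point)                   # lookup 84 / step 1104
    if cart_tree is None:
        return code_point                                    # step 1108 (no CART tree)
    return classify(cart_tree, chirograph, questions)        # steps 1106 - 1108
```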
  • when the primary recognizer returns a (probability-ranked) list of alternative code points, the list can be scanned for code points having associated CART trees, and the secondary recognizer operated for one or more of the code points in the list.
  • the secondary process reorders the list with the result from the CART tree placed on top, i.e., with the highest probability.
  • CART trees can provide alternatives ranked by probabilities, all of which can be weaved into a composite, probability-ranked list.
  • a plurality of CART trees can be associated with a single character.
  • a first CART tree can be provided as a secondary process for differentiating two-stroke "A"-shaped characters, and a second, distinct CART tree for differentiating three-or-more-stroke "A"-shaped characters.
  • the primary recognizer can be arranged to split strokes, e.g., a one-stroke "A"-shaped character can first be split into two strokes by the primary recognizer prior to its analysis thereof.
  • stroke count may similarly be used by the primary and/or secondary recognizers.
  • with on-line chirographs, the points in the character are received as coordinates of the form (x, y, time), i.e., the points in sequence along with pen-up and pen-down positions are known.
  • Off-line chirographs are only x-y points in no particular order.
  • the invention is valuable in either type of recognition, although the primary and secondary recognizer (e.g., questions therefor) will be rather different.
  • the method and mechanism differentiate ordinarily-confused characters with a high rate of success, and can be automatically trained using sample data.
  • the method and mechanism are also fast, reliable, cost-efficient, flexible and extensible.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Character Discrimination (AREA)
PCT/US1998/011642 1997-06-06 1998-06-04 Reducing handwriting recognizer errors using decision trees Ceased WO1998055958A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP50292899A JP4233612B2 (ja) 1997-06-06 1998-06-04 判断ツリーを使用する手書き認識器のエラーの低減
AU78194/98A AU7819498A (en) 1997-06-06 1998-06-04 Reducing handwriting recognizer errors using decision trees

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/870,559 US6061472A (en) 1997-06-06 1997-06-06 Method and mechanism to reduce handwriting recognizer errors using multiple decision trees
US08/870,559 1997-06-06

Publications (1)

Publication Number Publication Date
WO1998055958A1 true WO1998055958A1 (en) 1998-12-10

Family

ID=25355649

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/011642 Ceased WO1998055958A1 (en) 1997-06-06 1998-06-04 Reducing handwriting recognizer errors using decision trees

Country Status (5)

Country Link
US (3) US6061472A
JP (1) JP4233612B2
CN (1) CN1163840C
AU (1) AU7819498A
WO (1) WO1998055958A1

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6111985A (en) * 1997-06-06 2000-08-29 Microsoft Corporation Method and mechanism for providing partial results in full context handwriting recognition
US6061472A (en) * 1997-06-06 2000-05-09 Microsoft Corporation Method and mechanism to reduce handwriting recognizer errors using multiple decision trees
US7343041B2 (en) * 2001-02-22 2008-03-11 International Business Machines Corporation Handwritten word recognition using nearest neighbor techniques that allow adaptive learning
US7016884B2 (en) * 2002-06-27 2006-03-21 Microsoft Corporation Probability estimate for K-nearest neighbor
US7259752B1 (en) 2002-06-28 2007-08-21 Microsoft Corporation Method and system for editing electronic ink
US6988107B2 (en) * 2002-06-28 2006-01-17 Microsoft Corporation Reducing and controlling sizes of model-based recognizers
US7079713B2 (en) * 2002-06-28 2006-07-18 Microsoft Corporation Method and system for displaying and linking ink objects with recognized text and objects
US6970877B2 (en) * 2002-06-28 2005-11-29 Microsoft Corporation Reducing and controlling sizes of prototype-based recognizers
US7174042B1 (en) 2002-06-28 2007-02-06 Microsoft Corporation System and method for automatically recognizing electronic handwriting in an electronic document and converting to text
US7185278B1 (en) 2002-06-28 2007-02-27 Microsoft Corporation Separating and moving document objects using the movement of a wiper bar
US7188309B2 (en) 2002-06-28 2007-03-06 Microsoft Corporation Resolving document object collisions
US7751623B1 (en) 2002-06-28 2010-07-06 Microsoft Corporation Writing guide for a free-form document editor
US7721226B2 (en) 2004-02-18 2010-05-18 Microsoft Corporation Glom widget
US7358965B2 (en) * 2004-02-18 2008-04-15 Microsoft Corporation Tapping to create writing
US7659890B2 (en) * 2004-03-19 2010-02-09 Microsoft Corporation Automatic height adjustment for electronic highlighter pens and mousing devices
US7593908B2 (en) * 2005-06-27 2009-09-22 Microsoft Corporation Training with heterogeneous data
US7526737B2 (en) * 2005-11-14 2009-04-28 Microsoft Corporation Free form wiper
US7742642B2 (en) * 2006-05-30 2010-06-22 Expedata, Llc System and method for automated reading of handwriting
AU2012200812B2 (en) * 2011-05-04 2016-12-08 National Ict Australia Limited Measuring cognitive load
US20130011066A1 (en) * 2011-07-07 2013-01-10 Edward Balassanian System, Method, and Product for Handwriting Capture and Storage
CN105095826B (zh) * 2014-04-17 2019-10-01 阿里巴巴集团控股有限公司 一种文字识别方法及装置
CN105868590B (zh) 2015-01-19 2019-09-10 阿里巴巴集团控股有限公司 一种笔迹数据处理方法和装置
CN107704084A (zh) * 2017-10-17 2018-02-16 郭明昭 手写输入识别方法和用户设备
CA3021197A1 (en) * 2017-10-17 2019-04-17 Royal Bank Of Canada Auto-teleinterview solution
US11270104B2 (en) * 2020-01-13 2022-03-08 Apple Inc. Spatial and temporal sequence-to-sequence modeling for handwriting recognition
US11410064B2 (en) * 2020-01-14 2022-08-09 International Business Machines Corporation Automated determination of explanatory variables

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4975975A (en) * 1988-05-26 1990-12-04 Gtx Corporation Hierarchical parametric apparatus and method for recognizing drawn characters

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4531231A (en) * 1983-01-19 1985-07-23 Communication Intelligence Corporation Method for distinguishing between complex character sets
US4718102A (en) * 1983-01-19 1988-01-05 Communication Intelligence Corporation Process and apparatus involving pattern recognition
US4561105A (en) * 1983-01-19 1985-12-24 Communication Intelligence Corporation Complex pattern recognition method and system
US4589142A (en) * 1983-12-28 1986-05-13 International Business Machines Corp. (Ibm) Method and apparatus for character recognition based upon the frequency of occurrence of said characters
US5067165A (en) * 1989-04-19 1991-11-19 Ricoh Company, Ltd. Character recognition method
US5077805A (en) * 1990-05-07 1991-12-31 Eastman Kodak Company Hybrid feature-based and template matching optical character recognition system
US5313527A (en) * 1991-06-07 1994-05-17 Paragraph International Method and apparatus for recognizing cursive writing from sequential input information
US5325445A (en) * 1992-05-29 1994-06-28 Eastman Kodak Company Feature classification using supervised statistical pattern recognition
DE69333664T2 (de) * 1992-06-19 2005-11-17 United Parcel Service Of America, Inc. Verfahren und Gerät zur Einstellung eines Neurons
US5742702A (en) * 1992-10-01 1998-04-21 Sony Corporation Neural network for character recognition and verification
US5710916A (en) * 1994-05-24 1998-01-20 Panasonic Technologies, Inc. Method and apparatus for similarity matching of handwritten data objects
JP3630734B2 (ja) * 1994-10-28 2005-03-23 キヤノン株式会社 情報処理方法
JPH09223195A (ja) * 1996-02-06 1997-08-26 Hewlett Packard Co <Hp> 文字認識方法
US5926566A (en) * 1996-11-15 1999-07-20 Synaptics, Inc. Incremental ideographic character input method
US5881172A (en) * 1996-12-09 1999-03-09 Mitek Systems, Inc. Hierarchical character recognition system
US5966460A (en) * 1997-03-03 1999-10-12 Xerox Corporation On-line learning for neural net-based character recognition systems
US6061472A (en) * 1997-06-06 2000-05-09 Microsoft Corporation Method and mechanism to reduce handwriting recognizer errors using multiple decision trees

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4975975A (en) * 1988-05-26 1990-12-04 Gtx Corporation Hierarchical parametric apparatus and method for recognizing drawn characters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BREIMAN, L.: "Introduction to Tree Classification", Classification and Regression Trees, 1 January 1984 (1984-01-01), pages 20 - 37 and 66, XP002914519 *

Also Published As

Publication number Publication date
CN1236458A (zh) 1999-11-24
US6973215B1 (en) 2005-12-06
AU7819498A (en) 1998-12-21
US20060072825A1 (en) 2006-04-06
JP4233612B2 (ja) 2009-03-04
US6061472A (en) 2000-05-09
CN1163840C (zh) 2004-08-25
US7379597B2 (en) 2008-05-27
JP2000516376A (ja) 2000-12-05

Similar Documents

Publication Publication Date Title
US6061472A (en) Method and mechanism to reduce handwriting recognizer errors using multiple decision trees
US6539113B1 (en) Radical definition and dictionary creation for a handwriting recognition system
KR100297482B1 (ko) 수기입력의문자인식방법및장치
US7756335B2 (en) Handwriting recognition using a graph of segmentation candidates and dictionary search
US5313527A (en) Method and apparatus for recognizing cursive writing from sequential input information
US5267332A (en) Image recognition system
EP0632403B1 (en) Handwritten symbol recognizer and method for recognising handwritten symbols
US5315667A (en) On-line handwriting recognition using a prototype confusability dialog
US7596272B2 (en) Handling of diacritic points
US7460712B2 (en) Systems and methods for adaptive handwriting recognition
EP1630723A2 (en) Spatial recognition and grouping of text and graphics
WO1998055957A1 (en) Partial results in full context handwriting recognition
WO2002067189A2 (en) Holistic-analytical recognition of handwritten text
EP0689153B1 (en) Character recognition
Tanaka et al. Hybrid pen-input character recognition system based on integration of online-offline recognition
Al-Ma'adeed et al. Writer identification using edge-based directional probability distribution features for arabic words
KR100205726B1 (ko) 수기 문자의 인식 장치
EP0519737A2 (en) Image recognition system
Oulhadj et al. A prediction-verification strategy for automatic recognition of cursive handwriting
CA2497586C (en) Method and apparatus for recognizing cursive writing from sequential input information
KR940001048B1 (ko) 온라인 필기체문자인식방법
EP0564826A2 (en) Resolution of case confusions by majority voting rule in on-line handwriting recognition
Flann Integrating segmentation and recognition in on-line cursive handwriting using error-correcting grammars
KR19990010218A (ko) 군집화된 알파벳 추출에 의한 온라인 영문 단어 인식 장치 및 방법
JPH08129610A (ja) 文字認識装置

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 98801107.7

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase