JPH0421080A - Character recognition device - Google Patents

Character recognition device

Info

Publication number
JPH0421080A
Authority
JP
Japan
Prior art keywords
character
characters
position information
stored
similar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2124891A
Other languages
Japanese (ja)
Inventor
Hiroaki Ikeda
裕章 池田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to JP2124891A priority Critical patent/JPH0421080A/en
Priority to EP91304283A priority patent/EP0457534B1/en
Priority to DE69132789T priority patent/DE69132789T2/en
Publication of JPH0421080A publication Critical patent/JPH0421080A/en
Priority to US08/348,833 priority patent/US5729630A/en
Pending legal-status Critical Current


Abstract

PURPOSE: To reduce erroneous recognition by deciding one character from among the similar characters stored in a second storage means as the recognition result, in accordance with position information stored in a first storage means. CONSTITUTION: An image is input from an image scanner 108 or the like, and a CPU 100 segments characters one by one from the input image, obtains the circumscribed rectangle of each segmented character, and normalizes it by scaling it to a fixed size. The CPU 100 extracts the features of the normalized character image, an identification calculation unit 105 calculates similarities using the extracted features and an identification dictionary stored in a ROM 103, and it is determined, using a similar character table stored in the ROM 103, whether a similar character exists. Since the position information of each character obtained during character segmentation is stored in a memory 104, the CPU 100 selects one of the similar characters on the basis of this position information and outputs the recognition result to a CRT 106. Erroneous recognition can thereby be reduced.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Field of Industrial Application] The present invention relates to a character recognition device capable of distinguishing similar characters included among the characters to be recognized.

[Prior Art] Conventionally, a character recognition device holds an identification dictionary covering at least the number of characters to be recognized, consisting mainly of the average features of training data. The input image is segmented character by character, the size of each character is normalized, features are extracted, similarities are calculated using the identification dictionary, and either the character with the highest similarity is output as the recognition result, or several characters are output, in descending order of similarity, as recognition candidates to a display device or a storage device.
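The conventional scheme described here amounts to comparing the feature vector of a normalized input character against one averaged feature vector per character class and ranking the classes by similarity. The following is only a minimal sketch of that idea; the function name, the dictionary layout, and the use of cosine similarity are illustrative assumptions, not the device's actual measure.

```python
import numpy as np

def recognize_conventional(features, dictionary, n_candidates=1):
    """Rank character classes by similarity to an average-feature dictionary.

    features   : 1-D feature vector of the normalized input character image
    dictionary : mapping {character: average feature vector from training data}
    Returns the n_candidates most similar characters, best first.
    """
    scores = {}
    for char, template in dictionary.items():
        # Cosine similarity stands in here for whatever similarity
        # measure the device actually computes.
        denom = float(np.linalg.norm(features) * np.linalg.norm(template))
        scores[char] = float(np.dot(features, template)) / denom if denom else 0.0
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_candidates]
```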

[Problem to Be Solved by the Invention] In the conventional example described above, however, the size of each segmented character is normalized, so similar characters such as "つ" and "っ", whose shapes are identical and which differ only in size, can no longer be distinguished. Furthermore, because the identification dictionary treats each of them as a separate category, there was the drawback that an input of "つ" could be recognized as "っ" and, conversely, an input of "っ" could be recognized as "つ".

[Means for Solving the Problems] According to the present invention, in order to overcome the above drawbacks, a character recognition device having means for inputting image information, segmentation means for segmenting character information from the input image information, and identification means for identifying candidate characters for the segmented character information is provided with: first storage means for storing position information of the segmented character information; second storage means for storing a plurality of similar characters as one category; determination means for determining whether a similar character exists for a candidate character; and decision means for deciding, in accordance with the position information stored in the first storage means, one character from among the similar characters stored in the second storage means as the recognition result.

[Embodiment 1] Fig. 1 shows the basic configuration of an embodiment of the present invention. Reference numeral 100 denotes a central processing unit (CPU) that performs the processing of the flowcharts of Figs. 8 and 9 and other operations; 101 denotes a keyboard (KB) for entering characters, symbols and the like, and for giving instructions when correcting a misrecognition; 102 denotes a pointing device (PD); 103 denotes a read-only memory (ROM) storing the dictionary and other data used for recognizing characters; 104 denotes a memory that stores the data read by a scanner 108; 105 denotes an identification calculation unit that finds candidate words and the like in the data read by the scanner 108 and calculates the degree of difference of each; 106 denotes a CRT; 107 denotes the interface (SCAN I/F) of the scanner 108; and 108 denotes the scanner, which reads image information.

Fig. 2 is a block diagram of an embodiment that best illustrates the features of the present invention. At 1, an image is input from the image scanner 108 or the like; at 2, the CPU 100 segments characters one at a time from the input image; at 3, the CPU 100 obtains the circumscribed rectangle of each segmented character, normalizes it and scales it to a fixed size; at 4, the CPU 100 extracts the features of the normalized character image; at 5, the identification calculation unit 105 calculates similarities using the extracted features and an identification dictionary 9 stored in the ROM 103; and at 6, it is determined, using a similar character table 11 stored in the ROM 103, whether a similar character exists. Because the position information of each character obtained during the character segmentation at 2 has been stored in the memory 104 at 10, the CPU 100 selects one of the similar characters at 8 on the basis of this character position information, and the recognition result is output to the CRT 106 at 7.

The flow of processing performed in the CPU 100 will now be described in detail with reference to the flowchart shown in Fig. 8.

An image is input from the scanner 108 (S1), lines are extracted from the input image information (S2), and the line height h shown in Fig. 3 is obtained (S3). Next, characters are segmented as shown in Fig. 3 (S4). Once the characters have been segmented and the number of characters in the line is known, bits representing the lower-character status, one for each input character, are reserved in the position information storage section in the memory 104 (S5).

It is then determined whether the topmost pixel of the character lies below the height h × x% (S6). If it lies below x%, the character is judged to be a lower character and its position information bit is turned on (S7); if it is not judged in S6 to lie below that height, the position information bit is turned off (S8). If there is a next character (S9), the process returns to S6; if there is no next character (S9), the process moves to the next line. If there is a next line (S10), the process returns to S3, and S3 to S10 are repeated up to the last line. If there is no next line (S10), the circumscribed rectangle of each segmented character image is normalized (S11), the character features are extracted (S12), and the identification calculation is performed in the identification calculation unit 105 (S13). The identification dictionary 9 used for this identification calculation is stored in the ROM 103 as shown in Fig. 4; a pair of characters distinguished only by size is registered as a single category of two characters, and category numbers (1 to n) are assigned. When the segmented character has been identified and its similarity calculated (S13), it is determined whether the category with the highest similarity exists in the similar character table (S14).
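In this embodiment the position bit is therefore set purely from where the character sits within the line: if its topmost pixel lies below x% of the line height h, the character is treated as a lower (small) character. A minimal sketch of steps S5 to S8 follows; the bounding-box representation, the coordinate convention (y increases downward), and the default threshold value are assumptions made for illustration.

```python
def lower_character_bits(char_boxes, line_top, line_height, x_percent=30):
    """Steps S5-S8: one position bit per character in the line.

    char_boxes  : list of (top, bottom, left, right) pixel coordinates,
                  one circumscribed rectangle per segmented character
    line_top    : y coordinate of the top of the line (y increases downward)
    line_height : line height h obtained in step S3
    x_percent   : threshold x; a character whose topmost pixel lies below
                  x% of h from the top of the line is judged a lower character
    Returns a list of booleans (True = bit on = lower character).
    """
    threshold = line_top + line_height * x_percent / 100.0
    bits = []
    for top, _bottom, _left, _right in char_boxes:   # S6: test the topmost pixel
        bits.append(top > threshold)                 # S7 (bit on) / S8 (bit off)
    return bits
```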

The similar character table 11 is stored in the ROM 103 as shown in Fig. 5. For each category number of the identification dictionary 9 for which similar characters exist, it holds the character to be used when the position information bit is on and the character to be used when it is off.
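The table of Fig. 5 can be thought of as a small lookup keyed by category number, giving the character to output when the position bit is off (the large form) and the one to output when it is on (the small form). A sketch of such a table is shown below using the つ/っ pair from the problem statement; the category numbers and the second entry (や/ゃ) are hypothetical and only illustrate the structure.

```python
# Similar character table 11: category number -> (character when the position
# bit is OFF, i.e. the large form; character when the bit is ON, i.e. the small form).
# The category numbers and the second pair are made up for illustration.
SIMILAR_CHARACTER_TABLE = {
    3: ("つ", "っ"),   # large tsu / small tsu share one dictionary category
    7: ("や", "ゃ"),   # large ya / small ya, a second hypothetical pair
}

def has_similar_characters(category):
    """Step S14: does the winning category appear in the table?"""
    return category in SIMILAR_CHARACTER_TABLE
```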

To explain using the example of Fig. 3, suppose that the identification calculation unit returns a category number for each of the segmented characters and that the position information storage section holds "off, on, off". Category number m is contained in the similar character table 11, so the similar character determination unit 6 judges that it has similar characters, and the similar character selection unit 8, finding that the position bit for category number m is off, recognizes the large form of that category. For the category of the character "ト", the similar character determination unit 6 judges that no similar character exists, so "ト" becomes the recognition result as it is. If it is determined in S14 that the category does not exist in the similar character table, the category identified in S13 as having the highest similarity is taken as the recognition result (S15). If it is determined in S14 that the category does exist in the similar character table, it is further determined in S16 whether the position information bit in the memory 104 is on.

If it is determined in S16 that the bit is on, the small character of that category in the similar character table 11 is taken as the recognition result (S17); if it is determined in S16 that the bit is not on, the large character of that category in the similar character table 11 is taken as the recognition result (S18).
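Steps S14 to S18 thus reduce to one branch on the table and one branch on the position bit. The sketch below expresses that decision under the same illustrative assumptions as the hypothetical table above; `dictionary_character` stands in for whatever character the identification dictionary itself would return in step S15.

```python
def decide_character(category, position_bit, dictionary_character, similar_table):
    """Steps S14-S18: resolve the winning category into a single character.

    category             : category number with the highest similarity (S13)
    position_bit         : True if the character was judged a lower character
    dictionary_character : character the identification dictionary assigns to
                           this category when no similar pair exists (S15)
    similar_table        : mapping {category: (large form, small form)}
    """
    if category not in similar_table:           # S14: no similar characters
        return dictionary_character             # S15
    large, small = similar_table[category]
    return small if position_bit else large     # S16 -> S17 (on) / S18 (off)

# Example in the spirit of Fig. 3: the category is in the table and its bit
# is off, so the large form is chosen.
# decide_character(3, False, "つ", SIMILAR_CHARACTER_TABLE)  ->  "つ"
```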

When a recognition result has been obtained in S15, S17 or S18, it is determined whether there is a next character (S19). If there is a next character, the process returns to S11, and S11 to S19 are repeated up to the last character. If it is determined in S19 that there is no next character, the results are displayed on the CRT 106 (S20).

[Embodiment 2] Fig. 6 illustrates another embodiment of the similar character selection method. In this embodiment, taking as an example the method of distinguishing 「'」 from 「、」 when the characters to be recognized are alphabetic characters and symbols, a method of identifying characters that have the same character features and differ only in character position will be described.

In this embodiment as well, the configuration for character recognition shown in Figs. 1 and 2 is the same as in Embodiment 1 and is therefore not described here.

The flow of processing performed in the CPU 100 will now be described in detail with reference to the flowchart shown in Fig. 9.

In the processing of Fig. 9, steps S1 to S4, S11 to S13, S19 and S20 are the same as in Embodiment 1 and are not described here.

Characters are segmented in the same way as in Embodiment 1 (S4). As shown in Fig. 6, the length u from the top of the segmentation frame to the top of the segmented character and the length v from the bottom of the character to the bottom of the segmentation frame are obtained (S31) and stored in the position information storage section 10 in the memory 104, and the magnitudes of u and v are compared (S32). If it is determined in S32 that u < v, the position information bit reserved in S5 is turned on (S33); if it is determined in S32 that u > v, the position information bit is turned off (S34). It is then determined whether there is a next character (S35); if there is, the process returns to S32, and S32 to S35 are repeated up to the last character. If it is determined in S35 that there is no next character, the process proceeds to S36, where it is determined whether there is a next line. If there is a next line, the process returns to S3, and S3 to S36 are repeated up to the last line. If it is determined in S36 that there is no next line, the process proceeds to S11.

After the character identification calculation is performed in S13, it is determined whether the category with the highest similarity is the category to which 「'」 and 「、」 belong (S37). If it is determined not to be that category, the category with the highest similarity is taken as the recognition result (S38). If it is determined in S37 that it is that category, it is determined whether the position information bit in the memory 104 is on (S39). If the bit is determined to be on, 「'」 is taken as the recognition result (S40); if it is determined in S39 that the bit is not on, 「、」 is taken as the recognition result (S41).
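With that bit in hand, steps S37 to S41 only have to check whether the winning category is the shared category of the two position-only variants and then pick one of them. A short sketch under the same assumptions as above; the category number is again purely illustrative.

```python
APOSTROPHE_COMMA_CATEGORY = 42  # hypothetical number of the shared category

def resolve_apostrophe_comma(category, position_bit, dictionary_character):
    """Steps S37-S41: pick between the two position-only variants."""
    if category != APOSTROPHE_COMMA_CATEGORY:    # S37 -> S38
        return dictionary_character
    return "'" if position_bit else "、"          # S39 -> S40 (on) / S41 (off)
```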

It goes without saying that, by obtaining u and v in the same way and providing a table such as that shown in Fig. 7, characters other than these can also be identified on the basis of the relationship between u and v, even when they have the same features and differ only in character position.

[Effects of the Invention] As described above, according to the present invention, the large or small character is selected after the pair of similar characters has first been recognized as a single character, so similar characters can be distinguished reliably, which has the effect of reducing erroneous recognition.

As also described above, according to the present invention, combining the identification dictionary entries for similar characters into one has the effect of reducing the dictionary capacity and of increasing the processing speed.

[Brief Description of the Drawings]

Fig. 1 is a basic configuration diagram of an embodiment of the present invention; Fig. 2 is a block diagram of a character recognition device embodying the present invention; Fig. 3 is an explanatory diagram of character segmentation and position information extraction from an input image; Fig. 4 is a diagram explaining the contents of the identification dictionary; Fig. 5 is a diagram explaining the similar character table; Fig. 6 is an explanatory diagram of character segmentation and position information extraction for the method of distinguishing 「'」 from 「、」; Fig. 7 is a diagram showing the conditions for similar character selection; Fig. 8 is a flowchart showing the processing of Embodiment 1; and Fig. 9 is a flowchart showing the processing of Embodiment 2.

1: image input unit; 2: character segmentation unit; 3: normalization unit; 4: feature extraction unit; 5: identification calculation unit; 6: similar character determination unit; 7: recognition result output unit; 8: similar character selection unit; 9: identification dictionary; 10: position information storage unit; 11: similar character table; 12: character segmentation frame.

Claims (1)

[Claims] A character recognition device comprising: means for inputting image information; segmentation means for segmenting character information from the input image information; and identification means for identifying candidate characters for the segmented character information, the character recognition device further comprising: first storage means for storing position information of the segmented character information; second storage means for storing a plurality of similar characters as one category; determination means for determining whether a similar character exists for the candidate character; and decision means for deciding, in accordance with the position information stored in the first storage means, one character from among the similar characters stored in the second storage means as the recognition result.
JP2124891A 1990-05-14 1990-05-14 Character recognition device Pending JPH0421080A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2124891A JPH0421080A (en) 1990-05-14 1990-05-14 Character recognition device
EP91304283A EP0457534B1 (en) 1990-05-14 1991-05-13 Image processing method and apparatus
DE69132789T DE69132789T2 (en) 1990-05-14 1991-05-13 Image processing method and apparatus
US08/348,833 US5729630A (en) 1990-05-14 1994-11-29 Image processing method and apparatus having character recognition capabilities using size or position information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2124891A JPH0421080A (en) 1990-05-14 1990-05-14 Character recognition device

Publications (1)

Publication Number Publication Date
JPH0421080A true JPH0421080A (en) 1992-01-24

Family

ID=14896664

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2124891A Pending JPH0421080A (en) 1990-05-14 1990-05-14 Character recognition device

Country Status (1)

Country Link
JP (1) JPH0421080A (en)

Similar Documents

Publication Publication Date Title
KR100249055B1 (en) Character recognition apparatus
JP4553241B2 (en) Character direction identification device, document processing device, program, and storage medium
JP5217127B2 (en) Collective place name recognition program, collective place name recognition apparatus, and collective place name recognition method
KR100412317B1 (en) Character recognizing/correcting system
US10140556B2 (en) Arabic optical character recognition method using hidden markov models and decision trees
Lehal et al. Feature extraction and classification for OCR of Gurmukhi script
US5621818A (en) Document recognition apparatus
Kavallieratou et al. Handwritten character segmentation using transformation-based learning
KR0186025B1 (en) Candidate character classification method
US11361529B2 (en) Information processing apparatus and non-transitory computer readable medium
JPH0421080A (en) Character recognition device
KR940007345B1 (en) On-line recognitin method of hand-written korean character
KR19990049667A (en) Korean Character Recognition Method
KR100332752B1 (en) Method for recognizing character
JPH07319880A (en) Keyword extraction/retrieval device
JP2788506B2 (en) Character recognition device
JP2963474B2 (en) Similar character identification method
JP2925303B2 (en) Image processing method and apparatus
JP2972443B2 (en) Character recognition device
KR910007032B1 (en) A method for truncating strings of characters and each character in korean documents recognition system
JP2637762B2 (en) Pattern detail identification method
JPH08202822A (en) Character segmenting device and method thereof
Hwang et al. Segmentation of a text printed in Korean and English using structure information and character recognizers
JPH0562021A (en) Optical type character recognition (ocr) system for recognizing standard font and user assigned custom font
JPH06162269A (en) Handwritten character recognizing device