JPH02217981A - Device for recognizing on-line hand-written character - Google Patents

Device for recognizing on-line hand-written character

Info

Publication number
JPH02217981A
JPH02217981A JP1038202A JP3820289A
Authority
JP
Japan
Prior art keywords
feature point
character
input
identification condition
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP1038202A
Other languages
Japanese (ja)
Other versions
JP3066530B2 (en)
Inventor
Hiroshi Kamata
洋 鎌田
Naohisa Kawaguchi
川口 尚久
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to JP1038202A priority Critical patent/JP3066530B2/en
Publication of JPH02217981A publication Critical patent/JPH02217981A/en
Application granted granted Critical
Publication of JP3066530B2 publication Critical patent/JP3066530B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Landscapes

  • Character Discrimination (AREA)

Abstract

PURPOSE: To obtain a high recognition rate by extracting feature points from the strokes in the order they are written and deciding the character with identification conditions on those feature points. CONSTITUTION: When input pattern information is applied, a feature point extracting means 1 extracts feature points and their position information from the input character pattern information, which includes the stroke order of the character. An identification condition applying means 5 retrieves an identification condition from an identification condition storing means 4 for a category obtained by a feature point correspondence means 3, and decides whether the identification condition for that category is satisfied on the basis of the feature point position information found by the means 1 and the correspondence recognition result obtained from the feature point correspondence information. Since satisfaction of the identification condition is judged from the relation of feature point positions, independently of the number of strokes, characters written with connected strokes can also be decided accurately. Thus, online character recognition with a high recognition rate can be attained.

Description

[Detailed Description of the Invention] [Summary] The present invention relates to an online character recognition device that captures and recognizes handwritten character data in real time, and its object is online character recognition with a high recognition rate. The device is configured to comprise: feature point extraction means which receives input character pattern information including the stroke order and obtains feature points and their position information from that input character pattern information; feature point dictionary means which stores the feature points of dictionary characters; feature point matching means which associates, following the stroke order, the feature points of the input character with the feature points of the dictionary characters stored in the feature point dictionary means to obtain correspondence information, and which obtains candidate categories as the matching recognition result; identification condition storage means which stores an identification condition for each target category; and identification condition application means which searches the identification condition storage means with a category obtained by the feature point matching means to obtain an identification condition, and which determines whether the identification condition for that category is satisfied on the basis of the position information of the feature points obtained by the feature point extraction means and the matching recognition result from the feature point matching means.

[Industrial Application Field] The present invention relates to a character recognition device, and more particularly to an online character recognition device that captures and recognizes handwritten character data in real time.

[Conventional Technology]

A character recognition device reads characters that have already been written, using, for example, an image scanner, computes the distance between the feature points of each character and the feature points of pre-stored dictionary data, and selects the character at the smallest distance as the candidate character. This method of recognizing characters read by an image scanner cannot obtain the stroke order, and its recognition rate is low.

On the other hand, there are so-called online character recognition devices which, instead of inputting character data with an image scanner, read and recognize the information as it is written on, for example, a tablet.

Compared with input devices such as keyboards, this online character recognition device requires no special skill, so its range of application is wide and demand for it is high.

For example, its use as a means of entering handwritten characters at service counters directly into a computer is growing. Such an online character recognition device has the distinctive ability, compared with reading by a conventional image scanner, of obtaining stroke order information, and its recognition rate is accordingly higher than that of the recognition device described above.

The performance of online character recognition devices has improved in recent years. In particular, techniques that can also recognize run-on characters (characters written with connected strokes) have been developed, bringing the counter input application described above still closer to practical use.

However, the performance of identification between characters of similar shape is not yet sufficient, and high performance is also demanded for the identification of similarly shaped characters.

In general, an online character recognition device consists of a coarse classification stage, which outputs recognition candidates, and a detailed similar character identification stage, which takes the recognition candidates as input and determines the correct recognition result.

FIG. 5 is a block diagram of a conventional online character recognition device. The pattern input unit 10 is a device that converts handwritten characters into digital information, for example a digitizer. A digitizer usually consists of a writing surface and a stylus pen; when the writer writes on the surface with the stylus, the coordinates traced by the pen are detected. The detection result is passed to the input pattern storage unit 11 and stored. The digital information stored in the input pattern storage unit 11 is passed to the feature point extraction unit 12, which extracts feature points representing the character's features from the stored input pattern and stores them in the input feature point storage unit 13.

The feature point dictionary unit 14 stores, for each character category, the feature points of the dictionary character, that is, of the standard character pattern.

The feature point matching unit 15 associates the feature points of the input pattern, produced by the feature point extraction unit 12 and held in the input feature point storage unit 13, with the feature points of the dictionary characters stored in the feature point dictionary unit 14, following the order of the strokes. That is, from the feature points obtained by the feature point extraction unit 12, the distance between the input pattern and each dictionary character is computed stroke by stroke against the dictionary. The feature point matching unit 15 then stores the character codes of the dictionary characters giving small distances in the recognition code storage unit 16 as recognition candidate codes.

The identification condition storage unit 17 stores, for each character category, the conditions that identify an input as belonging to that category. The identification condition application unit 18 retrieves the identification conditions for the highest-ranking candidate codes stored in the recognition code storage unit 16 and applies them to the input pattern stored in the input pattern storage unit 11. As a result of this application, the code of the dictionary character that passes its identification condition is stored in the recognition result storage unit 19.

The feature point matching unit 15 described above performs its matching stroke by stroke, and depending on the matching, an incorrect correspondence may result. FIG. 6 is a table of the conventional identification conditions for matched input characters and dictionary characters.

Suppose the input character is "2" and the feature point matching unit 15 has matched it to the character "Z" in the feature point dictionary unit 14.

In such a case the identification condition storage unit 17 holds, for example, the condition for "Z" that "the beginning of the first stroke has no upwardly convex bend", and the identification condition application unit 18 thereby establishes that the dictionary character "Z" is not a candidate for the input character. Similarly, when the input character is "3" and the dictionary character "5" is a candidate character, the condition that "5" "has two or more strokes" removes the dictionary character "5" from the candidates.

Likewise, when the katakana "オ" is mistaken for "才", the identification condition for "才" that "the second and third strokes intersect by at least a certain amount" shows that the candidate is wrong.

Further, when "漢" is matched to "漠", the identification condition for "漠" that "the 9th and 12th strokes do not intersect" detects that the matching is erroneous.

In online handwritten character recognition, then, it is possible to judge whether a candidate character is correct from conditions such as the strokes, the stroke order, and the intersections of strokes described above. For this reason a recognition rate far higher than that of recognition from an image scanner or the like is obtained.

[Problem to Be Solved by the Invention] An online handwritten character recognition device of the kind described above can capture handwritten information in real time, and since this real-time capture also reveals the stroke order in which the character was written, its recognition rate is high.

In online handwritten input, however, the writer is not always the same person, and connected characters, that is, run-on characters, may be input.

FIG. 7 is a table of input characters that cannot be identified under the identification conditions described above. When "3" is entered with the pen lifted once between its upper and lower parts, the input character has two strokes and therefore also passes the identification condition for "5". When "オ" is written with its strokes connected, that is, with the first and second strokes joined, there is no third stroke, so the identification condition for "才" cannot be applied. Similarly, when the sanzui radical of the kanji "漢" is written in one motion, and part of what follows is connected as well, the 12th stroke no longer exists and the identification condition for "漢" cannot be applied.

Under these circumstances the writer regards a kanji written with connected strokes as the same kanji, but for the recognition device the connection changes the stroke count and recognition goes astray.

That is, similar character identification in the conventional recognition device applies the identification conditions directly to the input character in terms of stroke order. Consequently, to extract character features such as stroke connection, intersection, and bending as conditions, reference points must be obtained from the input pattern. For identification that also covers run-on characters, however, the places where an identification condition applies cannot be designated by stroke number. Furthermore, if a procedure for finding the reference points from the raw input data is attached to each identification condition, the storage capacity of the memory holding the conditions grows large and applying the conditions takes a long time.

The identification of "2" in FIG. 6 requires processing to find the beginning of the first stroke, and in examples like "3" and "漢" characters written with a non-standard stroke count cannot be handled. One might also consider storing a separate identification condition for every way of writing a character with a non-standard stroke count, but there are many such ways of writing, and in practice this cannot be done.

The object of the present invention is online character recognition with a high recognition rate.

[Means for Solving the Problem]

FIG. 1 is a functional block diagram of the present invention.

The feature point extraction means 1 receives input character pattern information including the stroke order, and obtains feature points and their position information from that input character pattern information. The input character pattern information is character information comprising the stroke order and the character pattern.

The feature point dictionary means 2 stores the feature points of dictionary characters.

The feature point matching means 3 associates, following the stroke order, the feature points of the input character pattern information with the feature points of the dictionary characters stored in the feature point dictionary means 2 to obtain correspondence information, and also obtains candidate categories as the matching recognition result.

The identification condition storage means 4 stores an identification condition for each target category.

The identification condition application means 5 searches the identification condition storage means 4 with a category obtained by the feature point matching means 3 to obtain an identification condition, and determines whether the identification condition for that category is satisfied on the basis of the position information of the feature points obtained by the feature point extraction means 1 and the matching recognition result from the feature point matching means 3.

[Operation]

When input pattern information is applied, the feature point extraction means 1 obtains the feature points and their position information from the input character pattern information, which includes the stroke order. The feature point matching means 3 then associates the feature points obtained by the feature point extraction means 1 with the feature points of the dictionary characters stored in the feature point dictionary means 2, obtains the correspondence information, and obtains candidate categories as the matching recognition result; candidate characters are thus produced by the feature point matching means 3. The identification condition application means 5 searches the identification condition storage means 4 with a category obtained by the feature point matching means 3 to obtain an identification condition, and determines whether the identification condition for that category is satisfied on the basis of the position information of the feature points obtained by the feature point extraction means 1 and the matching recognition result derived from the feature point correspondence information.

Because whether an identification condition is satisfied is judged from the relationship between feature point positions rather than from the stroke count, the input character can be determined accurately even when run-on characters occur.

[Embodiment]

The present invention is described in detail below with reference to the drawings.

FIG. 2 is a block diagram of the online character recognition device of the embodiment.

The pattern input unit 20 is, for example, a digitizer; when the writer traces input characters on the tablet with a stylus pen, the coordinates along which the pen moves are read, and the input pattern storage unit 21 stores this input information. The input information is stored in the order in which it was written with the stylus; that is, the input pattern information also includes the stroke order.

The feature point extraction unit 22 reads the input pattern information stored in the input pattern storage unit 21, extracts from it the feature points that represent the character's features, and stores them in the input feature point storage unit 23; it further stores each feature point's position within the input pattern information in the feature point position storage unit 24. The feature point matching unit 25 reads the input feature point information stored in the input feature point storage unit 23, matches it against the feature points of the dictionary characters stored in the feature point dictionary unit 26, and extracts the candidate characters with the smallest matching distances, for example the three closest characters. The extracted result is passed to the recognition candidate code storage unit 27, and the correspondence data produced by the feature point matching is stored in the feature point correspondence data storage unit 28.

A candidate code stored in the recognition candidate code storage unit 27 is the code assigned to a single character; each character has its own distinct code.

The identification condition application unit 29 receives the outputs of the input pattern storage unit 21, the feature point position storage unit 24, the feature point correspondence data storage unit 28, and the recognition candidate code storage unit 27. Using the information stored in these units 21, 24, 27, and 28 together with the identification conditions stored in the identification condition storage unit 30, it reads out the input-character feature points corresponding to the dictionary-character feature points in which a condition is described, and determines whether the identification condition is satisfied. That is, the identification condition application unit 29 reads an identification condition from the identification condition storage unit 30, finds the input-character feature points corresponding to the dictionary-character feature points used in that condition from the correspondence data between input-character and dictionary-character feature points stored in the feature point correspondence data storage unit 28, and further, from the data stored in the feature point position storage unit 24, finds the points on the input character pattern for those feature points.

The identification condition is then applied to the input pattern and its feature points.

That is, it is judged whether each candidate character satisfies the retrieved identification condition, and a candidate character satisfying its condition becomes the recognition result. When several candidate characters each satisfy their identification conditions, the one with the smallest distances between corresponding feature points is taken as the recognition result.

Compared with the conventional art shown in FIG. 5, the feature point extraction unit 22 not only stores the input feature points in the input feature point storage unit 23 but also stores their position information in the feature point position storage unit 24.

In addition, the feature point matching unit 25 stores the recognition candidate codes in the recognition candidate code storage unit 27 and stores the feature point correspondence data in the feature point correspondence data storage unit 28.

In the conventional scheme described earlier, the identification condition application unit 18 judges the identification conditions without using these feature point positions and feature point correspondence data. In the embodiment of the present invention, the judgment is made using the stored feature point positions and the feature point correspondence data, so misjudgment of run-on characters is guarded against. That is, whereas judgment was conventionally made with identification conditions tied to the stroke count, the embodiment of the present invention sets a plurality of feature points along the stroke order, first searches the feature point dictionary from those feature points to obtain the candidate characters, and the identification condition application unit 29 then determines, from the information stored in the feature point position storage unit 24, the input pattern storage unit 21, and further the feature point correspondence data storage unit 28, whether the locations designated by an identification condition satisfy it.

This result is stored in the recognition result storage unit 31.

FIG. 3 is a table of the feature point correspondences and the identification conditions of the embodiment. When the input character is "2" and its candidate character is "Z", the identification condition for "Z" refers to the input-character feature points A and B matched to the feature points A and B provided at the top of "Z"; by the condition [the part of the pattern between A and B has no upwardly convex bend], it can be determined that the input character is not "Z".

Conventionally, processing to find the beginning of the first stroke of "Z" was required; in the embodiment it is sufficient to test the convex bend only over the correspondence between feature points A and B.

Now suppose the input characters "3", "オ", and "漢" of FIG. 6 are entered as shown in FIG. 7. Under the identification conditions for the candidate characters "5", "才", and "漠", the conventional device either wrongly passed the condition or could not apply it at all. In the embodiment, as shown in FIG. 3, if the input character is "3", the condition [the input-character feature points A and B matched to feature points A and B of "5" lie on different strokes] establishes that the dictionary character "5" is not the recognition result. If the input is "オ", the condition is [for the input-character feature points A, B, C, and D matched to feature points A, B, C, and D of the candidate character "才", the segments A-B and C-D have no intersection beyond a fixed amount]; even if the first and second strokes of "オ" are written in one connected motion, it can thereby be clearly determined that this character is not "才". Similarly, when the sanzui radical of "漢" and strokes 8 to 10 are written in connected motions, the conventional identification condition "the 9th and 12th strokes do not intersect" is satisfied, so "漠" is wrongly accepted; with the identification condition of the embodiment, [for the input-character feature points A, B, C, and D matched to the feature points A, B, C, and D of the dictionary character "漠", the segments A-B and C-D do not intersect], it is seen that "漠" is not the recognition result. Here A and B of "漠" denote a segment of the 9th stroke, and C and D a segment of the 12th stroke. Because the present invention represents these segments not by whole strokes but by segments between feature points obtained by feature point extraction, the identification conditions can accurately determine whether a candidate character is correct.
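An intersection condition between feature point segments such as A-B and C-D can be tested with a standard orientation check. The following is a generic geometric sketch, not code from the patent, and it deliberately ignores the collinear touching cases a full implementation would have to handle.

```python
def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p): +1, -1, or 0."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    """True if segment a-b properly crosses segment c-d (collinear touching
    cases are ignored in this simplified sketch)."""
    return (orientation(a, b, c) != orientation(a, b, d) and
            orientation(c, d, a) != orientation(c, d, b))

# The diagonals of a square cross, so a condition such as "segments A-B and
# C-D do not intersect" would reject this feature point correspondence.
print(segments_intersect((0, 0), (10, 10), (0, 10), (10, 0)))  # prints True
```

Because the endpoints are matched feature points rather than stroke numbers, the same test works whether or not the writer connected the strokes.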

FIG. 4 is a configuration diagram of the identification condition application unit. The identification condition application unit 29 consists of an identification condition search unit 291, an input character condition creation unit 292, an input condition storage unit 293, and a determination unit 294.

The identification condition search section 291 reads recognition candidate codes one by one, in order, from the recognition candidate code storage section 27.

For each recognition candidate code read, it searches the identification condition storage section 30 for the identification condition whose identified character is that candidate, reads the condition out, and outputs it to the input character condition creation section 292.

The input character condition creation section 292 converts the identification condition, described in terms of the dictionary character, into one described in terms of the input character, using the data stored in the feature point correspondence data storage section 28 and the feature point position storage section 24, and stores the result in the input condition storage section 293. The determination section 294 then applies the condition stored in the input condition storage section 293 to the input character, using the input feature point data stored in the input feature point storage section 23 and the data stored in the input pattern storage section 21.

If the identification condition is satisfied, the recognition candidate code is stored in the recognition result storage section 31. Otherwise, the above operation is repeated in order for the next candidate code from the storage section 27.
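The flow through sections 291–294 described above — read candidate codes in ranked order, rewrite the dictionary-side condition in terms of the input character via the feature point correspondences, and accept the first candidate whose condition holds — can be sketched as follows. The storage sections are modeled as plain Python containers, and all names are illustrative, not from the patent:

```python
def apply_identification(candidate_codes, conditions, feature_map, input_points):
    """Return the first candidate whose identification condition holds, else None.

    candidate_codes: recognition candidates in ranked order (cf. section 27)
    conditions: dict code -> predicate over input-side feature points (cf. 30)
    feature_map: dict code -> {dictionary feature point index:
                 input feature point index} (cf. section 28)
    input_points: positions of the input character's feature points (cf. 23/21)
    """
    for code in candidate_codes:                 # section 291: read in order
        cond = conditions.get(code)
        if cond is None:                         # no condition stored: accept
            return code
        # section 292: express the dictionary-side condition in terms of the
        # input character by following the feature point correspondences.
        mapped = [input_points[feature_map[code][k]]
                  for k in sorted(feature_map[code])]
        if cond(mapped):                         # section 294: judge
            return code                          # section 31: recognition result
    return None
```

For example, a condition could be `lambda pts: not segments_cross(*pts)` over four matched feature points; candidates failing it are skipped and the next ranked code is tried, mirroring the loop in the text.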

As described above, the present invention extracts feature points from the strokes in the order they are written and discriminates using identification conditions defined on those feature points, so the intended character can be identified accurately, without error, even when strokes are run together.

[Effect of the Invention]

As described above, the present invention makes accurate character identification possible even for characters written with connected strokes, reduces the storage capacity required for the identification conditions, and further shortens the execution time.

[Brief Description of the Drawings]

Fig. 1 is a functional block diagram of the present invention; Fig. 2 is a configuration diagram of the online character recognition device of an embodiment; Fig. 3 is a chart of feature point correspondences and the identification conditions in one embodiment; Fig. 4 is a configuration diagram of the identification condition application section; Fig. 5 is a configuration diagram of a conventional online character recognition device; Fig. 6 is a chart of matched input characters, dictionary characters, and conventional identification conditions; and Fig. 7 is a chart of input characters that cannot be identified under those identification conditions.

1 ... feature point extraction means; 2 ... feature point dictionary means; 3 ... feature point matching means; 4 ... identification condition storage means; 5 ... identification condition application means.

Claims (1)

[Scope of Claims] An online handwritten character recognition device characterized by comprising: feature point extraction means (1) which receives input character pattern information including stroke order and obtains feature points and position information of the feature points from the input character pattern information; feature point dictionary means (2) which stores the feature points of dictionary characters; feature point matching means (3) which matches, following the stroke order, the feature points of the input character against the feature points of the dictionary characters stored in the feature point dictionary means (2) to obtain correspondence information, and also obtains candidate categories as matching recognition results; identification condition storage means (4) which stores an identification condition for each target category; and identification condition application means (5) which searches the identification condition storage means (4) using a category obtained by the feature point matching means (3) to obtain its identification condition, and determines whether the identification condition for that category is satisfied on the basis of the position information of the feature points obtained by the feature point extraction means (1) and the matching recognition results from the feature point matching means (3).
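The five claimed means form a pipeline: extract feature points with positions (1), match them against stored dictionary feature points (2, 3) to obtain candidate categories, then verify each candidate against its stored identification condition (4, 5). A hypothetical end-to-end sketch of that pipeline — the feature extraction and matching below are drastically simplified stand-ins for the patent's actual methods, and every name and the toy dictionary are illustrative:

```python
def recognize(strokes, dictionary, conditions):
    """Pipeline of the five claimed means over a toy representation.

    strokes: list of (x, y) point lists in writing order (input to means 1)
    dictionary: dict category -> list of dictionary feature points (means 2)
    conditions: dict category -> predicate over input feature points (means 4)
    """
    # Means (1): take stroke endpoints as feature points, keeping positions.
    feats = [p for s in strokes for p in (s[0], s[-1])]

    # Means (3), crudely simplified: candidate categories are those whose
    # dictionary entry has the same number of feature points as the input.
    candidates = [c for c, ref in dictionary.items() if len(ref) == len(feats)]

    # Means (5): look up each candidate's identification condition and test
    # it on the input feature point positions; first satisfied candidate wins.
    for c in candidates:
        cond = conditions.get(c, lambda pts: True)
        if cond(feats):
            return c
    return None
```

In the claimed device the matching step also yields per-point correspondence information, which the condition is evaluated against; the sketch keeps only the overall control flow.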
JP1038202A 1989-02-20 1989-02-20 Online handwriting recognition device Expired - Fee Related JP3066530B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1038202A JP3066530B2 (en) 1989-02-20 1989-02-20 Online handwriting recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1038202A JP3066530B2 (en) 1989-02-20 1989-02-20 Online handwriting recognition device

Publications (2)

Publication Number Publication Date
JPH02217981A true JPH02217981A (en) 1990-08-30
JP3066530B2 JP3066530B2 (en) 2000-07-17

Family

ID=12518757

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1038202A Expired - Fee Related JP3066530B2 (en) 1989-02-20 1989-02-20 Online handwriting recognition device

Country Status (1)

Country Link
JP (1) JP3066530B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0546860A2 (en) * 1991-12-11 1993-06-16 International Business Machines Corporation Character recognition
EP0548030A2 (en) * 1991-12-19 1993-06-23 Texas Instruments Incorporated Character recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58105388A (en) * 1981-12-16 1983-06-23 Matsushita Electric Ind Co Ltd Recognizing system for on-line handwritten character
JPS6089291A (en) * 1983-10-19 1985-05-20 Sharp Corp Character recognition method
JPS63301383A (en) * 1987-06-02 1988-12-08 Oki Electric Ind Co Ltd Handwritten character recognition device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58105388A (en) * 1981-12-16 1983-06-23 Matsushita Electric Ind Co Ltd Recognizing system for on-line handwritten character
JPS6089291A (en) * 1983-10-19 1985-05-20 Sharp Corp Character recognition method
JPS63301383A (en) * 1987-06-02 1988-12-08 Oki Electric Ind Co Ltd Handwritten character recognition device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0546860A2 (en) * 1991-12-11 1993-06-16 International Business Machines Corporation Character recognition
EP0546860A3 (en) * 1991-12-11 1994-05-11 Ibm Character recognition
EP0548030A2 (en) * 1991-12-19 1993-06-23 Texas Instruments Incorporated Character recognition
EP0548030A3 (en) * 1991-12-19 1994-05-11 Texas Instruments Inc Character recognition

Also Published As

Publication number Publication date
JP3066530B2 (en) 2000-07-17

Similar Documents

Publication Publication Date Title
TWI321294B (en) Method and device for determining at least one recognition candidate for a handwritten pattern
JP4787275B2 (en) Segmentation-based recognition
US5315667A (en) On-line handwriting recognition using a prototype confusability dialog
US7349576B2 (en) Method, device and computer program for recognition of a handwritten character
EP0114250B1 (en) Confusion grouping of strokes in pattern recognition method and system
US5515455A (en) System for recognizing handwritten words of cursive script
JPH0562391B2 (en)
JP2000353215A (en) Character recognition device and recording medium where character recognizing program is recorded
JPWO2014030399A1 (en) Object identification device, object identification method, and program
JPH02266485A (en) Information recognizing device
US5659633A (en) Character recognition method utilizing compass directions and torsion points as features
JP4188342B2 (en) Fingerprint verification apparatus, method and program
JPH02217981A (en) Device for recognizing on-line hand-written character
JP2761679B2 (en) Online handwritten character recognition device
JP2002163608A (en) Handwriting character recognizing device
CN112183538B (en) Manchu recognition method and system
JP3138665B2 (en) Handwritten character recognition method and recording medium
JP2671984B2 (en) Information recognition device
JP2851865B2 (en) Character recognition device
JP3365538B2 (en) Online character recognition method and apparatus
JP3017899B2 (en) Personal identification device and method
JPS60186980A (en) Recognition processing system for on-line handwritten character
JPS6022793B2 (en) character identification device
JPH0830717A (en) Character recognition method and device therefor
JPH0562392B2 (en)

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees