JPH04372085A - Character reading method - Google Patents

Character reading method

Info

Publication number
JPH04372085A
JPH04372085A JP3149038A JP14903891A
Authority
JP
Japan
Prior art keywords
character
pattern
frame
character string
projection pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP3149038A
Other languages
Japanese (ja)
Inventor
Tatsuo Yamamura
山村 辰男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuji Electric Co Ltd
Original Assignee
Fuji Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuji Electric Co Ltd filed Critical Fuji Electric Co Ltd
Priority to JP3149038A priority Critical patent/JPH04372085A/en
Publication of JPH04372085A publication Critical patent/JPH04372085A/en
Pending legal-status Critical Current

Abstract

PURPOSE: To eliminate the cases in which characters cannot be read because the position of the cutout frame for each character shifts with the width of the leftmost character, as happens in conventional character-string reading, where the cutout frames are set from the upper-end and left-end positions of the rectangle circumscribing the character string and the character pattern within each frame is compared with registered character patterns. CONSTITUTION: The upper-end position 12 of a character string 11 is determined from the X-direction projection pattern 16 of the binary image of the character string 11, and the center position 13 of the leftmost character is determined from the Y-direction projection pattern 17 of the same character string. Using these two positions 12 and 13 as references, each character frame 14 is set at the correct position regardless of the width of the leftmost character. To read a character, its circumscribing rectangle 15 is then detected from the X- and Y-direction projection patterns of the character within the frame 14, and the character pattern within this rectangle 15 is collated with registered character patterns.

Description

[Detailed Description of the Invention]

[0001]

[Field of Industrial Application] The present invention relates to a method of capturing an image of a character string as an electrical signal using a television camera or the like and reading each character of the character string from that signal.

[0002]

[Prior Art] Conventionally, this type of character reading method sets a cutout frame at the position where each character should appear, based on the upper-end and left-end positions of a rectangle circumscribing the character string, then cuts out the circumscribing rectangle of the character within each cutout frame and matches the character pattern within this circumscribing rectangle against registered character patterns to determine the character to be read.

[0003]

[Problems to be Solved by the Invention] In the above character reading method, however, the position of each character's cutout frame differs depending on whether the leftmost character is a wide character such as "4" or a narrow character such as "1", so adjacent characters may become impossible to separate, and a whole character may fail to be cut out correctly. It is therefore an object of the present invention to provide a character reading method that eliminates this difficulty of separating the characters caused by the type of character at the head of the character string.

[0004]

[Means for Solving the Problems] To solve the above problems, the character reading method of the present invention sets a cutout frame (such as character frame 14) at the position where each character should appear in a binarized image of a character string (such as 11) arranged in the X or Y direction, and determines each character to be read by collating the character pattern within this frame (within the character circumscribing rectangle 15 or the like) with character patterns registered in advance (in the dictionary pattern memory 9 or the like). In this method, the upper-end position (12 or the like) of the character string is obtained from its X-direction projection pattern, the center position (13 or the like) of the leftmost character is obtained from its Y-direction projection pattern, and the cutout frames are set using the point determined by this upper-end position and leftmost-character center position as the reference point.
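The frame-setting step can be pictured with a short sketch. The Python code below is only an illustration under stated assumptions: the character pitch and frame size are taken as preset constants (the patent says only that the frames are set at the positions where the characters should appear), and all function and variable names are hypothetical, not taken from the disclosure.

```python
# Minimal sketch of setting cutout frames from the two references: the
# upper-end position of the string (12) and the leftmost-character center (13).
# Pitch and frame size are assumed to be known in advance.

def set_cutout_frames(top_y, first_center_x, n_chars,
                      char_width, char_height, pitch):
    """Return one (x, y, w, h) cutout frame per expected character."""
    frames = []
    for i in range(n_chars):
        center_x = first_center_x + i * pitch        # expected center of character i
        x = int(round(center_x - char_width / 2))    # frame anchored on that center
        frames.append((x, top_y, char_width, char_height))
    return frames

# Example: six characters, 24 px wide, 32 px tall, on a 30 px pitch.
frames = set_cutout_frames(top_y=40, first_center_x=55, n_chars=6,
                           char_width=24, char_height=32, pitch=30)
```

Because every frame is derived from the leftmost-character center rather than from the left edge of the string, a wide or narrow first character leaves the frame positions unchanged.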

[0005]

[Operation] In addition to detecting the upper-end and left-end positions of the character string, the center position of the character pattern separated at the left end is obtained from the vertical (Y-direction) projection pattern of the character string and taken anew as the left-end reference of the string. As a result, even if a character of a different width appears at the left end, a character cutout frame is generated at the correct position for every character and reading is performed.
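As a small numeric illustration of this point (the pixel values are made up for the example, not taken from the patent), the snippet below compares frame origins anchored on the left edge of the leftmost character with frame origins anchored on its center: a wide "4" and a narrow "1" can share the same center while their left edges differ, so edge-anchored frames shift with the first character and center-anchored frames do not.

```python
# Compare frame origins for a wide and a narrow leftmost character.
pitch, char_width = 30, 24

wide_4 = (43, 67)    # left/right extent of a wide leftmost character
narrow_1 = (52, 58)  # left/right extent of a narrow leftmost character

for name, (left, right) in (("wide '4'", wide_4), ("narrow '1'", narrow_1)):
    center = (left + right) / 2
    edge_anchored = [left + i * pitch for i in range(3)]
    center_anchored = [center - char_width / 2 + i * pitch for i in range(3)]
    print(name, "edge-anchored frame origins:  ", edge_anchored)
    print(name, "center-anchored frame origins:", center_anchored)
```

Running this shows the edge-anchored origins shifting by 9 px between the two cases, while the center-anchored origins are identical.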

[0006]

[Embodiment] An embodiment of the present invention will now be described with reference to FIGS. 1 and 2. FIG. 2 is a block diagram showing the configuration of an embodiment of the present invention, and FIG. 1 is a diagram explaining the operation of FIG. 2, including the character strings to be read. In FIG. 1, the digits "2, 3, ..., 7" and "1, 2, ..., 6", arranged horizontally in two rows, are the character strings 11 to be read. The configuration and operation of FIG. 2 are explained below with reference to FIG. 1. Reference numeral 1 denotes a TV camera serving as a photoelectric conversion sensor that images the object (in this case the character string to be read); it raster-scans the imaged field and outputs an analog image signal representing the gray levels of the object. Numeral 2 denotes an A/D converter that converts the gray-scale image signal from the camera 1 into a binary digital signal (binarized signal) 2a. This binarized signal 2a is input to the image memory 7, where it is stored as the binarized image of the character string 11, and is also input to the X projection unit 3. The X projection unit 3 receives the converted binarized signal 2a and extracts the projection pattern 16 of the character string 11 in the X direction, as shown in FIG. 1. A microprocessor (abbreviated MPU) 8 then reads this projection pattern 16 from the top and detects the upper end 12 of the character string as the first position at which the image appears. Next, the Y projection unit 4 scans the image data of the character string stored in the image memory 7 in the Y direction and extracts the projection pattern 17 in the Y direction, as shown in FIG. 1. The MPU 8 reads this Y-direction projection pattern 17 from the left and obtains the center position in the X direction (leftmost-character center position) 13 of the first cluster of the pattern (that is, the projection pattern of the leftmost character). In the same manner, the center position of the rightmost cluster of the projection pattern is also obtained.
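The work of the X projection unit 3, the Y projection unit 4, and the MPU 8 described above can be sketched in Python with NumPy. The sketch assumes the binarized image of a single character row is available as a 2-D array with character pixels set to 1; the function name and array handling are assumptions made for illustration, not part of the disclosed hardware.

```python
import numpy as np

def find_reference_point(binary):
    """Return (top_y, left_center_x) for a binarized image of one character row."""
    x_projection = binary.sum(axis=1)   # pixel count per row    (pattern 16)
    y_projection = binary.sum(axis=0)   # pixel count per column (pattern 17)

    # Upper-end position 12: first row, read from the top, in which the image appears.
    top_y = int(np.argmax(x_projection > 0))

    # Leftmost-character center 13: center of the first non-zero run (cluster)
    # of the Y-direction projection, read from the left.
    cols = np.flatnonzero(y_projection > 0)
    gaps = np.flatnonzero(np.diff(cols) > 1)             # where one cluster ends
    first_run_end = cols[gaps[0]] if gaps.size else cols[-1]
    left_center_x = int((cols[0] + first_run_end) // 2)

    return top_y, left_center_x
```

The center of the rightmost cluster mentioned in the paragraph could be obtained the same way by walking the column runs from the right.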

[0007] Next, on the basis of the upper-end position 12 and the leftmost-character center position 13 detected in this way, the frame generation unit 5 generates the preset character cutout frames 14 at the positions where the respective characters should appear, scans the image memory 7 within each frame 14, and causes the image memory 7 to output the image data within the frame 14. The X projection unit 3 and the Y projection unit 4 receive this image data and extract the X- and Y-direction projection patterns within the frame 14, and the MPU 8 reads out these projection patterns, detects the upper and lower ends of the X-direction projection pattern and the left and right ends of the Y-direction projection pattern, and thereby obtains the circumscribing rectangle 15 of the character. The matching processing unit 6 then reads from the image memory the character pattern cut out by this character circumscribing rectangle 15, performs matching between this character pattern and the dictionary character patterns registered in the dictionary pattern memory 9 (likewise taken within their circumscribing rectangles), and computes similarities. From the detected similarities, the character to be read is recognized as the dictionary character with the highest similarity.
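The per-frame steps of this paragraph can likewise be sketched in Python: detect the circumscribing rectangle 15 from the projections inside a cutout frame 14, then collate the cut-out pattern with dictionary patterns and keep the most similar one. The nearest-neighbour resize to a common size and the normalized-correlation similarity are assumptions made for the illustration; the patent does not specify the similarity measure or normalization.

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize, kept dependency-free for this sketch."""
    h, w = img.shape
    ys = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    xs = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    return img[np.ix_(ys, xs)]

def circumscribing_rectangle(frame_img):
    """Return (top, bottom, left, right) of the character inside one frame."""
    rows = np.flatnonzero(frame_img.sum(axis=1) > 0)   # X-direction projection
    cols = np.flatnonzero(frame_img.sum(axis=0) > 0)   # Y-direction projection
    return rows[0], rows[-1], cols[0], cols[-1]

def read_character(frame_img, dictionary, size=(16, 16)):
    """Collate the cut-out character with dictionary patterns by similarity."""
    top, bottom, left, right = circumscribing_rectangle(frame_img)
    char = frame_img[top:bottom + 1, left:right + 1].astype(float)
    char = resize_nearest(char, size)                  # bring to the dictionary size

    best_label, best_score = None, -1.0
    for label, pattern in dictionary.items():          # patterns already of shape `size`
        p = pattern.astype(float)
        score = float((char * p).sum() /
                      (np.linalg.norm(char) * np.linalg.norm(p) + 1e-9))
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score                      # most similar dictionary character
```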

[0008]

[Effects of the Invention] According to the present invention, the upper-end position 12 of a character string is obtained from the X-direction projection pattern of the binarized image of the character string, the center position 13 of the leftmost character is then obtained from the Y-direction projection pattern of that image, the preset character frames 14 are set at the positions where the characters should appear using these positions 12 and 13 as the reference, the character circumscribing rectangle 15 is detected within each frame, and the character to be read is recognized by matching against the registered dictionary character patterns. Consequently, whether the leftmost character is wide or narrow, a character cutout frame can be set at the correct position for each character regardless of the character type, and each character can be separated from its neighbors and cut out and read accurately.

[Brief Description of the Drawings]

[FIG. 1] An explanatory diagram of the present invention, including the characters to be read

[FIG. 2] A block diagram showing the configuration of an embodiment of the present invention

[Explanation of Reference Numerals]

1  TV camera
2  A/D converter
3  X projection unit
4  Y projection unit
5  Frame generation unit
6  Matching processing unit
7  Image memory
8  Microprocessor (MPU)
9  Dictionary pattern memory
10 MP bus
11 Character string
12 Upper-end position
13 Leftmost-character center position
14 Character frame
15 Character circumscribing rectangle
16 X-direction projection pattern
17 Y-direction projection pattern

Claims (1)

[Claims]
[Claim 1] A character reading method in which a cutout frame is set at the position where each character should appear in a binarized image of a character string arranged in the X or Y direction, and each character to be read is determined by collating the character pattern to be read within this frame with character patterns registered in advance, characterized in that the upper-end position of the character string is obtained from the X-direction projection pattern of the character string, the center position of the leftmost character is obtained from the Y-direction projection pattern of the character string, and the cutout frames are set using the point determined by this upper-end position and leftmost-character center position as the reference point.
JP3149038A 1991-06-21 1991-06-21 Character reading method Pending JPH04372085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3149038A JPH04372085A (en) 1991-06-21 1991-06-21 Character reading method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP3149038A JPH04372085A (en) 1991-06-21 1991-06-21 Character reading method

Publications (1)

Publication Number Publication Date
JPH04372085A true JPH04372085A (en) 1992-12-25

Family

ID=15466301

Family Applications (1)

Application Number Title Priority Date Filing Date
JP3149038A Pending JPH04372085A (en) 1991-06-21 1991-06-21 Character reading method

Country Status (1)

Country Link
JP (1) JPH04372085A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014071866A (en) * 2012-10-02 2014-04-21 Nidec Sankyo Corp Image processor, image processing method, and program

Similar Documents

Publication Publication Date Title
US7170647B2 (en) Document processing apparatus and method
JPH04372085A (en) Character reading method
JP2868134B2 (en) Image processing method and apparatus
JP3187894B2 (en) Document image tilt detection method
JPH0548510B2 (en)
JP3223878B2 (en) Character string collating device, method and recording medium
JP3187895B2 (en) Character area extraction method
JPH07230525A (en) Method for recognizing ruled line and method for processing table
JPH02273884A (en) Detecting and correcting method for distortion of document image
JPS6254380A (en) Character recognizing device
JP3850488B2 (en) Character extractor
JP2998443B2 (en) Character recognition method and device therefor
JP3162414B2 (en) Ruled line recognition method and table processing method
JP4439054B2 (en) Character recognition device and character frame line detection method
JPH0660220A (en) Area extracting method for document image
JP3100619B2 (en) Photo region extraction device
JP2004152048A (en) Vehicle number reading device
JPS63101983A (en) Character string extracting system
JP2843638B2 (en) Character image alignment method
JPH04311283A (en) Line direction discriminating device
JP2931041B2 (en) Character recognition method in table
JPH05128305A (en) Area dividing method
JPH05135202A (en) Document picture reader
JPH01140274A (en) Character row recognition system
JPH02166583A (en) Character recognizing device