JPH02240791A - Character recognizing device - Google Patents

Character recognizing device

Info

Publication number
JPH02240791A
JPH02240791A
Authority
JP
Japan
Prior art keywords
character
input
recognition
feature
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP1061760A
Other languages
Japanese (ja)
Inventor
Tadayuki Morishita
森下 賢幸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Priority to JP1061760A priority Critical patent/JPH02240791A/en
Publication of JPH02240791A publication Critical patent/JPH02240791A/en
Pending legal-status Critical Current

Landscapes

  • Character Discrimination (AREA)

Abstract

PURPOSE: To recognize character patterns having positional deviations and deformations by extracting features from an input character pattern and having a neural network learn with those features as input. CONSTITUTION: A character is read by a photodiode array 28, features are extracted from the character data by a feature extraction layer 29 and input to an intermediate layer 30 of a neural network, and a recognition result is output from an output layer 31. When this recognition result differs from the input character, the correct recognition result is input as a teacher signal 32. Learning is then performed by varying the weights of the neural network by the backpropagation method so that the difference between the teacher signal and the network's recognition result becomes small, and learning is repeated until correct recognition is achieved. In this way, character recognition can be performed while positional deviations and deformations of the input character pattern are reduced by feature extraction planes arranged over the entire input plane.

Description

DETAILED DESCRIPTION OF THE INVENTION Field of Industrial Application: This invention relates to a character recognition device and can be used, for example, as an input device for entering handwritten characters into a computer.

Prior Art: Conventionally, when character recognition is performed using a neural network, the input character pattern is divided into m rows and n columns of pixels and fed to the network, and learning is carried out by methods such as backpropagation so that correct character recognition is obtained from the m×n pixel values.

For example, a hierarchical network consisting of an input layer, an intermediate layer, and an output layer is used, as shown in FIG. 5. The input layer consists of 64 (= 8×8) pixels 34, the intermediate layer has eight units, and the output layer consists of four units 35. Each unit of the input layer is connected to every unit of the intermediate layer, and each unit of the intermediate layer is connected to every unit of the output layer. Each unit 35 of the intermediate and output layers is a neuron that uses a sigmoid function as its threshold function.
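As a rough illustration, the conventional 64-8-4 sigmoid network described here can be sketched as follows; the random weights and input are placeholders, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the example: 64 (= 8x8) input pixels,
# 8 intermediate units, 4 output units; weights are random placeholders.
W1 = rng.normal(scale=0.1, size=(8, 64))   # input -> intermediate
W2 = rng.normal(scale=0.1, size=(4, 8))    # intermediate -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(pixels):
    """Fully connected forward pass with sigmoid threshold units."""
    hidden = sigmoid(W1 @ pixels)   # every input unit feeds every hidden unit
    return sigmoid(W2 @ hidden)     # every hidden unit feeds every output unit

out = forward(rng.integers(0, 2, size=64).astype(float))
print(out.shape)  # (4,)
```

Each output is a sigmoid activation in (0, 1), one per recognizable character class in this four-class example.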

Problems to Be Solved by the Invention: Consider, for example, the case where an input pattern is divided into 5×5 pixels as shown in FIG. 6 and learning is performed with these as input. Since this input can be regarded as a string of 25 data values, even for the same input pattern, a shift of one row produces a completely different data string, as shown in FIG. 7, and recognition fails. The same problem occurs when the input pattern is deformed, so with this method it is extremely difficult to recognize character patterns, such as handwritten characters, that are shifted in position or deformed.
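The sensitivity to a one-row shift can be seen in a small sketch; the horizontal-bar pattern below is an illustrative stand-in, not a figure from the patent:

```python
import numpy as np

# A 5x5 input pattern (1 = stroke pixel): a horizontal bar on the middle row.
pattern = np.zeros((5, 5), dtype=int)
pattern[2, :] = 1

flat = pattern.flatten()                         # the 25-value data string
shifted = np.roll(pattern, 1, axis=0).flatten()  # same bar, one row lower

# All five stroke pixels land on different indices, so the two data
# strings disagree in ten positions and share no '1' position at all.
mismatches = int(np.sum(flat != shifted))
print(mismatches)  # 10
```

To a network fed the raw flattened string, the shifted copy looks like an unrelated pattern, which is exactly the failure the invention addresses.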

An object of the present invention is to provide a character recognition device capable of recognizing character patterns that are shifted in position or deformed.

Means for Solving the Problems: The character recognition device of the present invention extracts the features contained in an input character pattern and constitutes a neural network that learns using those features as input. The feature extraction planes are arranged over the entire input plane, shifted one row and one column at a time.

Operation: By the above means, character recognition can be performed while positional shifts and deformations of the input character pattern are reduced by the feature extraction planes arranged over the entire input plane.

Embodiment: FIG. 1 shows the feature extraction planes of one embodiment of a character recognition device to which the present invention is applied. In this example, the 3×3 feature extraction planes A, B, ... shown in FIG. 1(b) exist for the 5×5 input plane shown in FIG. 1(a), and each corresponds to the pixel at the numbered position on the input plane. If a pattern such as those shown in FIG. 2 exists within a feature extraction plane, the value "1" is assigned; if not, the value "0" is assigned. These values are input to the neural network, and learning is performed by backpropagation. In FIG. 2, 26 denotes a pixel where no pattern is present and 27 denotes a pixel where a pattern is present.
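A minimal sketch of the feature extraction planes, assuming three hypothetical 3×3 stroke templates in place of the 27 patterns of FIG. 2: each template is compared against every 3×3 window of the 5×5 input plane, shifted one row and one column at a time, and a 1 or 0 is emitted per position.

```python
import numpy as np

# Hypothetical 3x3 feature templates; illustrative stand-ins only,
# not the actual patterns of FIG. 2.
TEMPLATES = [
    np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),  # vertical stroke
    np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]),  # horizontal stroke
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),  # diagonal stroke
]

def extract_features(image):
    """Slide a 3x3 window one row/column at a time over the whole input
    plane; emit 1 where a window exactly matches a template, else 0."""
    rows, cols = image.shape
    out = []
    for t in TEMPLATES:
        for r in range(rows - 2):
            for c in range(cols - 2):
                window = image[r:r + 3, c:c + 3]
                out.append(1 if np.array_equal(window, t) else 0)
    return np.array(out)

image = np.zeros((5, 5), dtype=int)
image[1:4, 2] = 1                 # a short vertical stroke
features = extract_features(image)
print(features.shape)  # (27,) = 3 templates x 9 window positions
```

Because the window is evaluated at every position, the same stroke shifted by a row still fires a detector, just at a neighboring position; this is what reduces the positional sensitivity of the raw-pixel approach.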

FIG. 3 shows the configuration of the character recognition device of the present invention. A character is read by a photodiode array 28, features are extracted from the character data by a feature extraction layer 29 and input to an intermediate layer 30 of the neural network, and the recognition result is output from an output layer 31. When this recognition result differs from the input character, the correct recognition result is input as a teacher signal 32. Learning is performed by changing the weights of the neural network by the backpropagation method so that the difference between the teacher signal and the network's recognition result becomes small, and learning is repeated until correct recognition is achieved. Character data may also be entered by handwriting on a tablet instead of being read optically. The method of the present invention is applied to the feature extraction layer 29. A flowchart of the operation of this character recognition device is shown in FIG. 4.
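The learn-until-correct loop can be sketched as below; the dimensions, learning rate, and single training example are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions: 27 extracted features in, 8 intermediate units, 4 outputs.
W1 = rng.normal(scale=0.5, size=(8, 27))
W2 = rng.normal(scale=0.5, size=(4, 8))

features = rng.integers(0, 2, size=27).astype(float)  # placeholder feature input
teacher = np.array([0.0, 1.0, 0.0, 0.0])              # teacher signal (correct class)

lr = 0.5
for step in range(10000):
    # Forward pass through the intermediate and output layers.
    hidden = sigmoid(W1 @ features)
    output = sigmoid(W2 @ hidden)
    if np.argmax(output) == np.argmax(teacher) and np.max(np.abs(teacher - output)) < 0.1:
        break  # correct recognition reached; stop repeating
    # Backpropagation: change the weights so the teacher/output difference shrinks.
    delta_out = (output - teacher) * output * (1 - output)
    delta_hid = (W2.T @ delta_out) * hidden * (1 - hidden)
    W2 -= lr * np.outer(delta_out, hidden)
    W1 -= lr * np.outer(delta_hid, features)
```

The loop mirrors the flow described above: recognize, compare with the teacher signal, adjust weights by backpropagation, and repeat until the recognition is correct.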

In this embodiment, the feature extraction planes are arranged over the entire input plane to take advantage of the speed of parallel processing. To simplify the circuit, however, the features may instead be extracted by changing the connection between a feature extraction plane and the input plane over time so as to scan the entire input plane sequentially.

Since the features extracted by a feature extraction plane consist of 3×3 pixels, 2⁹ (= 512) combinations are conceivable, but only those that can actually appear in character patterns and are useful for distinguishing characters need be used for extraction. In this embodiment, extraction is performed for the 27 features shown in FIG. 2, for example. Although 3×3 pixel features are extracted in this example, the window may contain more pixels, such as 4×3 or 5×5, and may be square or rectangular.

The feature extraction planes may be constructed with fixed connections so as to produce outputs corresponding to the features shown in FIG. 2, for example. Alternatively, this part may itself be constructed as a neural network, trained in advance by backpropagation to produce outputs corresponding to the features of FIG. 2, and then used as the feature extraction planes.

Effects of the Invention: As described above, according to the present invention, character patterns that are shifted in position or deformed can be recognized by extracting features from the input character pattern and having a neural network learn with those features as input.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1(a) and 1(b) are diagrams of the character input plane ((a)) and the feature extraction planes ((b)) of a character recognition device according to an embodiment of the present invention; FIG. 2 is a diagram of the feature patterns extracted by the character recognition device of the present invention; FIG. 3 is a configuration diagram of the character recognition device of the present invention; FIG. 4 is a flowchart of character recognition by the character recognition device of the present invention; FIG. 5 is a diagram of a conventional character recognition device using a neural network; and FIGS. 6 and 7 are diagrams showing examples of character reading by a conventional character recognition device.

1-25: character reading pixels; 26, 36, 40: pixels where a pattern is present; 27, 37, 39: pixels where no pattern is present; 28: photodiode array; 29: feature extraction layer; 30: intermediate layer; 31: output layer; 32: teacher signal; 33: learning control unit using the backpropagation method; 34: input layer; 35: neurons.

Name of agent: Patent attorney Shigetaka Awano and one other

Claims (1)

What is claimed is: A character recognition device comprising a neural network, wherein features are extracted from an input character pattern and the types of said features are learned by the neural network through backpropagation.
JP1061760A 1989-03-14 1989-03-14 Character recognizing device Pending JPH02240791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1061760A JPH02240791A (en) 1989-03-14 1989-03-14 Character recognizing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1061760A JPH02240791A (en) 1989-03-14 1989-03-14 Character recognizing device

Publications (1)

Publication Number Publication Date
JPH02240791A true JPH02240791A (en) 1990-09-25

Family

ID=13180426

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1061760A Pending JPH02240791A (en) 1989-03-14 1989-03-14 Character recognizing device

Country Status (1)

Country Link
JP (1) JPH02240791A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5481621A (en) * 1992-05-28 1996-01-02 Matsushita Electric Industrial Co., Ltd. Device and method for recognizing an image based on a feature indicating a relative positional relationship between patterns
US5793932A (en) * 1992-05-28 1998-08-11 Matsushita Electric Industrial Co., Ltd. Image recognition device and an image recognition method
US6101270A (en) * 1992-08-31 2000-08-08 International Business Machines Corporation Neural network architecture for recognition of upright and rotated characters
JP2009217454A (en) * 2008-03-10 2009-09-24 Kyodo Printing Co Ltd Character recognition method, character recognition device, and character recognition program

Similar Documents

Publication Publication Date Title
US5067164A (en) Hierarchical constrained automatic learning neural network for character recognition
Kussul et al. Improved method of handwritten digit recognition tested on MNIST database
US5058179A (en) Hierarchical constrained automatic learning network for character recognition
Radzi et al. Character recognition of license plate number using convolutional neural network
EP0907140B1 (en) Feature extraction device
Das et al. Handwritten arabic numeral recognition using a multi layer perceptron
US5511134A (en) Image recognition device and image recognition method
Hurlbert et al. Making machines (and artificial intelligence) see
Cohen Event-based feature detection, recognition and classification
Fukushima Character recognition with neural networks
CN110738213B (en) Image identification method and device comprising surrounding environment
JPH02240791A (en) Character recognizing device
CN117173422A (en) Fine granularity image recognition method based on graph fusion multi-scale feature learning
US4318083A (en) Apparatus for pattern recognition
Koyuncu et al. Handwritten character recognition by using convolutional deep neural network; review
Mori et al. Neural networks that learn to discriminate similar Kanji characters
Choi et al. Biologically motivated visual attention system using bottom-up saliency map and top-down inhibition
Fukushima et al. Symmetry axis extraction by a neural network
Kussul et al. LIRA neural classifier for handwritten digit recognition and visual controlled microassembly
JPH0367381A (en) Character recognition device
Kamentsky Pattern and character recognition systems: Picture processing by nets of neuron-like elements
JPH02240790A (en) Character recognizing device
US5420939A (en) Method and apparatus for a focal neuron system
AU2021107299A4 (en) A system for deep neural network based handwritten digit classification for low resource bengali script
JPH0644376A (en) Image feature extracting device, image recognizing method and image recognizing device