JPH0283688A - Character recognition system - Google Patents

Character recognition system

Info

Publication number
JPH0283688A
JPH0283688A JP63237065A JP23706588A
Authority
JP
Japan
Prior art keywords
pattern
feature
character
degree
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP63237065A
Other languages
Japanese (ja)
Inventor
Atsushi Tsukumo
津雲 淳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to JP63237065A priority Critical patent/JPH0283688A/en
Publication of JPH0283688A publication Critical patent/JPH0283688A/en
Pending legal-status Critical Current

Abstract

PURPOSE: To improve character recognition performance by computing the dissimilarity between a feature pattern and a reference pattern through non-linear matching between each scalar-quantity series of the feature pattern and the corresponding scalar-quantity series of the reference pattern, and by ranking the dissimilarities to output a classification result.

CONSTITUTION: A feature extracting part 2 reads a character pattern signal from a character pattern storage part 1 and outputs, as a signal, a feature pattern in which the feature of each partial area obtained by dividing the character pattern is described as feature series in plural directions. A feature pattern storage part 3 stores the feature pattern signal output from the feature extracting part 2. A classification processing part 5 reads the feature pattern signal from the feature pattern storage part 3 and reads a reference pattern signal for every character category to be read from a reference pattern storage part 4. The dissimilarity for every direction is then found in each partial area, the classification result is decided by ranking the dissimilarities, and a classification result signal is output. In this way, a dissimilarity can be obtained in which the input character pattern and the reference pattern of each character category are brought into partial correspondence, which improves character recognition performance.

Description

[Detailed Description of the Invention]

(Industrial Field of Application) The present invention relates to character recognition technology.

(Prior Art) With the diversification of information processing systems, a variety of data input methods are required. Among them, character recognition technology is being put to practical use as a promising data input method. However, current character recognition technology is still far inferior to humans in character reading performance, and character reading devices with higher recognition performance are desired.

To raise character recognition performance, improvements are being pursued in each of the stages that constitute character recognition: preprocessing, feature extraction, classification/identification, and post-processing. With regard to feature extraction, the effectiveness of directional features has been shown, for example, in the literature (Transactions of the IEICE (D), Vol. J65-D, No. 5, pp. 550-557, Saito, Yamada, and Yamamoto: "Analysis of Handwritten Kanji by the Directional Pattern Matching Method"). The general method of extracting directional features is first to obtain the distribution of directional components on the character pattern plane, and then to divide the character pattern into a plurality of partial regions and integrate the directional components within each partial region; this yields a directional feature pattern in which the distribution of the directional components is compressed.
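As a rough illustration of this general procedure, the sketch below (in Python; it is not taken from the patent, and the array layout, the four-direction coding, and the 4x4 grid of partial regions are assumptions made only for the example) accumulates per-pixel direction components over a grid of partial regions to produce a compressed directional feature pattern:

    import numpy as np

    def directional_feature_pattern(direction_planes, grid=(4, 4)):
        # direction_planes: array of shape (K, H, W); plane k holds the strength
        # of direction component k at each pixel of the character pattern.
        # Returns an array (rows, cols, K): one integrated value per direction
        # per partial region, i.e. the compressed directional feature pattern.
        K, H, W = direction_planes.shape
        rows, cols = grid
        feature = np.zeros((rows, cols, K))
        for m in range(rows):
            for n in range(cols):
                r0, r1 = m * H // rows, (m + 1) * H // rows
                c0, c1 = n * W // cols, (n + 1) * W // cols
                # integrate each direction component over this partial region
                feature[m, n] = direction_planes[:, r0:r1, c0:c1].sum(axis=(1, 2))
        return feature

    print(directional_feature_pattern(np.random.rand(4, 32, 32)).shape)  # (4, 4, 4)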

In the classification/identification processing of character recognition using such directional feature patterns, simple superposition processing, shift-matching processing, and the like are applied, as shown for example in the literature (IEICE Technical Report on Pattern Recognition and Understanding, PRU87-104, pp. 41-48, Feb. 1988: "Handwritten Kanji Recognition Based on Hierarchical Displacement Correction Processing"). On the other hand, in order to improve the performance of directional features, an improved feature extraction method has been attempted in which a feature that extends two-dimensionally within a partial region is expressed as a set of feature series in a plurality of directions, as shown in the application "Feature Extraction Method," filed on the same day as the present application by the same applicant and the same inventor.

(Problems to Be Solved by the Invention) Matching is performed in the classification/identification processing of character recognition; to cope with the wide variety of character deformations, however, superposing patterns partially allows more flexible matching than superposing the whole pattern at once. It is therefore conceivable to apply the shifting technique used in the above-cited literature (IEICE Technical Report PRU87-104: "Handwritten Kanji Recognition Based on Hierarchical Displacement Correction Processing") to the feature series in a plurality of directions within partial regions as shown in the above-mentioned patent application. This permits more accurate matching than simple matching, but the problem remains that it cannot cope with local deformations of the character pattern.

(Means for Solving the Problems) According to the present invention, there is provided a character recognition system comprising: a character pattern storage unit that stores a character pattern given as a two-dimensional lattice array; a feature extraction unit that reads the character pattern from the character pattern storage unit, determines directional components, creates directional patterns, and creates a feature pattern by obtaining, in the partial regions of the directional patterns corresponding to the partial regions into which the character pattern is divided, partial projections along a scanning direction determined for each directional component, thereby describing the feature information of each partial region as a series of scalar quantities for each directional component; a feature pattern storage unit that stores the feature pattern obtained by the feature extraction unit; a reference pattern storage unit that stores a reference pattern for each character category in the same description format as the feature pattern in the feature pattern storage unit; and a classification processing unit that reads the feature pattern from the feature pattern storage unit, reads the reference pattern for each character category from the reference pattern storage unit, calculates the dissimilarity of each partial region between the feature pattern and each reference pattern by performing non-linear matching between each scalar-quantity series of the feature pattern and the corresponding scalar-quantity series of each reference pattern, calculates the dissimilarity between the feature pattern and each reference pattern from the dissimilarities of the partial regions, ranks the dissimilarities, and outputs a classification result.

(Operation) The principle of the present invention will now be explained with reference to the drawings. As shown in Fig. 2, assume that the feature of one partial region is projected in four directions (horizontal, upward to the right, vertical, and upward to the left) and is described in each direction as a series of eight elements. Let this feature be f(i, k) (i = 1, 2, ..., 8; k = 1, 2, 3, 4), where f(i, 1) are the elements of the horizontal series, f(i, 2) the elements of the upward-to-the-right series, f(i, 3) the elements of the vertical series, and f(i, 4) the elements of the upward-to-the-left series. Let the corresponding partial regions of two character patterns F and G (for example, an input character pattern and a reference character pattern) be f(i, k; m, n) and g(j, k; m, n) (i, j = 1, 2, ..., 8; m and n are indices indicating the partial regions as shown in Fig. 3, with m = 1, ..., M and n = 1, ..., N). For the dissimilarity between two such series, the DP matching method introduced in the literature (Journal of the Acoustical Society of Japan, Vol. 27, No. 9, pp. 483-500, 1971, Sakoe and Chiba: "Continuous Word Recognition Based on Time Normalization of Speech Using Dynamic Programming") is known to be an effective non-linear matching method. The DP matching method can be realized by the computation given by recurrence formulas (1), (2), and (3) below.

d(i, j, k; m, n) = |f(i, k; m, n) - g(j, k; m, n)|   (1)

D(i, j, k; m, n) = d(i, j, k; m, n) + min{ D(i-1, j, k; m, n), D(i-1, j-1, k; m, n), D(i, j-1, k; m, n) }   (2)

D(k; m, n) = D(8, 8, k; m, n)   (3)

Here, D(k; m, n) is the dissimilarity of the series patterns in direction k in the partial region (m, n). The dissimilarity D(m, n) of the partial region (m, n) is determined, as the sum of the dissimilarities of the series patterns in the respective directions, by formula (4):

D(m, n) = Σ[k=1..4] D(k; m, n)   (4)

The dissimilarity D of the whole pattern is obtained, as shown in formula (5), by summing the dissimilarities of all the partial regions:

D = Σ[m=1..M] Σ[n=1..N] D(m, n)   (5)
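The following sketch (Python, for illustration only; it is not part of the patent) computes the dissimilarity of formulas (1) to (3) for one direction k in one partial region (m, n), that is, DP matching between the two eight-element series f(., k; m, n) and g(., k; m, n). Formula (2) is partly illegible in the source text, so the minimum over the three predecessor cells is an assumption based on the standard DP matching recurrence of Sakoe and Chiba.

    def dp_series_dissimilarity(f_series, g_series):
        # f_series, g_series: scalar series (eight elements each in the text above).
        I, J = len(f_series), len(g_series)
        INF = float("inf")
        D = [[INF] * (J + 1) for _ in range(I + 1)]
        D[0][0] = 0.0
        for i in range(1, I + 1):
            for j in range(1, J + 1):
                d = abs(f_series[i - 1] - g_series[j - 1])                    # formula (1)
                D[i][j] = d + min(D[i - 1][j], D[i - 1][j - 1], D[i][j - 1])  # formula (2), min term assumed
        return D[I][J]                                                        # formula (3): D(k; m, n)

    print(dp_series_dissimilarity([0, 1, 3, 4, 4, 2, 1, 0], [0, 0, 2, 4, 4, 3, 1, 0]))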

By carrying out the above calculations, the dissimilarity between the input character pattern and the reference pattern of each character category to be read is obtained, and classification processing is realized by ranking the reference patterns in ascending order of dissimilarity.

Although four directions are used in this description, the present invention is not limited to this number of directions.
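Continuing the illustrative sketch above (again an assumption-laden example rather than the patent's implementation; it reuses dp_series_dissimilarity from the previous sketch, and the dictionary layout and names are invented for the example), formulas (4) and (5) and the ranking step can be written as:

    def pattern_dissimilarity(feature, reference):
        # feature, reference: dict mapping (m, n, k) to the scalar series of that
        # partial region and direction. Summing over k gives formula (4); summing
        # over (m, n) as well gives the whole-pattern dissimilarity of formula (5).
        return sum(dp_series_dissimilarity(feature[key], reference[key]) for key in feature)

    def classify(feature, references):
        # references: dict mapping each character category to a reference pattern in
        # the same format as `feature`. Categories are returned in ascending order of
        # dissimilarity; the first entry is the classification result.
        scored = sorted((pattern_dissimilarity(feature, ref), ch) for ch, ref in references.items())
        return [ch for _, ch in scored]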

(Embodiment) Fig. 1 is a block diagram showing an embodiment of the configuration of the present invention. The character pattern storage unit 1 stores the input character pattern and may be an ordinary storage means. The feature extraction unit 2 reads the character pattern signal from the character pattern storage unit 1 and outputs, as a signal, a feature pattern in which the feature of each partial region obtained by dividing the character pattern is described as feature series in a plurality of directions. Its details are explained with reference to Fig. 4.

The direction distribution extraction unit 22 reads the character pattern signal from the character pattern storage unit 1, extracts the distribution pattern of the directional component for each direction, and outputs a direction distribution pattern for each direction; direction extraction algorithms are given in the literature cited above, and the unit can easily be realized with conventional techniques using ordinary logic elements, arithmetic elements, storage elements, and the like. The direction distribution pattern storage unit 23 stores the direction distribution patterns of the respective directions output from the direction distribution extraction unit 22 and may be an ordinary storage means. The projection region storage unit 24 stores the indices of the partial regions, the indices of the directions, and the position coordinates of the projection regions for which partial projections are obtained for each direction of each partial region, and may be an ordinary storage means. The partial projection extraction unit 25 sequentially reads the partial-region indices, the direction indices, and the projection-region position coordinates from the projection region storage unit 24, reads the corresponding direction distribution pattern from the direction distribution pattern storage unit 23, and obtains the partial projections, thereby outputting a feature pattern described as projection information for each direction in each partial region; it can easily be realized with conventional techniques using ordinary logic elements, arithmetic elements, storage elements, and the like.

The feature pattern storage unit 3 stores the feature pattern signal output from the feature extraction unit 2 and may be an ordinary storage means. The reference pattern storage unit 4 stores, as reference patterns, feature patterns of each character category to be read, described in the same format as in the feature pattern storage unit 3, and may be an ordinary storage means. The classification processing unit 5 reads the feature pattern signal from the feature pattern storage unit 3, reads the reference pattern signal of each character category to be read from the reference pattern storage unit 4, and, as explained in the section on operation, obtains the dissimilarity of the feature series of each direction in each partial region by DP matching, obtains the dissimilarity of each partial region as the sum of the dissimilarities over the directions, determines the dissimilarity between the input character pattern and each reference pattern as the sum of the dissimilarities over the partial regions, determines the classification result by ranking the dissimilarities, and outputs a classification result signal; this can easily be realized with techniques applied in conventional pattern recognition devices.
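For concreteness, the sketch below outlines the kind of computation the partial projection extraction unit 25 performs. It is an illustration rather than the patent's implementation, and it assumes four direction-distribution planes, an eight-element projection series, and horizontal or vertical scanning only (the diagonal scanning directions of Fig. 2 would require projections along rotated axes).

    import numpy as np

    def partial_projection(region, axis, bins=8):
        # Project one partial region of a direction-distribution pattern along the
        # scanning axis and resample the profile into `bins` scalar values.
        profile = region.sum(axis=axis)
        idx = np.linspace(0, len(profile), bins + 1).astype(int)
        return np.array([profile[idx[b]:idx[b + 1]].sum() for b in range(bins)])

    def extract_feature_pattern(direction_planes, grid=(4, 4), bins=8):
        # direction_planes: (K, H, W) direction-distribution patterns from unit 22.
        # Returns a dict (m, n, k) -> series of `bins` scalars, the feature-pattern
        # format assumed by the classification sketch above.
        K, H, W = direction_planes.shape
        rows, cols = grid
        scan_axis = {k: k % 2 for k in range(K)}  # assumed scanning axis per direction
        feature = {}
        for m in range(rows):
            for n in range(cols):
                r0, r1 = m * H // rows, (m + 1) * H // rows
                c0, c1 = n * W // cols, (n + 1) * W // cols
                for k in range(K):
                    feature[(m, n, k)] = partial_projection(
                        direction_planes[k, r0:r1, c0:c1], scan_axis[k], bins)
        return feature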

As described above, according to the present invention, by applying non-linear matching to the feature series of each direction in each partial region of the character pattern, a dissimilarity can be obtained in which the input character pattern and the reference pattern of each character category are brought into local correspondence, which greatly contributes to improving character recognition performance.

[Brief Description of the Drawings]

Fig. 1 is a block diagram showing an embodiment of the configuration of the present invention; Fig. 2 is a diagram explaining that the feature of a partial region is described as series in the horizontal, upward-to-the-right, vertical, and upward-to-the-left directions; Fig. 3 is a diagram showing an example in which the character pattern region is divided into a plurality of partial regions; and Fig. 4 is a diagram showing an example of the configuration of the feature extraction unit 2.

In the drawings, 1 denotes the character pattern storage unit, 2 the feature extraction unit, 3 the feature pattern storage unit, 4 the reference pattern storage unit, and 5 the classification processing unit.

Claims (1)

[Claims] 1. A character recognition system comprising: a character pattern storage unit that stores a character pattern given as a two-dimensional lattice array; a feature extraction unit that reads the character pattern from the character pattern storage unit, determines directional components, creates directional patterns, and creates a feature pattern by obtaining, in the partial regions of the directional patterns corresponding to the partial regions into which the character pattern is divided, partial projections along a scanning direction determined for each directional component, thereby describing the feature information of each partial region as a series of scalar quantities for each directional component; a feature pattern storage unit that stores the feature pattern obtained by the feature extraction unit; a reference pattern storage unit that stores a reference pattern for each character category in the same description format as the feature pattern in the feature pattern storage unit; and a classification processing unit that reads the feature pattern from the feature pattern storage unit, reads the reference pattern for each character category from the reference pattern storage unit, calculates the dissimilarity of each partial region between the feature pattern and each reference pattern by performing non-linear matching between each scalar-quantity series of the feature pattern and the corresponding scalar-quantity series of each reference pattern, calculates the dissimilarity between the feature pattern and each reference pattern from the dissimilarities of the partial regions, ranks the dissimilarities, and outputs a classification result.
JP63237065A 1988-09-20 1988-09-20 Character recognition system Pending JPH0283688A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP63237065A JPH0283688A (en) 1988-09-20 1988-09-20 Character recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP63237065A JPH0283688A (en) 1988-09-20 1988-09-20 Character recognition system

Publications (1)

Publication Number Publication Date
JPH0283688A true JPH0283688A (en) 1990-03-23

Family

ID=17009900

Family Applications (1)

Application Number Title Priority Date Filing Date
JP63237065A Pending JPH0283688A (en) 1988-09-20 1988-09-20 Character recognition system

Country Status (1)

Country Link
JP (1) JPH0283688A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120141030A1 (en) * 2010-12-01 2012-06-07 Institute For Information Industry Code Recognition Method, Device and Computer Readable Storage Medium for Storing Code Recognition Method
US8965128B2 (en) * 2010-12-01 2015-02-24 Institute For Information Industry Code recognition method, device and computer readable storage medium for storing code recognition method

Similar Documents

Publication Publication Date Title
KR101991763B1 (en) Dense searching method and image processing device
Grenander et al. Pattern theory: from representation to inference
US10755145B2 (en) 3D spatial transformer network
US6400853B1 (en) Image retrieval apparatus and method
JP3400151B2 (en) Character string region extraction apparatus and method
CN102110284B (en) Information processing apparatus and information processing method
Wang et al. Efficient Euclidean distance transform algorithm of binary images in arbitrary dimensions
EP3582188B1 (en) Image processing device, image processing method, image processing program, and recording medium storing program
CN112149694B (en) Image processing method, system, storage medium and terminal based on convolutional neural network pooling module
EP1760636B1 (en) Ridge direction extraction device, ridge direction extraction method, ridge direction extraction program
KR19980070101A (en) Method and apparatus for deriving a coupling rule between data, and method and apparatus for extracting orthogonal convex region
JPH0981730A (en) Method and device for pattern recognition and computer controller
CN109785283B (en) Texture feature matching method and device for fabric segmentation
JP2006099602A (en) Image construction method, fingerprint image construction apparatus and program
Dyshkant An algorithm for calculating the similarity measures of surfaces represented as point clouds
US20090129699A1 (en) Image processing system
JPH0283688A (en) Character recognition system
CN112749576B (en) Image recognition method and device, computing equipment and computer storage medium
CN113158970B (en) Action identification method and system based on fast and slow dual-flow graph convolutional neural network
CN113486848A (en) Document table identification method, device, equipment and storage medium
JP4394926B2 (en) Image processing method and apparatus, program, and recording medium
CN111325194A (en) Character recognition method, device and equipment and storage medium
JPH0283689A (en) Character recognition system
WO2022196060A1 (en) Information processing device, information processing method, and non-transitory computer-readable medium
JP7327656B2 (en) Image processing device, image processing method, and program