JPS62130481A - Character recognition system - Google Patents

Character recognition system

Info

Publication number
JPS62130481A
JPS62130481A JP60270214A JP27021485A
Authority
JP
Japan
Prior art keywords
mapping
feature
discrimination
category
category name
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP60270214A
Other languages
Japanese (ja)
Inventor
Hiroyuki Kami
上 博行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to JP60270214A priority Critical patent/JPS62130481A/en
Publication of JPS62130481A publication Critical patent/JPS62130481A/en
Pending legal-status Critical Current

Landscapes

  • Character Discrimination (AREA)

Abstract

PURPOSE: To reduce the number of matching operations, and thereby the loss of recognition speed caused by an increase in the number of character fonts, by transforming a single kind of feature with plural mapping transformations and using the resulting transformed features for matching.

CONSTITUTION: The output of a photoelectric transducing part 10 is fed to a feature extraction part 20, which outputs a feature value. A discrimination part 30 consists of a discriminant-mapping storage part 31, a feature conversion part 32, a dictionary storage part 33, and a distance calculation part 34, and outputs the category name of the mapping value in the dictionary storage part 33 that gives the minimum distance to the mapping value of the input pattern. A decision control part 40 selects among discrimination parts 51-5n based on the category name from the discrimination part 30, and a valid category is output only from the selected discrimination part. A decision result editing part 60 edits the category names output from the discrimination part 50.

Description

[Detailed Description of the Invention]

(Industrial Application Field) The present invention relates to a character recognition system suited to cases where the recognition targets are characters in multiple fonts and recognition speed is a concern.

(Prior Art and Its Problems) Linear discriminant analysis is a branch of statistical pattern recognition theory. Feature values extracted from the standard patterns of the recognition-target categories are subjected to discriminant analysis; the mean of the mapped values of each category on each axis of the resulting discriminant mapping (called the mapping mean) is taken as an element of that category's recognition dictionary, and a dictionary holding as many mapping means as there are mapping axes can then be used for recognition.

As described on pages 34-36 of "Pattern Recognition and Shape Processing" edited by Makoto Nagao (Iwanami Shoten, March 1983), linear discriminant analysis extends Fisher's two-class discriminant function to the multi-class case, and can be regarded as a linear mapping that reduces the dimensionality of the set of discriminant functions while at the same time emphasizing the separation between classes.

With K target categories and M features extracted from the standard patterns, obtaining the linear discriminant mapping of the M-dimensional feature values from the covariance matrices amounts to solving the following eigenvalue problem.

Σ_B A = Σ_W A Λ

where
Σ_B: between-category covariance matrix (M × M)
Σ_W: within-category covariance matrix (M × M)
Λ: eigenvalue matrix (N × N)
A: eigenvector matrix (N × M)
N ≤ min(K − 1, M)

The row vectors of the resulting eigenvector matrix become the axes of the discriminant mapping.

In recognition by discriminant analysis, as described above, the mapping means obtained by mapping the standard patterns of each category serve as the recognition dictionary, and the dictionary category name obtained by linearly mapping the feature values extracted from an unknown pattern is taken as the matching result.

A method of applying discriminant analysis to character recognition is described by Nobuyuki Otsu, "Mathematical Studies on Feature Extraction in Pattern Recognition," Electrotechnical Laboratory Research Report No. 818, July 1981, pages 188-191, which reports the results of a single discriminant analysis applied to characters of a single font.

When the recognition targets span multiple fonts, the mapped values of the feature values from the standard patterns become dispersed, and with a method that uses their mean as the dictionary, recognition performance degrades for patterns that fall close to a different category.

When characters in multiple fonts are to be recognized, the simplest way to recognize them without loss of accuracy is to prepare a recognition dictionary for every category of every font. This method, however, has the problem that recognition speed drops as the number of target fonts increases. Since recognition can be sped up by reducing the number of dictionaries used for matching and the number of distance or similarity computations per dictionary, the following well-known method is used: the first stage uses a dictionary built from a small number of features that are insensitive to font, and the next stage uses dictionaries that distinguish the categories confused in the first stage; candidate categories are obtained by matching against the first-stage dictionary, and the next stage matches only against the dictionaries of the candidate categories.

Moreover, features of different natures are generally used in the first and second stages. When there are multiple feature extraction mechanisms, how to combine the features becomes a problem.

Accordingly, an object of the present invention is to provide a character recognition system that solves the above problems by performing stepwise matching using only a single feature.

(Means for Solving the Problems) To solve the above problems, the present invention provides a character recognition system that determines an input pattern by matching transformed feature values, obtained by applying a mapping transformation to the feature values of the input pattern, against the transformed feature values of recognition-target standard patterns that have been transformed and stored in advance. The mapping transformations are performed with plural vectors that are the axes of a discriminant mapping obtained by discriminant analysis of the feature values; a category name is obtained by matching with the transformed feature values produced by the first-stage mapping transformation; a second-stage mapping transformation is selected according to the obtained category name; a category name is obtained by matching the feature values of the input pattern with the transformed feature values produced by the selected mapping transformation; and the obtained category name is taken as the category name of the input pattern.

(Operation) In the present invention, a single kind of feature is transformed by plural mapping transformations and the resulting transformed features are used for matching, so only one feature extraction mechanism is needed. Furthermore, as described later, by taking the axes of the mapping transformations to be the axes of a discriminant mapping obtained by discriminant analysis, no per-font dictionary is required; the number of matching operations is therefore small, and recognition speed degrades little as the number of character fonts grows.

(Embodiment) An embodiment of the present invention is described in detail below with reference to the drawings.

FIG. 1 is a block diagram of an embodiment of the present invention; the recognition-target patterns are character patterns.

The photoelectric conversion unit 10 scans a form on which characters are printed, converts it into binary values ("1" for black points, "0" for white points), cuts the resulting binary image into individual characters, and outputs them one by one. The feature extraction unit 20 aligns the binary pattern output by the photoelectric conversion unit 10 and, taking as a feature element the sum of products of the binary pattern and a two-dimensional weight mask at a predetermined position, outputs the sum-of-products values at plural positions as the feature value. The output feature therefore corresponds to a blurred version of the binary pattern (a blur pattern).

For example, with a mask M of size 5×5 and a binary pattern F(i, j), the blur pattern B(k, l) is expressed by:

B(k, l) = Σ_{i=1}^{5} Σ_{j=1}^{5} M(i, j) × F(k + i − 3, l + j − 3)

Arranging the blur pattern values B(k, l) in a fixed order yields the feature value, which can be expressed in vector form.
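A short sketch of this feature extraction in Python (the mask weights, the sampling positions, and the zero-padding at the pattern border are illustrative assumptions; the patent does not fix them here):

```python
import numpy as np

def blur_features(F, mask, positions):
    """Sum-of-products of binary pattern F with a 2-D weight mask,
    evaluated at the given (k, l) sample positions; the results,
    taken in a fixed order, form the blur-pattern feature vector."""
    m = mask.shape[0] // 2        # mask assumed square, odd-sized (e.g. 5x5)
    Fp = np.pad(F, m)             # zero-pad so border positions are defined
    feats = []
    for k, l in positions:
        # window of Fp centered on original position (k, l)
        window = Fp[k:k + mask.shape[0], l:l + mask.shape[1]]
        feats.append(float((window * mask).sum()))
    return np.array(feats)
```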

The discrimination unit 30 consists of a discriminant-mapping storage unit 31, a feature conversion unit 32, a dictionary storage unit 33, and a distance calculation unit 34.

The discriminant-mapping storage unit 31 stores the mapping vectors corresponding to the axes of the mapping transformation; the feature conversion unit 32 transforms the feature value output by the feature extraction unit 20 with the mapping vectors of the discriminant-mapping storage unit 31, and outputs the mapping value of the input pattern.

With the feature value denoted F and the axes of the mapping denoted H_i, the mapping value takes the form of a vector whose dimensionality equals the number of mapping axes:

F = (F_1, F_2, …, F_M)^t
H_i = (H_i1, H_i2, …, H_iM)^t
K = (K_1, K_2, …, K_N)^t
K_i = [H_i, F] = H_i1 × F_1 + H_i2 × F_2 + … + H_iM × F_M

The dictionary storage unit 33 stores, as pairs, the mapping values obtained from the recognition-target standard patterns and their category names. The distance calculation unit 34 computes the distance between the mapping value of the input pattern output by the feature conversion unit 32 and the mapping values of the recognition-target categories in the dictionary storage unit 33, and outputs the category name of the mapping value giving the minimum distance.

The distance S_12 between mapping values K^1 and K^2 is computed by:

S_12 = [K^1 − K^2, K^1 − K^2]

where [A, B] denotes the inner product of vectors A and B.
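Combining the mapping transformation and the minimum-distance rule above, a hypothetical helper (the function name and the dictionary layout are assumptions for illustration) might look like:

```python
import numpy as np

def match_category(F, H, dictionary):
    """Map feature vector F with axes H (rows, so K_i = [H_i, F]) and
    return the dictionary category whose stored mapping value minimizes
    S = [K - K', K - K'], the squared distance via inner product."""
    K = H @ F                                   # mapping value of the input
    best, best_d = None, float("inf")
    for name, Kc in dictionary.items():
        d = float(np.inner(K - Kc, K - Kc))     # S_12 = [K1 - K2, K1 - K2]
        if d < best_d:
            best, best_d = name, d
    return best
```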

Units 51 to 5n form the second-stage discrimination unit 50. The decision control unit 40 selects, based on the category name output by the discrimination unit 30, which of the discrimination units 51 to 5n to drive. Each of the units 51 to 5n has the same configuration as the discrimination unit 30 described above (a discriminant-mapping storage unit, a feature conversion unit, a dictionary storage unit, and a distance calculation unit) and performs the same processing. Accordingly, a valid category name is output only from the discrimination unit selected by the decision control unit 40 among units 51 to 5n. The decision result editing unit 60 edits the category names output from the discrimination unit 50: if the category names from the discrimination unit 50 are of one kind, it outputs that category name as the recognition result of the input pattern; if there are plural kinds, it outputs a reject category name indicating that no decision could be made.
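The two-stage control flow just described can be sketched as follows (all names are illustrative assumptions; the stage-1 and stage-2 matchers are passed in as callables, and the "REJECT" label stands in for the patent's reject category name):

```python
def recognize(F, stage1, stage2_by_group, group_of):
    """Stage-1 matching yields a category name; if that name belongs to a
    confusable group, the group's stage-2 discriminators decide.  One
    agreed name is accepted; disagreement is edited into a reject."""
    c1 = stage1(F)                       # first-stage category name
    g = group_of.get(c1)
    if g is None:
        return c1                        # not confusable: accept stage 1
    results = {d(F) for d in stage2_by_group[g]}
    return results.pop() if len(results) == 1 else "REJECT"
```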

FIG. 2 is a flowchart showing one method of creating the dictionaries stored in the discrimination units 30 and 51 to 5n. Features are extracted from the standard binary patterns of the recognition-target categories, and the feature value of each pattern is obtained. Next, from the obtained feature values, the F-ratio (the ratio of the between-category variance to the within-category variance) is computed for each feature element, and either a fixed number of features in descending order of F-ratio, or the features whose F-ratio is at or above a fixed value, are selected. Discriminant analysis is applied to the feature values of only the selected feature elements to obtain the axes of the mapping. The feature values of each category are then transformed onto the obtained mapping axes, and the mapping mean is obtained by averaging the transformed values. The category names and the mapping means obtained as described above constitute the dictionary.
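The F-ratio pre-selection step in this flow can be sketched as follows (the function names and the small guard against zero within-category variance are assumptions):

```python
import numpy as np

def f_ratio(X, y):
    """Per-feature-element ratio of between-category variance to
    within-category variance, used to pre-select feature elements
    before the discriminant analysis."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)   # guard: assumption

def select_features(X, y, top=64):
    """Keep the `top` feature elements with the largest F-ratio."""
    idx = np.argsort(f_ratio(X, y))[::-1][:top]
    return np.sort(idx)                          # fixed feature order
```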

Each of the discrimination units 51 to 5n within the discrimination unit 50 has the function of discriminating within a set of categories that the first-stage discrimination unit 30 alone might confuse.

FIG. 3 is a flowchart illustrating one example of a procedure for classifying the standard patterns of the recognition-target categories needed to create the dictionaries for the discrimination units 51 to 5n: the first-stage discrimination is applied to the standard patterns of each recognition-target category, a misread matrix is built from the resulting discrimination results, and classification is performed by grouping the category names that are mutually confused.
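Grouping mutually confused category names from the misread matrix amounts to merging linked pairs; a small union-find sketch (illustrative only, not taken from the patent) shows the idea:

```python
from collections import defaultdict

def confusable_groups(pairs):
    """Merge mutually-confused category-name pairs (edges of the misread
    matrix) into groups, joining transitively linked names."""
    parent = {}

    def find(c):
        parent.setdefault(c, c)
        while parent[c] != c:
            parent[c] = parent[parent[c]]   # path compression
            c = parent[c]
        return c

    for a, b in pairs:
        parent[find(a)] = find(b)           # union the two groups
    groups = defaultdict(set)
    for c in list(parent):
        groups[find(c)].add(c)
    return [sorted(g) for g in groups.values()]
```

Each resulting group would get its own second-stage discriminant mapping and dictionary.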

Although the feature value in the above description is a blur pattern, the present invention is not limited to methods using this feature.

(Effects of the Invention) As explained above, in the present invention a single kind of feature is transformed by plural mapping transformations and the resulting transformed feature values are used for matching, so only one feature extraction mechanism is needed; and since no per-font dictionary is required, recognition speed is improved compared with the conventional method of preparing and matching a dictionary for each character font.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing an embodiment of the character recognition system of the present invention; FIG. 2 is a flowchart showing one method of creating the first-stage dictionary; and FIG. 3 is a flowchart illustrating one example of a method of classifying the standard patterns needed to create the second-stage dictionaries. In the figures, 10 is a photoelectric conversion unit, 20 is a feature extraction unit, 30 is the first-stage discrimination unit, 31 is a discriminant-mapping storage unit, 32 is a feature conversion unit, 33 is a dictionary storage unit, 34 is a distance calculation unit, and 51 to 5n are the second-stage discrimination units.

Claims (1)

[Claims] A character recognition system that determines an input pattern by matching transformed feature values, obtained by applying a mapping transformation to feature values from the input pattern, against the transformed feature values of recognition-target standard patterns that have been transformed and stored in advance, characterized in that: the mapping transformations are performed with plural vectors that are the axes of a discriminant mapping obtained by discriminant analysis of the feature values; a category name is obtained by matching with the transformed feature values produced by the first-stage mapping transformation; a second-stage mapping transformation is selected according to the obtained category name; a category name is obtained by matching the feature values from the input pattern with the transformed feature values produced by the selected mapping transformation; and the obtained category name is taken as the category name of the input pattern.
JP60270214A 1985-11-30 1985-11-30 Character recognition system Pending JPS62130481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP60270214A JPS62130481A (en) 1985-11-30 1985-11-30 Character recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP60270214A JPS62130481A (en) 1985-11-30 1985-11-30 Character recognition system

Publications (1)

Publication Number Publication Date
JPS62130481A true JPS62130481A (en) 1987-06-12

Family

ID=17483128

Family Applications (1)

Application Number Title Priority Date Filing Date
JP60270214A Pending JPS62130481A (en) 1985-11-30 1985-11-30 Character recognition system

Country Status (1)

Country Link
JP (1) JPS62130481A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007066310A (en) * 2005-08-26 2007-03-15 Fujitsu Ltd Character string recognition program, method and device


Similar Documents

Publication Publication Date Title
US6272242B1 (en) Character recognition method and apparatus which groups similar character patterns
JP4311552B2 (en) Automatic document separation
Kanai et al. Automated evaluation of OCR zoning
US6021220A (en) System and method for pattern recognition
EP0436819B1 (en) Handwriting recognition employing pairwise discriminant measures
US5909510A (en) Method and apparatus for document classification from degraded images
US20020154815A1 (en) Character recognition device and a method therefore
JP2005242579A (en) Document processor, document processing method and document processing program
US6539115B2 (en) Pattern recognition device for performing classification using a candidate table and method thereof
Lerner et al. A classification-driven partially occluded object segmentation (CPOOS) method with application to chromosome analysis
JP3155616B2 (en) Character recognition method and device
US5689584A (en) Method of and apparatus for pattern recognition and method of creating pattern recognition dictionary
CN111950592B (en) Multi-modal emotion feature fusion method based on supervised least square multi-class kernel canonical correlation analysis
JPS62130481A (en) Character recognition system
Edan Cuneiform symbols recognition based on k-means and neural network
Hancherngchai et al. An individual local mean-based 2DPCA for face recognition under illumination effects
JPS62130482A (en) Character recognition system
JPS58169265A (en) Data conversion processing system
JP3180477B2 (en) Pattern recognition device
JP2965165B2 (en) Pattern recognition method and recognition dictionary creation method
JP2797721B2 (en) Character recognition device
JP2728117B2 (en) Character recognition device
JP2639314B2 (en) Character recognition method
JP2906743B2 (en) Character recognition device
KR950011065B1 (en) A character recognition method