JPH01161487A - Object recognizing method - Google Patents

Object recognizing method

Info

Publication number
JPH01161487A
JPH01161487A
Authority
JP
Japan
Prior art keywords
model
line
line segments
explanatory diagram
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP31852187A
Other languages
Japanese (ja)
Inventor
Shoji Shimomura
昭二 下村
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuji Electric Co Ltd
Original Assignee
Fuji Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuji Electric Co Ltd filed Critical Fuji Electric Co Ltd
Priority to JP31852187A priority Critical patent/JPH01161487A/en
Publication of JPH01161487A publication Critical patent/JPH01161487A/en
Pending legal-status Critical Current

Links

Landscapes

  • Image Analysis (AREA)

Abstract

PURPOSE: To attain recognition even when two vertices are missing, by performing recognition using the descriptive form and the connection relations of an object. CONSTITUTION: A polygonal object whose contour can be represented by straight-line elements is described as a set of line segments Line1 to Line12. Feature quantities such as segment length, the angle between segments, and the parallelism of segments are combined to detect, within the segment set of an unknown object, a segment configuration (substructure) forming part of a model in the model data, and to search for candidate models among the models. The substructure is then updated to narrow the candidates to a single model, reducing the matching processing between the model and the unknown object 10. Since the neighborhood of a vertex is often unstable, vertex data is not extracted, which makes recognition possible even when two or more vertices are hidden.

Description

[Detailed Description of the Invention]

[Industrial Field of Application]

This invention relates to an object recognition method in which an image of an object is captured by an imaging device and the object is identified using feature quantities obtained by image processing.

[Prior Art]

To recognize overlapping objects, the applicant previously proposed a matching method in which, for a hidden corner, two straight lines of the object contour are extended and their intersection is taken to be the hidden vertex, so that the entire object shape can be estimated (Japanese Patent Application No. 61-68608).

[Problems to Be Solved by the Invention]

However, such a method has the problem that it is applicable only when a single vertex of the unknown polygonal object is hidden.

Accordingly, the object of this invention is to provide an object recognition method capable of recognizing, in turn, each of a set of overlapping objects even when two or more vertices of a polygonal object are hidden by overlap with other objects.

[Means for Solving the Problems]

For each of a plurality of polygonal objects to be recognized, the contour is traced to detect straight-line portions, and shape features including at least the length of each line segment, the angles between line segments, and the parallelism of line segments are extracted so that the object shape can be described as a set of line segments; this is registered in memory in advance as model data. An unknown object is thereafter likewise described as a set of line segments by extracting its shape features and storing them in memory. A substructure formed from three or more line segments of the unknown object is created and used to search for candidate models among the plurality of models, and matching of each candidate model against the substructure is repeated until a match is obtained.

[Operation]

A polygonal object whose contour can be represented by straight-line elements is described as a set of line segments, and three feature quantities are used: (1) the length of a segment, (2) the angle between segments, and (3) the parallelism of segments. By combining these features, a segment configuration (substructure) that forms part of some model in the model data is found within the segment set of the unknown object, and candidate models are searched for among the models; the substructure is then updated so as to narrow the candidates to a single model, thereby reducing the matching processing between model and unknown object. In other words, because the neighborhood of a vertex is often unstable, vertex data is deliberately not extracted; by proceeding in this way, recognition remains possible even when two or more vertices are hidden.

[Embodiment]

Fig. 1 is an explanatory diagram of the first-stage processing according to this invention; Fig. 2, of the second-stage processing; Fig. 3, of the third-stage processing; Fig. 4, of the feature data reorganized after the first-stage processing; Fig. 5, of the feature data reorganized after the second-stage processing; Fig. 6, of an example of an unknown object; Fig. 6A, of the line segments extracted from Fig. 6; Fig. 6B, of the feature data of Fig. 6; Fig. 7, of the relationship between the shape of model I and its extracted line segments; Fig. 7A, of the feature data of model I; and Figs. 8 to 11, of the relationships between the shapes of models II to V, respectively, and their extracted line segments.

This invention recognizes mutually overlapping polygonal objects by the following procedure, using well-known image processing techniques.

Assume an unknown object such as that denoted by reference numeral 10 in Fig. 6. Its boundary with the background is contour-traced and represented as vectors having magnitude and direction, yielding the line segments Line1 to Line12 shown in Fig. 6A. For each segment, the following feature data are extracted and stored in a predetermined memory: the normal length ρ (the distance from the reference point P in Fig. 6A), the angle θ of the normal direction with respect to the reference axis co (see Fig. 6A), the start-point coordinates, the end-point coordinates, the length, the angle between adjacent segments (θn+1 − θn), the connection relations between segments, the sets of parallel lines (segments whose θ values are approximately equal), and the distance between each parallel pair (ρn+1 − ρn). Fig. 6B shows these data: part (a) lists the normal length, normal direction, start-point coordinates, end-point coordinates, and length of each segment; part (b), the connection-relation data; and part (c), the parallel-line sets and their distances.
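As an illustration, the (ρ, θ) parameters described above can be computed from a segment's endpoints. The sketch below is not from the patent: it assumes the reference point P is the coordinate origin and the reference axis co is the x-axis, and uses the standard normal form x·cosθ + y·sinθ = ρ of the segment's supporting line.

```python
import math

def segment_features(x1, y1, x2, y2):
    """Illustrative features for a contour line segment.

    rho    : normal length -- perpendicular distance of the supporting
             line from the reference point P (taken here as the origin)
    theta  : angle of the normal direction against the reference axis
    length : Euclidean length of the segment
    """
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    # Normal form of the supporting line: x*cos(theta) + y*sin(theta) = rho
    theta = math.atan2(dy, dx) + math.pi / 2.0   # normal is 90 deg off the direction
    rho = x1 * math.cos(theta) + y1 * math.sin(theta)
    if rho < 0:                                  # keep rho non-negative
        rho, theta = -rho, theta - math.pi
    return rho, theta % (2 * math.pi), length
```

Under this parameterization, two segments are parallel exactly when their θ values agree modulo π, which is how the parallel-line sets of Fig. 6B(c) could be formed.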

In recognizing an unknown object such as that of Fig. 6, the data of the objects to serve as models (model data) are extracted and stored in advance in the same manner as above. Here, for example, the data of models I to V shown in Figs. 7 to 11 are stored.

An example of the data of model I of Fig. 7 is shown in Fig. 7A; the data for models II to V are omitted.

The following description refers mainly to Fig. 1(a).

(1) The feature data of the unknown object (see Fig. 6B(a)) are searched for the longest line segment. Here, Line1 is selected.

(2) Referring to the connection-relation data of Fig. 6B(b), a segment connected to Line1 is searched for. Here, Line2 is selected.

(3) The angle T1 of the corner C1 formed by Line1 and Line2 is obtained, for example by computing θ2 − θ1.

(4) The model data are searched to see whether a corner of angle T1 exists. If it does, it is temporarily stored in a storage buffer (not shown).

(5) Referring to the connection-relation list of Fig. 6B(b), the segment connected to Line2 is searched for. Line3 is selected.

(6) The angle formed by Line2 and Line3 is obtained and searched for in the model data as in step (4). If, as happens here with Line2 and Line3, no such corner exists in the models, the segment search in this direction is terminated.

(7) Returning to Line1, segments are searched for in the opposite direction and steps (2) to (6) are carried out in the same way. As a result, Line9 is found.

(8) From the searched segments Line1, Line2 and Line9 and the temporarily stored corner data C1 and C2, the following substructure is created:

部分構造(Line2−cl−Linel−C’1−L
1ne9)[相]作成した部分構造からモデルの候補を
選択する。
Partial structure (Line2-cl-Line-C'1-L
1ne9) [Phase] Select model candidates from the created partial structure.

Here, models I, II and III are selected as candidates.
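To make step (9) concrete, here is a sketch, not taken from the patent, of how candidate selection from a substructure might work. It assumes each model is stored as a cyclic list of corner angles in degrees; the model angle values and the tolerance are hypothetical.

```python
# A substructure alternates segments and corners, e.g.
# ('Line2', 'C1', 'Line1', 'C2', 'Line9').  For candidate search only the
# corner angles matter, so the substructure's angle sequence is matched
# against each model's cyclic sequence of interior corner angles.
ANGLE_TOL = 5.0  # degrees; tolerance is an assumption, not from the patent

def find_candidates(sub_angles, models):
    """Return names of models whose cyclic corner-angle sequence
    contains sub_angles as a consecutive run (within tolerance)."""
    hits = []
    for name, angles in models.items():
        wrapped = angles + angles[:len(sub_angles) - 1]  # allow wrap-around
        for i in range(len(angles)):
            window = wrapped[i:i + len(sub_angles)]
            if all(abs(a - b) <= ANGLE_TOL for a, b in zip(sub_angles, window)):
                hits.append(name)
                break
    return hits

models = {            # hypothetical corner angles of models I-III
    "I":   [90.0, 90.0, 90.0, 90.0],
    "II":  [60.0, 60.0, 60.0],
    "III": [90.0, 45.0, 90.0, 135.0],
}
print(find_candidates([90.0, 90.0], models))  # models with two right angles in a row
```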

(10) The length of Line1 in the substructure is compared with the corresponding segment lengths of models I, II and III, and only those models within the allowable length range are retained. This narrows the model candidates; here, two candidates remain, model I among them.
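The length screening of step (10) could look like the following sketch (illustrative only; the segment lengths and the ±15% allowable range are assumed values, not taken from the patent):

```python
LENGTH_TOL = 0.15   # +/-15% allowable length range (assumed value)

def filter_by_length(seg_len, candidates, model_lengths):
    """Keep only candidate models that have some segment whose length
    lies within the allowable range of the observed segment length."""
    keep = []
    for name in candidates:
        for l in model_lengths[name]:
            if abs(l - seg_len) <= LENGTH_TOL * seg_len:
                keep.append(name)
                break
    return keep

candidates = ["I", "II", "III"]
model_lengths = {"I": [100.0, 50.0], "II": [30.0], "III": [95.0]}  # hypothetical
print(filter_by_length(100.0, candidates, model_lengths))
```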

(11) If the above processing does not narrow the candidates to one, segments parallel to each segment of the substructure are searched for in the parallel-line list of Fig. 6B(c). Line12 is found.

0部分構造(Line2−CI−Linel−c2−L
ine9)と、Line12O幾何学的矛盾を調べる。
0 partial structure (Line2-CI-Line-c2-L
ine9) and Line12O geometric contradiction is investigated.

つまり、二値画像についてLine120両側の画素値
を比較領域AI 、A2で比較し、線分のどちら側に物
体が存在するか調べる。
That is, the pixel values on both sides of the line 120 in the binary image are compared in the comparison areas AI and A2, and it is determined on which side of the line segment the object is present.
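The side-of-segment test in step (12) can be sketched as follows. This is an illustration under assumptions: the binary image is represented as a sparse dict of pixel coordinates, and the region size and offset are arbitrary choices, not values from the patent.

```python
import math

def object_side(img, x1, y1, x2, y2, offset=3, half=2):
    """Decide on which side of segment (x1,y1)-(x2,y2) the object lies
    in a binary image, by sampling square regions A1/A2 placed at the
    segment midpoint, offset along the two normal directions.

    img is a dict mapping (x, y) -> 0/1 purely for illustration; a real
    implementation would index a 2-D array.  Returns +1, -1, or 0 (tie).
    """
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    dx, dy = x2 - x1, y2 - y1
    n = math.hypot(dx, dy)
    nx, ny = -dy / n, dx / n                      # unit normal
    def region_sum(cx, cy):                       # sum of pixels in a square region
        return sum(img.get((int(cx) + i, int(cy) + j), 0)
                   for i in range(-half, half + 1)
                   for j in range(-half, half + 1))
    s1 = region_sum(mx + offset * nx, my + offset * ny)   # region A1
    s2 = region_sum(mx - offset * nx, my - offset * ny)   # region A2
    return (s1 > s2) - (s1 < s2)
```

A candidate parallel segment like Line12 would be rejected if the object lay on the wrong side relative to the rest of the substructure.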

If there is no contradiction, Line12 is added to the substructure. As a result, the substructure takes the configuration shown in Fig. 1(b).

(13) From the substructure data of Fig. 1(b), the candidate models are narrowed further. Model I becomes the only candidate.

(14) Model I is matched against the substructure and verified.

(15) If the verification confirms model I, the substructure segments Line1, Line2, Line9 and Line12 are deleted from the unknown object.

The result is shown in Fig. 2(a).

(16) For the remaining segments, the feature data table is rebuilt as shown in Fig. 4. An endpoint that has lost its connection information through segment deletion is connected to a nearby endpoint of another segment, under the condition that a connected pair must consist of one start point and one end point; if this condition is not satisfied, or if no endpoint of another segment lies nearby, no connection is made.
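The endpoint-reconnection rule of step (16) might be sketched like this (hypothetical data structures; the distance threshold is an assumed value):

```python
import math

RECONNECT_DIST = 5.0   # "near" threshold in pixels (assumed value)

def reconnect(open_starts, open_ends):
    """Pair dangling endpoints left by segment deletion.

    open_starts / open_ends: {segment_name: (x, y)} of start points whose
    predecessor, and end points whose successor, were deleted.  A pair is
    formed only between a start point and an end point (the condition in
    the text), and only if they lie within RECONNECT_DIST of each other.
    """
    pairs = []
    used = set()
    for s_name, (sx, sy) in open_starts.items():
        best, best_d = None, RECONNECT_DIST
        for e_name, (ex, ey) in open_ends.items():
            if e_name in used:
                continue
            d = math.hypot(sx - ex, sy - ey)
            if d <= best_d:
                best, best_d = e_name, d
        if best is not None:
            used.add(best)
            pairs.append((best, s_name))   # connect end of `best` to start of s_name
    return pairs
```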

(17) The processing of Fig. 2(a) extracts the substructure shown in Fig. 2(b). This is matched against the models; once the corresponding model is confirmed, Line3, Line4 and Line11 are deleted and the data of Figs. 5(a) to 5(c) are created.

(18) Finally, the processing of Fig. 3(a) extracts the substructure shown in Fig. 3(b). This is matched against the models; once model V is confirmed, no substructure remains, and the processing ends.

In the above processing, grayscale images can also be recognized by using adjacency relations in place of the connection relations.

[Effects of the Invention]

According to this invention, recognition is performed using the descriptive form of an object and its connection relations, so that recognition is possible even when two vertices are missing, with the advantage that the recognition rate is improved.

[Brief Description of the Drawings]

Fig. 1 is an explanatory diagram of the first-stage processing according to this invention; Fig. 2, of the second-stage processing; Fig. 3, of the third-stage processing; Fig. 4, of the feature data reorganized after the first-stage processing; Fig. 5, of the feature data reorganized after the second-stage processing; Fig. 6, of an example of an unknown object; Fig. 6A, of the line segments extracted from Fig. 6; Fig. 6B, of the feature data of Fig. 6; Fig. 7, of the relationship between the shape of model I and its extracted line segments; Fig. 7A, of the feature data of model I; and Figs. 8 to 11, of the relationships between the shapes of models II to V, respectively, and their extracted line segments.

Explanation of reference signs: 1 — model I; 2 — model II; 3 — model III; 4 — model IV; 5 — model V; 10 — unknown object; P — reference point; co — reference axis.

Agents: Patent Attorneys Akio Namiki and Matsuzaki

Claims (1)

[Claims]

1. An object recognition method characterized in that: for each of a plurality of polygonal objects to be recognized, the contour is traced to detect straight-line portions, and shape features including at least the length of each line segment, the angles between line segments, and the parallelism of line segments are extracted so that the object shape can be described as a set of line segments, this being registered in memory in advance as model data; an unknown object is thereafter likewise described as a set of line segments by extracting its shape features and storing them in memory; a substructure formed from three or more line segments of the unknown object is created and used to search for candidate models among the plurality of models; and the process of narrowing the candidate models by matching each candidate against the substructure is repeated until no further substructure can be created.
JP31852187A 1987-12-18 1987-12-18 Object recognizing method Pending JPH01161487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP31852187A JPH01161487A (en) 1987-12-18 1987-12-18 Object recognizing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP31852187A JPH01161487A (en) 1987-12-18 1987-12-18 Object recognizing method

Publications (1)

Publication Number Publication Date
JPH01161487A true JPH01161487A (en) 1989-06-26

Family

ID=18100038

Family Applications (1)

Application Number Title Priority Date Filing Date
JP31852187A Pending JPH01161487A (en) 1987-12-18 1987-12-18 Object recognizing method

Country Status (1)

Country Link
JP (1) JPH01161487A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4418217A1 (en) * 1993-05-26 1994-12-01 Matsushita Electric Works Ltd Shape recognition method
JPH07311783A (en) * 1994-05-16 1995-11-28 Kawasaki Heavy Ind Ltd Graphic characteristic inspection system
JP2012128666A (en) * 2010-12-15 2012-07-05 Fujitsu Ltd Arc detector, arc detection program and portable terminal device
JP2012212460A (en) * 2012-06-22 2012-11-01 Nintendo Co Ltd Image processing program, image processing apparatus, image processing system, and image processing method
JP2015069512A (en) * 2013-09-30 2015-04-13 株式会社Nttドコモ Information processing apparatus and information processing method
CN112981515A (en) * 2021-05-06 2021-06-18 四川英创力电子科技股份有限公司 Current regulation and control method
CN113627399A (en) * 2021-10-11 2021-11-09 北京世纪好未来教育科技有限公司 Topic processing method, device, equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60204086A (en) * 1984-03-28 1985-10-15 Fuji Electric Co Ltd Object discriminating device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60204086A (en) * 1984-03-28 1985-10-15 Fuji Electric Co Ltd Object discriminating device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4418217A1 (en) * 1993-05-26 1994-12-01 Matsushita Electric Works Ltd Shape recognition method
US5546476A (en) * 1993-05-26 1996-08-13 Matsushita Electric Works, Ltd. Shape recognition process
JPH07311783A (en) * 1994-05-16 1995-11-28 Kawasaki Heavy Ind Ltd Graphic characteristic inspection system
JP2012128666A (en) * 2010-12-15 2012-07-05 Fujitsu Ltd Arc detector, arc detection program and portable terminal device
JP2012212460A (en) * 2012-06-22 2012-11-01 Nintendo Co Ltd Image processing program, image processing apparatus, image processing system, and image processing method
JP2015069512A (en) * 2013-09-30 2015-04-13 株式会社Nttドコモ Information processing apparatus and information processing method
CN112981515A (en) * 2021-05-06 2021-06-18 四川英创力电子科技股份有限公司 Current regulation and control method
CN113627399A (en) * 2021-10-11 2021-11-09 北京世纪好未来教育科技有限公司 Topic processing method, device, equipment and storage medium
CN113627399B (en) * 2021-10-11 2022-02-08 北京世纪好未来教育科技有限公司 Topic processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
Tombre et al. Stable and robust vectorization: How to make the right choices
JP3914864B2 (en) Pattern recognition apparatus and method
JPH01161487A (en) Object recognizing method
JP2823882B2 (en) Line image vectorization method
JPH11134509A (en) Method for drawing recognizing process and method for construction drawing recognizing process
JP3130869B2 (en) Fingerprint image processing device, fingerprint image processing method, and recording medium
Qiu et al. Computer-assisted auto coloring by region matching
Su et al. Dimension recognition and geometry reconstruction in vectorization of engineering drawings
JP2846486B2 (en) Image input device
JP5051174B2 (en) Form dictionary generation device, form identification device, form dictionary generation method, and program
JPS61208184A (en) Compression system for pattern information amount
Bilodeau et al. Part segmentation of objects in real images
Wang et al. Calligraphy image processing with stroke extraction and representation
JP2006323511A (en) Symbol-identifying method and device thereof
JPH02264373A (en) Graphic recognizing device
JPH04112276A (en) Binary picture contour line chain encoding device
JPS62208181A (en) Graphic extracting system
Zeng et al. Shape completion for depth image via repeated objects registration
JPH06103366A (en) Fingerprint collating method/device
JPH0420221B2 (en)
JPS61286984A (en) Line graphic recognizing device
Prokaj et al. Scale space based grammar for hand detection
JP2595361B2 (en) Inside / outside judgment method for figures consisting of broken lines
Zeng et al. A fast restoring method for arbitrarily warped images of Chinese document
JP2650443B2 (en) Line figure vectorization method