JPH04238585A - Drawing interpretation processing system - Google Patents

Drawing interpretation processing system

Info

Publication number
JPH04238585A
JPH04238585A (application JP2162391A)
Authority
JP
Japan
Prior art keywords
graphic
interpretation
scene
identification
figures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2162391A
Other languages
Japanese (ja)
Inventor
Isamu Yoroisawa
鎧沢 勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP2162391A priority Critical patent/JPH04238585A/en
Publication of JPH04238585A publication Critical patent/JPH04238585A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

PURPOSE: To automate the entire process from the identification of individual figures to the interpretation of the whole drawing, by first separating overlapping figures in the image and then matching against a scene dictionary that stores the relations between figures and scenes.
CONSTITUTION: A figure separation unit 12 separates overlapping figures in the captured image by tracing line segments in the direction of smooth connection at each intersection. A figure identification unit 13 identifies the separated figures, and a drawing interpretation unit 14 interprets the content the drawing represents by matching the identification results against the scene dictionary. With this constitution, a binary image is obtained from a drawing input unit 11; any overlapping figures in it are separated by the figure separation unit 12; the figure identification unit 13 identifies each figure and determines its symbol; and the drawing interpretation unit 14 interprets the whole drawing on the basis of the symbols obtained.

Description

[Detailed Description of the Invention]

[0001]

[Field of Industrial Application] The present invention concerns the automation of drawing processing: it relates to a drawing interpretation processing system that automates not only the identification of drawn figures but the entire process up to interpreting the content of a drawing. Drawings come in many kinds, such as maps, blueprints, and comics, and the need to interpret them arises in many situations, such as sorting, diagnosis, and evaluation. At present this work is done almost entirely by hand; the present invention can be expected to reduce that labor.

[0002]

[Prior Art] Various methods have been proposed for automatically identifying individual elements in drawings (single figures and characters), but no method has been proposed for interpreting the content of an entire drawing from those elements. The scene interpretation processing methods proposed previously (Japanese Patent Applications No. 1-130185 and No. 1-191035) can be applied to the final stage of drawing interpretation, but they did not consider automating the separation and identification of figures.

[0003]

[Problem to Be Solved by the Invention] The prior art goes only as far as the automatic identification of the figures in a drawing; the final interpretation of the drawing has been left to humans. To save further labor, consistent automation that includes the interpretation step is required.

[0004] An object of the present invention is to achieve consistent automation from the automatic identification of the figures in a drawing through to the interpretation of the content of the entire drawing.

[0005]

[Means for Solving the Problem] FIG. 1 shows the basic configuration of the present invention. Reference numeral 11 denotes a drawing input unit that captures the target drawing as a binary image; 12 denotes a figure separation unit that separates the figures in the captured image in the same way a human perceives them, by tracing line segments in the direction of smooth connection at intersections; 13 denotes a figure identification unit that identifies the separated figures invariantly with respect to rotation and changes of size, using a combination of complex-logarithmic transformation and conversion to the frequency domain; and 14 denotes a drawing interpretation unit that interprets the content the drawing represents by matching the figure identification results against a scene dictionary storing the relations between figures and scenes. This configuration makes possible the full automation of drawing interpretation that is the object stated above.

[0006]

[Operation] A binary image is obtained from the drawing input unit 11; where figures in that binary image overlap, they are separated by the figure separation unit 12. The figure identification unit 13 then identifies each separated figure and determines its symbol. The drawing interpretation unit 14 interprets the whole drawing on the basis of the symbols obtained.
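The four-stage flow described above can be sketched as a simple function pipeline. This is an illustrative assumption, not the patent's implementation: the stages are passed in as functions, since the patent fixes only the data flow (image → binary image → figures → symbols → interpretation).

```python
# Hypothetical sketch of the overall flow of paragraph [0006]: the four
# units of FIG. 1 are modelled as caller-supplied functions.
def interpret_drawing(image, input_unit, separation_unit,
                      identification_unit, interpretation_unit):
    binary = input_unit(image)             # 11: capture as a binary image
    figures = separation_unit(binary)      # 12: separate overlapping figures
    symbols = [identification_unit(f)      # 13: decide a symbol per figure
               for f in figures]
    return interpretation_unit(symbols)    # 14: interpret the whole drawing
```

With trivial stand-ins for each stage (splitting a string into "figures" and upper-casing them as "symbols"), `interpret_drawing("desk+chair", str, lambda s: s.split("+"), str.upper, ", ".join)` returns `'DESK, CHAIR'`.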

[0007]

[Embodiment] FIGS. 2 to 5 show examples of the processing results of each part of FIG. 1 and illustrate the operation of the invention. FIG. 2 shows the result produced by the drawing input unit 11: the input drawing, in general a multi-valued image, is converted to a binary image. FIG. 3 shows the result produced by the figure separation unit 12: the figures that overlapped in FIG. 2 have been separated. FIG. 4 shows the result produced by the figure identification unit 13: each figure of FIG. 3 has been converted to a symbol. FIG. 5 shows the result produced by the drawing interpretation unit 14: the symbols of FIG. 4 are activated, and the interpretation whose total activation value is largest, "drawing of a study room", is output.

[0008] The drawing input unit 11 of FIG. 1 can easily be realized with a commercially available image scanner and a suitable binarization process.
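As a minimal sketch of that binarization step, the following assumes the scanned image is a 2-D list of gray levels in 0–255 and uses a fixed threshold; the patent leaves the binarization method open ("a suitable binarization process"), so the threshold value and dark-is-foreground convention are assumptions.

```python
# Hypothetical fixed-threshold binarization for the drawing input stage:
# pixels darker than the threshold (ink) become 1, the rest (paper) 0.
def binarize(gray, threshold=128):
    return [[1 if v < threshold else 0 for v in row] for row in gray]
```

For line drawings on clean paper a fixed threshold is usually adequate; adaptive thresholding would be needed for uneven lighting.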

[0009] When each figure is a simple closed loop (a single closed curve with no self-intersections), the figure separation unit 12 can be realized by the method shown in FIG. 6. First, in thinning step 31, 8-connected thinning is applied to the input line image. Next, in labeling step 32, labeling separates the image into connected figures. Then, in intersection detection step 33, intersections are extracted for each connected figure; an intersection can be detected as a point whose 8-connectivity number is 4. If a connected figure has no intersection, it is taken to be a simple closed loop already, and processing moves to the next connected figure. If it has intersections, it is judged to be a compound loop and step 34 is applied: in simple-closed-loop extraction 34, simple closed loops are extracted by tracing line segments in the direction of smooth connection at each intersection. When steps 33 and 34 have finished for all connected figures, relabeling step 35 separates the figures completely. Tracing line segments in the direction of smooth connection at intersections follows the "law of good continuation" of Gestalt psychology; that is, figures are separated in the same way a human perceives them. A more thorough method of human-like figure separation is proposed in reference (1) and may be used instead.
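Intersection detection (step 33) can be sketched as follows. This is an illustrative assumption: the thinned skeleton is represented as a set of (x, y) pixels, and the patent's "8-connectivity number is 4" criterion is computed as the number of background-to-foreground transitions walking once around a pixel's 8-neighbourhood (4 transitions at an X-crossing of two strokes, 2 on an ordinary line, 1 at an endpoint).

```python
# Hypothetical intersection detection on a thinned (one-pixel-wide) line
# image given as a set of (x, y) skeleton pixels.

# 8-neighbour offsets in cyclic (counter-clockwise) order
OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 1),
           (-1, 0), (-1, -1), (0, -1), (1, -1)]

def connectivity_number(skel, p):
    x, y = p
    ring = [(x + dx, y + dy) in skel for dx, dy in OFFSETS]
    # count 0 -> 1 transitions walking once around the neighbourhood
    return sum(1 for i in range(8) if not ring[i - 1] and ring[i])

def find_intersections(skel):
    # the patent's criterion: a crossing point has connectivity number 4
    return {p for p in skel if connectivity_number(skel, p) == 4}
```

For a plus-shaped skeleton only the centre pixel satisfies the criterion; the arm pixels count as ordinary line points.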

[0010] The figure identification unit 13 of FIG. 1 can be realized by applying the method shown in Japanese Patent Application No. 1-141232 (pattern identification method). FIG. 7 shows the flow of its processing. First, in centroid calculation 41, the centroid of the target figure is obtained. Next, in complex-logarithmic mapping 42, the coordinate system representing the figure is converted from Cartesian coordinates to log-polar coordinates centered on that centroid; this converts rotation and size changes of the figure into shifts of position. Next, the R-transform 43 converts the result to the frequency domain so that it becomes invariant to shifts of position; the R-transform is used because it is simpler to compute than the Fourier transform, and its details are given in reference (2). Finally, in dictionary matching 44, the target figure is identified by matching against the figure dictionary. The figure dictionary stores, for every anticipated figure, the result of centroid extraction, complex-logarithmic mapping about the centroid, and the R-transform.

[0011] The drawing interpretation unit 14 of FIG. 1 can be realized by applying the method shown in Japanese Patent Application No. 1-130185 (scene interpretation processing method). FIG. 8 shows the flow of its processing. First, in scene dictionary creation 51, the scene dictionary is created. The scene dictionary stores numerical values (characteristic values) representing the strength of the relation between each anticipated scene (drawing content) and each figure in the figure dictionary. Once created, it need not be created again; it is, however, also possible to update (learn) the dictionary incrementally by applying the method shown in Japanese Patent Application No. 1-191035 (scene interpretation processing method).

[0012] Next, in characteristic value addition 52, the characteristic values in the scene dictionary corresponding to the figure names output by the figure identification unit 13 are summed for each scene. Finally, in scene judgment 53, the scene with the largest sum from step 52 is judged to be the content of the input drawing and is output.
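Steps 52 and 53 amount to a weighted vote, which can be sketched as follows. The dictionary entries below are invented examples for illustration only; the patent does not list concrete scenes or characteristic values.

```python
# Hypothetical scene-dictionary matching: each identified figure symbol adds
# its characteristic value to every scene that references it (step 52), and
# the scene with the largest total is output as the interpretation (step 53).
SCENE_DICTIONARY = {
    "study room": {"desk": 0.9, "chair": 0.7, "bookshelf": 0.8},
    "kitchen":    {"sink": 0.9, "stove": 0.8, "chair": 0.3},
}

def interpret(symbols, scene_dictionary=SCENE_DICTIONARY):
    # step 52: sum the characteristic values of the identified figures
    scores = {scene: sum(values.get(s, 0.0) for s in symbols)
              for scene, values in scene_dictionary.items()}
    # step 53: the scene with the maximum total is the interpretation
    return max(scores, key=scores.get)
```

With these example weights, the symbols desk, chair, and bookshelf score 2.4 for "study room" against 0.3 for "kitchen", so "study room" is output — mirroring the "drawing of a study room" result of FIG. 5.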

[0013] References
(1) Akira Shimatani and Isamu Yoroisawa: "Quantitative Analysis of Figure Segmentation Characteristics", Technical Report of the Institute of Television Engineers of Japan, VAI90-18 (1990).
(2) Reitboeck, H. J. and Altmann, J.: "A model for size- and rotation-invariant pattern processing in the visual system", Biol. Cybern., Vol. 51, pp. 113-121 (1984).

[0014]

[Effects of the Invention] According to the present invention, the entire process from drawing input to drawing interpretation can be automated. By applying the invention, tasks such as sorting, diagnosing, and evaluating large numbers of drawings become easy to carry out, yielding substantial labor savings.

[0015] The present invention is applicable not only to drawings originally created as binary images but also to natural images with gray levels, provided an accurate contour image can be obtained, and its utility there is considerable.

[Brief Description of the Drawings]

[FIG. 1] Basic configuration diagram of the present invention.

[FIG. 2] Processing result of the drawing input unit.

[FIG. 3] Processing result of the figure separation unit.

[FIG. 4] Processing result of the figure identification unit.

[FIG. 5] Processing result of the drawing interpretation unit.

[FIG. 6] Flow of processing in the figure separation unit.

[FIG. 7] Flow of processing in the figure identification unit.

[FIG. 8] Flow of processing in the drawing interpretation unit.

[Explanation of Symbols]

11 Drawing input unit
12 Figure separation unit
13 Figure identification unit
14 Drawing interpretation unit

Claims (1)

[Claims]

[Claim 1] A drawing interpretation processing system that identifies the figures drawn in a drawing and interprets the content the drawing represents on the basis of the identification results, the system comprising: a drawing input unit; a figure separation unit that, when a plurality of figures overlap in the captured image, separates the overlap by tracing line segments in the direction of smooth connection at intersections; a figure identification unit that identifies the separated figures; and a drawing interpretation unit that interprets the content the drawing represents by matching the figure identification results against a scene dictionary that stores the relations between figures and scenes.
JP2162391A 1991-01-22 1991-01-22 Drawing interpretation processing system Pending JPH04238585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2162391A JPH04238585A (en) 1991-01-22 1991-01-22 Drawing interpretation processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2162391A JPH04238585A (en) 1991-01-22 1991-01-22 Drawing interpretation processing system

Publications (1)

Publication Number Publication Date
JPH04238585A true JPH04238585A (en) 1992-08-26

Family

ID=12060193

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2162391A Pending JPH04238585A (en) 1991-01-22 1991-01-22 Drawing interpretation processing system

Country Status (1)

Country Link
JP (1) JPH04238585A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002208008A (en) * 2001-01-05 2002-07-26 Olympus Optical Co Ltd Image evaluating device, its method and recording medium with image evaluation program recorded
US7535460B2 (en) 2004-06-03 2009-05-19 Nintendo Co., Ltd. Method and apparatus for identifying a graphic shape
US7771279B2 (en) 2004-02-23 2010-08-10 Nintendo Co. Ltd. Game program and game machine for game character and target image processing
US8558792B2 (en) 2005-04-07 2013-10-15 Nintendo Co., Ltd. Storage medium storing game program and game apparatus therefor


Similar Documents

Publication Publication Date Title
Tao et al. Detection of power line insulator defects using aerial images analyzed with convolutional neural networks
Sun et al. Traffic sign detection and recognition based on convolutional neural network
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN105574550A (en) Vehicle identification method and device
CN107679531A (en) Licence plate recognition method, device, equipment and storage medium based on deep learning
EP3822830B1 (en) Feature processing method and device for motion trajectory, and computer storage medium
CN105740910A (en) Vehicle object detection method and device
Gil-Jimenez et al. Traffic sign shape classification evaluation. Part II. FFT applied to the signature of blobs
CN105654066A (en) Vehicle identification method and device
EP3734496A1 (en) Image analysis method and apparatus, and electronic device and readable storage medium
CN108596102A (en) Indoor scene object segmentation grader building method based on RGB-D
CN108256454B (en) Training method based on CNN model, and face posture estimation method and device
CN110599463B (en) Tongue image detection and positioning algorithm based on lightweight cascade neural network
CN109447007A (en) A kind of tableau format completion algorithm based on table node identification
CN112364807B (en) Image recognition method, device, terminal equipment and computer readable storage medium
CN113762274B (en) Answer sheet target area detection method, system, storage medium and equipment
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
Zhou et al. Fast circle detection using spatial decomposition of Hough transform
CN117611994A (en) Remote sensing image target detection method based on attention mechanism weighting feature fusion
JPH04238585A (en) Drawing interpretation processing system
Khin et al. License plate detection of Myanmar vehicle images captured from the dissimilar environmental conditions
CN109241819A (en) Based on quickly multiple dimensioned and joint template matching multiple target pedestrian detection method
Oluchi et al. Development of a Nigeria vehicle license plate detection system
CN115393589A (en) Universal DCS process flow chart identification conversion method, system and medium
Jeong et al. Homogeneity patch search method for voting-based efficient vehicle color classification using front-of-vehicle image