JPS60171570A - System for setting decision area for classification - Google Patents

System for setting decision area for classification

Info

Publication number
JPS60171570A
JPS60171570A JP59027038A JP2703884A
Authority
JP
Japan
Prior art keywords
area
decision
file
decision area
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP59027038A
Other languages
Japanese (ja)
Inventor
Yoichi Seto
洋一 瀬戸
Nobuo Hamano
浜野 亘男
Fuminobu Furumura
文伸 古村
Tetsuo Yokoyama
哲夫 横山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP59027038A priority Critical patent/JPS60171570A/en
Publication of JPS60171570A publication Critical patent/JPS60171570A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To distinguish a specific object from other kinds of objects and extract it with high accuracy by deleting the overlapping parts of plural clusters in the target decision area, using both training data and test data. CONSTITUTION: First, an extraction target image 22 is displayed as shown at 21 on a graphic terminal 10, and internal organs such as the bone (A) 23, lung (B) 24, and stomach (C) 25 are designated on it. Second, a decision-area correction process 26 displays the feature quantities of the designated organs as shown at 34, where the region of substance A is 27, that of B is 28, and that of C is 29. The system reads out the decision area 40, already prepared from the training data and stored in a decision area file 30, and superimposes it as shown at 35. The decision area from which the regions other than the extraction target have been deleted becomes the corrected decision area 41, which is stored in the file 30. Third, a comparison extraction process 31 uses the area 41 in the file 30 to decide whether the feature quantities of each point of the image 22 fall within the decision area; points within the area are extracted as the target material.

Description

【発明の詳細な説明】 〔発明の利用分野〕 本発明は、画像解析技術に係り、特に多変量情報を用い
、高精度に目標物を抽出する方式に関する。
DETAILED DESCRIPTION OF THE INVENTION [Field of Application of the Invention] The present invention relates to an image analysis technique, and particularly to a method of extracting a target object with high precision using multivariate information.

〔発明の背景〕[Background of the invention]

従来、多変量情報、たとえばマルチスペクトル(多重波
長)画像より目標物を分類・抽出する方法として最尤法
が一般に用いられる。
Conventionally, the maximum likelihood method is generally used as a method for classifying and extracting targets from multivariate information, such as multispectral (multiple wavelength) images.

最尤法は、医療画像あるいはりモートセンシングの分野
でよく用いられる手法であり未知データの光度が最大の
クラスに属するもの同志を分類していく方法である。
The maximum likelihood method, often used in the fields of medical imaging and remote sensing, classifies unknown data into the class for which the likelihood is highest.

尤度は、データが正規分布する場合の確率密度であり、式(1)で表わされる。

  L(x) = (2π)^(-n/2) |S|^(-1/2) exp{-(1/2)(x-m)^T S^(-1) (x-m)}   …式(1)

ここで、S:分散−共分散行列、m:平均ベクトル、n:次元数である。第1図、第2図を用い最尤法の概念および短所を説明する。多変量情報として、マルチスペクトル画像を仮定する。
The likelihood is the probability density when the data are normally distributed, given by equation (1), where S is the variance-covariance matrix, m the mean vector, and n the dimensionality. The concept and shortcomings of the maximum likelihood method are explained using FIGS. 1 and 2, assuming a multispectral image as the multivariate information.

第1図は、対象物A、B、Cのスペクトル情報を示す。FIG. 1 shows spectral information of objects A, B, and C.

対象物A、B、Cの特定波長λ1.λ2における物理量
を特徴量空間(υ1 112座標系)で表わしたものが
第2図(a)である。すべての波長において物理量を特
徴量空間へ写像すると第2図(b)のように対象物の特
徴に従い特徴空間である領域をもつクラスタを形成する
FIG. 2(a) shows the physical quantities of objects A, B, and C at specific wavelengths λ1 and λ2, expressed in the feature space (v1-v2 coordinate system). When the physical quantities at all wavelengths are mapped into the feature space, clusters are formed that occupy regions of the feature space according to the characteristics of the objects, as shown in FIG. 2(b).

つまり最尤法はn次元のトルースデータからなる9個の
クラスの確率分布が与えられたとき、未知のデータが帰
属すべきクラスを0個のクラスのうち最も尤度の高いク
ラスに分類する方法である。
In other words, given the probability distributions of several classes formed from n-dimensional truth data, the maximum likelihood method assigns unknown data to whichever class has the highest likelihood.

この最尤法では第2図(b)のAとBのように複数クラ
スタの重なりがある場合、すなわち抽出対象物質と類似
特性を持つ他種物質がある場合分類精度が低ドするとい
う欠点があった。
This maximum likelihood method has the drawback that classification accuracy degrades when multiple clusters overlap, as with A and B in FIG. 2(b), that is, when other substances have characteristics similar to those of the target substance to be extracted.
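As a concrete illustration of this rule, the following sketch (a toy example of ours, not the patent's implementation) evaluates the normal density of equation (1) for each class and assigns a point to the class with the highest likelihood; the class means and covariances are invented values:

```python
import math

def log_likelihood(x, mean, cov):
    # Log of the bivariate normal density of equation (1):
    # L(x) = (2π)^(-n/2) |S|^(-1/2) exp{-(1/2)(x-m)^T S^(-1) (x-m)}
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    quad = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return -0.5 * (2 * math.log(2 * math.pi) + math.log(det) + quad)

def classify_ml(x, classes):
    # Assign x to the class with the highest likelihood
    return max(classes, key=lambda name: log_likelihood(x, *classes[name]))

# Toy clusters in a two-band feature space (invented values)
classes = {
    "A": ((1.0, 1.0), [[0.2, 0.0], [0.0, 0.2]]),
    "B": ((3.0, 3.0), [[0.2, 0.0], [0.0, 0.2]]),
}
print(classify_ml((0.9, 1.2), classes))  # → A
```

A point lying between the two clusters is still forced into one of them, which is exactly the ambiguity the patent's decision-area reduction addresses.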

〔発明の目的〕[Purpose of the invention]

本発明の目的は多変量情報を用い、高精度に特定目標物
を類似特性を持つ他種物と区別して抽出する方式を提供
することにある。
An object of the present invention is to provide a method for extracting a specific target object by distinguishing it from other types of objects having similar characteristics with high accuracy using multivariate information.

〔発明の概要〕[Summary of the invention]

上記目的を達成するため、本発明では第2図に示すよう
に特定目標の特徴空間でのクラスタの重なりにより重複
部分の分類にあいまいさを生じ、分類精度が悪くなるこ
とがら、あいまいさの生ずる重複部分は分類判定域から
削除するこ°とで分類精度を向上させる点に特徴がある
To achieve the above object, the present invention is characterized in that, since overlap of clusters in the feature space of the specific target (as shown in FIG. 2) makes classification of the overlapping portion ambiguous and degrades classification accuracy, the overlapping portion where the ambiguity arises is deleted from the classification decision area, thereby improving classification accuracy.

最尤法などの従来の分類手法は、この様な判定域削減を
学習データを使って行っているが1本願発明では更に、
分類対象データ(テストデータ)を使って分類判定域の
削減を行う、これは、テストデータ中に、属性が事前情
報により明らかである領域の存在する場合に可能な有効
方式である。
Conventional classification methods such as the maximum likelihood method perform this kind of decision-area reduction using only training data; the present invention additionally reduces the classification decision area using the data to be classified (test data). This is an effective method when the test data contain regions whose attributes are known from prior information.
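The decision-area reduction described here can be pictured as set subtraction over a discretized feature space. The sketch below is an assumption-laden illustration (the boolean-grid representation and all values are ours, not the patent's): cells shared with another cluster are the ambiguous overlap and are deleted from the target cluster's decision area.

```python
# Feature space discretized into a small grid; True marks cells
# belonging to a cluster's decision area.
def make_region(cells, size=5):
    grid = [[False] * size for _ in range(size)]
    for i, j in cells:
        grid[i][j] = True
    return grid

def subtract(target, *others):
    # Delete from the target decision area every cell that also
    # belongs to another cluster (the ambiguous overlap).
    size = len(target)
    return [[target[i][j] and not any(o[i][j] for o in others)
             for j in range(size)] for i in range(size)]

A = make_region([(1, 1), (1, 2), (2, 1), (2, 2)])   # target cluster
B = make_region([(2, 2), (2, 3), (3, 3)])           # overlapping cluster
reduced = subtract(A, B)
print(sum(map(sum, reduced)))   # → 3 (cell (2,2) was ambiguous and removed)
```

The same subtraction applies whether the overlapping regions come from training data or, as the invention adds, from designated regions of the test data.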

〔発明の実施例〕[Embodiments of the invention]

以下1本発明の一実施例を説明する。 An embodiment of the present invention will be described below.

実施例は、現在、医療画像の分野で注目されているNM
R(核磁気共鳴)スキャナー画像より特定の体内臓器、
例えば病変した肝臓を自動抽出する画像解析システムで
ある。
The embodiment is an image analysis system that automatically extracts a specific internal organ, for example a diseased liver, from NMR (nuclear magnetic resonance) scanner images, a modality currently attracting attention in the field of medical imaging.

NMRスキャナー画像の多変量情報としては、たて緩和
時間T1とよこ緩和時間T2の2つを用いる。
Two quantities are used as the multivariate information of the NMR scanner image: the longitudinal relaxation time T1 and the transverse relaxation time T2.

通常、スピンをもつ核磁気双極子は勝手な方向を向いて
いるが、磁場中に置くと双極子が磁力線方向に配向(磁
化ベクトル)する、励起磁場を取去ると磁化ベクトルは
最初の定常状態に戻る。この平衡値への回復は2通りあ
り緩和時間(T sおよびT2)で特徴づけられる。
Normally, nuclear magnetic dipoles with spin point in arbitrary directions, but when placed in a magnetic field the dipoles align along the magnetic field lines (magnetization vector). When the excitation field is removed, the magnetization vector returns to its initial steady state. This recovery to the equilibrium value occurs in two ways, characterized by the relaxation times T1 and T2.

緩和時間は1体内器官の種類あるいは、組織の変成度に
より異なる。
The relaxation time varies depending on the type of internal organ or the degree of tissue degeneration.

本システムの目的は、病変した肝臓を自動抽出すること
である。病変した肝臓の特徴量は、正常な肝臓あるいは
他の臓器と異なっている。
The purpose of this system is to automatically extract the diseased liver. The feature quantities of a diseased liver differ from those of a normal liver or of other organs.

しかしこの場合、特徴量空間における各対象物の分布特
性は明確でなく(分散が大きく)他のクラスタとの重な
りが多くあいまい領域が多い。
In this case, however, the distribution characteristics of each object in the feature space are not well defined (the variance is large), there is much overlap with other clusters, and there are many ambiguous regions.

第3図に沿い抽出処理フローを説明する。The extraction process flow will be explained with reference to FIG.

第3図は病変組織の抽出処理ブローである。FIG. 3 is a flowchart of the extraction process of diseased tissue.

(1)既知情報の入力: 病変肝臓等に関するたて/よこ緩和時間T1、T2の既知情報5を入力する。
(1) Input of known information: Known information 5 on the longitudinal/transverse relaxation times T1 and T2 for the diseased liver, etc., is input.

(2)トレーニングデータの作成: 入力した既知情報を特徴量空間に写像する(ステップ4
)。
(2) Creating training data: The input known information is mapped onto the feature space (step 4).

つまり第4図に示す既知情報を用いTl、T2から形成
される特徴鰍空間に病変肝臓の目標判定域13、および
他の情報、例えば正常肝ill 4゜他の臓器15の特
徴量を写像する。第4図に示す特徴量空間は2次元で表
わされているが一般には多次元特徴量空間(3次元以上
)を成す。(ここでは判定域削減法についての説明を容
易にするため2次元とした。第3の特徴量であるプロト
ン密度情報ρを用いず緩和時間Tl、T2のみを特徴量
として採用した。) まず最初に特徴量空間に写像された目標判定域13を抽
出する。
That is, using the known information shown in FIG. 4, the target decision area 13 of the diseased liver, along with other information such as the normal liver 14 and other organs 15, is mapped as feature quantities onto the feature space formed by T1 and T2. Although the feature space shown in FIG. 4 is drawn in two dimensions, in general it is a multidimensional feature space (three or more dimensions). (Two dimensions are used here to simplify the explanation of the decision-area reduction method; the third feature quantity, the proton density information ρ, is not used, and only the relaxation times T1 and T2 are adopted as feature quantities.) First, the target decision area 13 mapped onto the feature space is extracted.

この場合、第4図に示すように目標判定域13外の特徴
量域(正常肝臓14.他の臓器15重複部分はトレーニ
ングデータの精度を劣化させるので、判定域より取り除
く。その結果を第5図に示す。正常肝fi19および他
の臓器20の特徴量を判定域から削除したものが修正目
標判定域18である。が重複することが多々ある。) これはグラフィック端末6を用いて人間が指示する。
In this case, as shown in FIG. 4, the feature regions outside the target decision area 13 (the normal liver 14 and other organs 15, which often overlap it) degrade the accuracy of the training data, so they are removed from the decision area. The result is shown in FIG. 5: the corrected target decision area 18 is obtained by deleting the feature quantities of the normal liver 19 and the other organs 20 from the decision area. This deletion is indicated by a human operator using the graphic terminal 6.

(3)判定域データの格納: 上記のように人間が端末6より指示した目標判定域デー
タを判定域ファイル7に格納する。
(3) Storing judgment area data: As described above, the target judgment area data instructed by the human from the terminal 6 is stored in the judgment area file 7.

(4)目標物の分類・抽出処理: ここでは、2段階の処理を行なう(ステップ6)。(4) Target object classification/extraction processing: Two-stage processing is performed here (step 6).

第6図に目標物の分類抽出処理の詳細処理フローを示す
FIG. 6 shows a detailed processing flow of target object classification extraction processing.

第1に抽出対象画像22をグラフィック端末】0に21
のごとく表示する。この中から骨(A)23、肺(B)
24.胃(C)25等の臓器を指示する。
First, the extraction target image 22 is displayed on the graphic terminal 10 as shown at 21. From this display, organs such as the bone (A) 23, the lung (B) 24, and the stomach (C) 25 are designated.

第2に判定域修正処理26では、上記指示された臓器の
特徴量を34のごとく表示する。この中で物質Aの領域
は27.Bの領域は28.Cの領域は29となる。34
の表示に、すでに1〜し一二ングデータから作成し判定
域ファイル30に格納されている判定域40を読み出し
35のごとく重ねる。そして該判定域から抽出対象物以
外の上記A、B、Cの領域を削除したものを修正された
判定域4】とする。これを判定域ファイル30に格納す
る。上記処理は、トレーニングデータから作成した各物
質の標準的特徴量分布にもとづく判定域に、抽出対象画
像22から作成した該画像固有の特徴量分布に適合する
修正を加えるものである。
Second, in the decision-area correction process 26, the feature quantities of the designated organs are displayed as shown at 34: the region of substance A is 27, that of B is 28, and that of C is 29. The decision area 40, already created from the training data and stored in the decision area file 30, is read out and superimposed on display 34 as shown at 35. The regions of A, B, and C other than the extraction target are then deleted from this decision area, yielding the corrected decision area 41, which is stored in the decision area file 30. This process adapts the decision area, which is based on the standard feature distributions of each substance created from the training data, to the feature distribution specific to the extraction target image 22.
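The correction step above can be sketched as subtracting, from the stored decision area, the feature regions of the organs designated in the test image itself. The set-of-cells representation, the `quantize` helper, and all numeric values below are illustrative assumptions, not the patent's data structures:

```python
# Decision-area correction (process 26): the decision area built from
# training data is trimmed by the feature regions of organs designated
# in the extraction target image itself.

def quantize(t1, t2, step=10):
    # Bucket a (T1, T2) feature pair into a discrete feature-space cell
    return (round(t1 / step), round(t2 / step))

# Decision area 40 from the decision area file (toy values)
stored_area = {quantize(300, 80), quantize(310, 90), quantize(320, 100)}

# Feature cells measured from organs designated on the display (toy values)
designated = {
    "A (bone)": {quantize(320, 100)},      # overlaps the stored area
    "B (lung)": {quantize(500, 200)},
    "C (stomach)": {quantize(100, 40)},
}

# Corrected decision area 41: stored area minus all designated regions
corrected_area = stored_area - set().union(*designated.values())
print(len(corrected_area))   # → 2
```

Only the cell shared with a designated organ is removed; cells of designated organs that never overlapped the stored area leave it unchanged.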

第3に比較抽出処理3jでは判定域ファイル30中の目
標判定域41を用い、抽出対象画像22の各点の特徴量
が該判定域内か否かを判定し、域内であれば着目物質と
して抽出する。
Third, the comparison extraction process 31 uses the target decision area 41 in the decision area file 30 to judge whether the feature quantities of each point of the extraction target image 22 fall within the decision area; points within the area are extracted as the target substance.
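The comparison extraction step reduces to a per-pixel membership test. In this sketch (toy values; the quantized (T1, T2) pairs and the 2x2 image are invented), a pixel is extracted when its feature pair lies inside the corrected decision area:

```python
# Comparison extraction (process 31): a pixel is extracted when its
# (T1, T2) feature pair falls inside the corrected decision area 41.
decision_area = {(30, 8), (31, 9)}           # corrected area (toy cells)

image = [                                    # quantized (T1, T2) per pixel
    [(30, 8), (50, 20)],
    [(31, 9), (10, 4)],
]

# Boolean mask marking the extracted target substance
mask = [[pix in decision_area for pix in row] for row in image]
print(mask)   # → [[True, False], [True, False]]
```

The resulting mask plays the role of the extracted portion 33 that is finally displayed or written to tape.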

(5)抽出結果の出力: 抽出した結果は抽出部分33(病変した肝臓)を明示し、グラフィック端末10あるいは磁気テープ11に出力される。
(5) Output of extraction results: The extraction result clearly indicates the extracted portion 33 (the diseased liver) and is output to the graphic terminal 10 or the magnetic tape 11.

本システムによれば多数のN M R画像より自動的に
特定臓器の病変部分を未知画像中より抽出できる。
According to this system, a lesioned part of a specific organ can be automatically extracted from unknown images from a large number of NMR images.

以上のように本方式は、 (1)1−レーニングデータ(判定域)作成時にあいま
い領域を判定域より削除する。
As described above, in this method: (1) when the training data (decision area) are created, ambiguous regions are deleted from the decision area.

(2) (1)に加え抽出対象画像の特性がバラツキを
補償するため目標判定域外の特徴量の分布を特徴空間よ
り削除する。
(2) In addition to (1), in order to compensate for variations in the characteristics of images to be extracted, the distribution of feature amounts outside the target determination area is deleted from the feature space.

の処理によってあいまい領域を削除することにより、分
類抽出精度の向上化を図っている。
The classification extraction accuracy is improved by removing ambiguous areas through this process.

上記処理は、どららか1方のみの領域削除でも問題はな
い。
There is no problem in the above processing even if only one of these two deletions is performed.

以上の抽出処理は、多変量データがマルチスペク1−ラ
ル(多重波長)で構成される衛星画像を用いても有効で
ある。
The above extraction process is also effective using a satellite image in which multivariate data is composed of multispectral (multiple wavelength) data.

〔発明の効果〕〔Effect of the invention〕

本発明によれば、目標判定域内にあるあいまいな領域つ
まり複数のクラスタが重複する部分をトレーニングデー
タのみならずテストデータをも使って削除する判定域削
減法を用いることで、分類・抽出精度を向F化できる。
According to the present invention, classification and extraction accuracy can be improved by a decision-area reduction method that deletes the ambiguous regions within the target decision area, that is, portions where multiple clusters overlap, using not only the training data but also the test data.

【図面の簡単な説明】[Brief explanation of drawings]

第1図は対象物のスペクトル分布特性を示す図、第2図
は最尤法には特徴量空間への写像結果を示す図、第3図
は病変対象物の抽出処理フローを示す図、第4図はNM
R情報データの特徴量空間への写像図、第5図はあいま
い領域を削除した判定域を示す図、第6図は目標物の分
類・抽出処理の詳細フローを示す図である。 33・・・病変した肝臓の抽出部分。
FIG. 1 shows the spectral distribution characteristics of the objects; FIG. 2 shows the result of mapping into the feature space under the maximum likelihood method; FIG. 3 shows the extraction processing flow for the lesioned object; FIG. 4 shows the mapping of the NMR information data into the feature space; FIG. 5 shows the decision area from which the ambiguous regions have been deleted; and FIG. 6 shows the detailed flow of target classification and extraction processing. 33: extracted portion of the diseased liver.

Claims (1)

【特許請求の範囲】[Claims] 目標対象物を多変量データから抽出する処理において、
特徴量空間に写像した多変量データの分布特性より、目
樟対象物の判定域を決定する際、判定域に重複する他の
対象物による分布特性の領域を処理対象多変量データか
らめ削除することを特徴とする分類用判定域の設定方式
In the process of extracting a target object from multivariate data,
when determining the decision area of the target object from the distribution characteristics of the multivariate data mapped onto the feature space, regions of distribution characteristics due to other objects that overlap the decision area are deleted using the multivariate data to be processed. A method of setting a decision area for classification characterized by the above.
JP59027038A 1984-02-17 1984-02-17 System for setting decision area for classification Pending JPS60171570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP59027038A JPS60171570A (en) 1984-02-17 1984-02-17 System for setting decision area for classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP59027038A JPS60171570A (en) 1984-02-17 1984-02-17 System for setting decision area for classification

Publications (1)

Publication Number Publication Date
JPS60171570A true JPS60171570A (en) 1985-09-05

Family

ID=12209895

Family Applications (1)

Application Number Title Priority Date Filing Date
JP59027038A Pending JPS60171570A (en) 1984-02-17 1984-02-17 System for setting decision area for classification

Country Status (1)

Country Link
JP (1) JPS60171570A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7280694B2 (en) 2002-07-23 2007-10-09 Medison Co., Ltd. Apparatus and method for identifying an organ from an input ultrasound image signal


Similar Documents

Publication Publication Date Title
Dileep et al. AyurLeaf: a deep learning approach for classification of medicinal plants
Welsh et al. Transferring color to greyscale images
JP6855850B2 (en) Similar case image search program, similar case image search device and similar case image search method
Benson et al. Interpretation of Landsat-4 Thematic Mapper and Multispectral Scanner data for forest surveys
WO2013191975A1 (en) Machine-learnt person re-identification
Vannier et al. Magnetic resonance imaging multispectral tissue classification
Weeks et al. Automating the identification of insects: a new solution to an old problem
JPH10508727A (en) Contrast enhancement by spatial histogram analysis
Bai et al. Quantifying tree cover in the forest–grassland ecotone of British Columbia using crown delineation and pattern detection
JP2018075069A (en) Similar case image retrieval program, similar case image retrieval device, and similar case image retrieval method
Sako et al. Computer image analysis and classification of giant ragweed seeds
Pham et al. Attribute profiles on derived textural features for highly textured optical image classification
JPS60171570A (en) System for setting decision area for classification
Ge et al. Description of a new species of Megischus Brullé (Hymenoptera, Stephanidae), with a key to the species from China
Kartika et al. Combining of Extraction Butterfly Image using Color, Texture and Form Features
CN108630301B (en) Feature marking method and device and computer storage medium
JP6702118B2 (en) Diagnosis support device, image processing method in the diagnosis support device, and program
Mutia et al. Improving the performance of CBIR on islamic women apparels using normalized PHOG
JPS60171575A (en) Feature amount expression system by dimensional reduction method
CN114140408A (en) Image processing method, device, equipment and storage medium
Derganc et al. Nonparametric segmentation of multispectral MR images incorporating spatial and intensity information
Kolyaie et al. Transferability and the effect of colour calibration during multi-image classification of Arctic vegetation change
Yang et al. Accurate anatomical landmark detection based on importance sampling for infant brain MR images
JP3037495B2 (en) Object image extraction processing method
Da Rugna et al. Color coarse segmentation and regions selection for similar images retrieval