JPH05130989A - Processor for CT image - Google Patents
Processor for CT image
- Publication number
- JPH05130989A (application JP3321511A / JP32151191A)
- Authority
- JP
- Japan
- Prior art keywords
- image
- dimensional
- area
- mask pattern
- original image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 210000004072 lung Anatomy 0.000 abstract description 4
- 230000009466 transformation Effects 0.000 abstract 1
- 238000000034 method Methods 0.000 description 26
- 238000006243 chemical reaction Methods 0.000 description 14
- 210000004872 soft tissue Anatomy 0.000 description 14
- 238000000605 extraction Methods 0.000 description 10
- 210000000621 bronchi Anatomy 0.000 description 7
- 238000010586 diagram Methods 0.000 description 6
- 206010058467 Lung neoplasm malignant Diseases 0.000 description 5
- 201000005202 lung cancer Diseases 0.000 description 5
- 208000020816 lung neoplasm Diseases 0.000 description 5
- 210000001519 tissue Anatomy 0.000 description 4
- 210000004204 blood vessel Anatomy 0.000 description 3
- 238000002372 labelling Methods 0.000 description 3
- 230000003902 lesion Effects 0.000 description 3
- 238000012216 screening Methods 0.000 description 3
- 238000010521 absorption reaction Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 230000008602 contraction Effects 0.000 description 2
- 238000003745 diagnosis Methods 0.000 description 2
- 238000002474 experimental method Methods 0.000 description 2
- 210000004185 liver Anatomy 0.000 description 2
- 230000000873 masking effect Effects 0.000 description 2
- 201000009030 Carcinoma Diseases 0.000 description 1
- 206010028980 Neoplasm Diseases 0.000 description 1
- 210000000988 bone and bone Anatomy 0.000 description 1
- 201000011510 cancer Diseases 0.000 description 1
- 239000013256 coordination polymer Substances 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 230000008719 thickening Effects 0.000 description 1
Landscapes
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
Abstract
Description
[0001] [Industrial Field of Application] The present invention relates to a CT image processing apparatus for efficiently observing a plurality of tomographic images in a CT apparatus that obtains tomographic images of a subject.
[0002] [Prior Art] With current diagnostic imaging equipment such as X-ray CT apparatus, images are observed on a CRT or on film. On a CRT, either each slice image is displayed in turn, or several slice images are displayed simultaneously at reduced size. As a special method, a three-dimensional image can be reconstructed at the cost of an enormous amount of computation time so that it can be observed from an arbitrary direction by changing the viewpoint with a trackball or a mouse. When observing on film, the film is hung on a light box (Schaukasten) for viewing.
[0003] In a typical examination, for example when a 200 mm range is to be observed, 20 to 40 images with a slice thickness of 5 to 10 mm are generally acquired at intervals of 5 to 10 mm. The physician must read all 20 to 40 images to reach a diagnosis. If ten such patients are imaged in one day, the number of images to be interpreted per day reaches 200 to 400. Furthermore, if a mass screening system using X-ray CT apparatus is introduced in the future, the number of subjects will be on the order of several hundred and the time required for screening will become enormous. The effectiveness of X-ray CT for lung cancer in particular has attracted attention, and this problem is critical to realizing a lung cancer mass screening system based on X-ray CT.
[0004] Three-dimensional image diagnosis has been studied as a solution to this problem. However, a conventional three-dimensional image involves multiple processing steps, such as a rendering step that converts the three-dimensional information into two-dimensional image information (for which reason it should perhaps be called a 2.5-dimensional image), and therefore requires a large amount of computation time. Moreover, the objects that can be displayed are limited to bone, skin and the like, whose CT values are well defined and which can be extracted by simple threshold processing; and since the extracted object is a binary image, CT values are not reflected in the three-dimensional image, so only shape-based diagnosis is possible. Consequently such processing is used in practice only for purposes such as surgical simulation, has not come into routine use, and is also unsuitable in terms of throughput.
[0005] In MRI, on the other hand, the use of MIP (Maximum Intensity Projection) has been attempted as a three-dimensional reconstruction method with relatively simple processing for displaying blood vessel images. MIP is one way of projecting three-dimensional image information onto a two-dimensional plane: as shown in FIG. 5, the maximum pixel value on each projection line is projected, so a three-dimensionally reconstructed image is obtained in a short time, and the information of a plurality of images is compressed into a single image that can be observed at once. That is, when a three-dimensional image is built from a plurality (n) of slice images as in FIG. 5, a line of sight through the pixel position of interest is set, and the pixels P1, P2, ..., Pn lying on that line in the n slice images are taken out. Among the pixels P1, P2, ..., Pn, the pixel with the maximum density (gray level) is selected. This pixel selection is carried out over the whole region of interest, and the single image thus obtained becomes the projected two-dimensional image; displaying it on the display surface makes observation of the two-dimensional image possible.
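As an informal illustration of the selection just described (not part of the patent disclosure), a minimal NumPy sketch of MIP for projection lines perpendicular to the slice stack might look as follows; the array names and shapes are assumptions.

```python
# Illustrative sketch of MIP as described above (assumed NumPy form, not the patent's code):
# for each pixel position the projection line passes through all n slices, and the largest
# of P1, P2, ..., Pn along that line becomes the pixel of the projected 2-D image.
import numpy as np

def mip_along_slices(slices):
    """slices: list or array of n equally sized 2-D images (the 3-D data set).
    Returns the projected 2-D image for lines running perpendicular to the slices."""
    stack = np.asarray(slices)                 # shape (n, H, W)
    n, height, width = stack.shape
    projection = np.empty((height, width), dtype=stack.dtype)
    for i in range(height):
        for j in range(width):
            ray = stack[:, i, j]               # pixels P1 ... Pn on the projection line
            projection[i, j] = ray.max()       # keep the maximum density (gray level)
    return projection
```

In practice the two loops collapse to a single `stack.max(axis=0)` call, which corresponds to Equation 1 given later in the description.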
[0006] [Problems to Be Solved by the Invention] In the case of MRI, the MIP method described above is a convenient way of obtaining a two-dimensional image in which only the blood vessels have been extracted from a three-dimensional image containing them, because the pixel density of the blood vessels is high relative to the background. Applying this MIP to chest images from an X-ray CT apparatus has also been attempted. In X-ray CT the pixel value is the CT value, a value corresponding to the absorption coefficient: lung-field regions such as the bronchi show about -800 to -400, lung cancer lesions about -500 to 0, and other soft tissues such as the liver and heart about -100 to +100. When MIP processing is applied, therefore, the soft-tissue values are the larger ones, so the bronchi, where lung cancer and the like occur, are difficult to project. Mask processing using a threshold is therefore applied to exclude soft tissue with high absorption values and extract only the region of interest such as the bronchi. For example, as shown in FIG. 4, the processing apparatus consists of a region-of-interest extraction device 1 and a projection conversion processing device 2. The region-of-interest extraction device 1 comprises an original image memory 3, a threshold processing device 4, a mask image memory 6, a mask processing device 5 and so on. The projection conversion processing device 2 comprises an image memory 8, a projection conversion device 7, a processed image memory 9 and so on. The threshold processing device 4 binarizes the original image A held in the original image memory 3 with the set threshold to obtain a binarized image B, which is stored in the mask image memory 6. The binarized image B is a pattern in which the pixels corresponding to the region of interest (here, the lung-field region including lesions and bronchi) are 1 and the others, such as soft tissue, are 0; the mask processing device 5 takes the logical sum of this mask pattern B and the original image A and outputs only the pixels at which the mask pattern is 1. The masked image C, the result of this mask processing, is input to the image memory 8 of the projection conversion processing device 2 and projection-converted by the projection conversion device 7, and the desired three-dimensional image is obtained in the processed image memory 9. However, because the CT value distributions of soft tissue and cancer tissue overlap, the very lesions of interest are deleted together with the soft tissue, or the soft tissue cannot be removed completely. In addition, soft tissue that is only partly contained in a slice (partial volume) shows a value lower than its true CT value. According to experiments, the liver or heart measured as a partial volume can show values of about -500, making it difficult to separate from the bronchi. In other words, pixels corresponding to cancer tissue on the mask pattern become 0, and parts of the soft-tissue regions that hinder observation become 1.
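For illustration only (again not part of the patent text), the conventional FIG. 4 flow of threshold processing device 4 and mask processing device 5 can be sketched roughly as below; NumPy is an assumed implementation choice, and the default threshold of -100 is borrowed from the embodiment described later.

```python
# Sketch of the conventional threshold-mask flow of FIG. 4 (assumed NumPy form, not the
# patent's code). Pixels below the CT threshold (lung field / bronchi) become 1 in the
# binarized image B; the mask processing then keeps the original CT value only where B is 1.
import numpy as np

def threshold_mask(original_image, ct_threshold=-100):
    binarized = (original_image < ct_threshold).astype(np.uint8)   # image B: 1 = region of interest
    masked = np.where(binarized == 1, original_image, 0)           # image C: other pixels forced to 0
    return binarized, masked
```

As the paragraph explains, this simple scheme breaks down when cancer tissue and partial-volume soft tissue fall into the same CT range: the threshold either removes the lesion together with the soft tissue or lets soft tissue through, which is the problem the area-based masking of the invention addresses.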
[0007] It is an object of the present invention to provide a CT image processing apparatus that makes it possible to extract a region of interest, such as the lung field, accurately.
[0008] [Means for Solving the Problems] The processing apparatus of the present invention comprises: means for performing threshold processing on each two-dimensional original image to obtain a two-dimensional binarized image; means for obtaining, for each two-dimensional binarized image, the area of every closed region having a pixel value of 1; means for generating, for each two-dimensional binarized image, a mask pattern in which the pixels are divided, according to the size of that area, into pixels to be masked and pixels not to be masked; mask processing means for setting, for each two-dimensional original image, the original image pixels at the pixel positions to be masked according to this mask pattern to a pixel value of 0 while leaving the original image pixels at the pixel positions not to be masked as they are; and means for applying MIP processing to each masked two-dimensional original image to obtain a two-dimensional projection image (claim 1).
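Read together, these means describe a per-slice pipeline followed by a projection. The following sketch (an assumed NumPy/SciPy illustration, not the patented implementation; the -100 CT threshold and the 100-pixel area threshold are values taken from the embodiment described below) strings the steps together.

```python
# Assumed NumPy/SciPy sketch of the claim-1 pipeline: threshold -> per-region area ->
# area-based mask pattern -> mask the original slice -> MIP over all masked slices.
import numpy as np
from scipy import ndimage

def mask_pattern_by_area(binarized, area_threshold=100):
    labels, n = ndimage.label(binarized)                           # closed regions of pixel value 1
    areas = ndimage.sum(binarized, labels, index=np.arange(1, n + 1))  # area of each region (pixels)
    keep = np.concatenate(([0], (areas < area_threshold).astype(np.uint8)))
    return keep[labels]                                            # 1 = keep, 0 = mask out

def process_ct_slices(slices, ct_threshold=-100, area_threshold=100):
    masked = []
    for original in slices:
        binarized = (original < ct_threshold).astype(np.uint8)       # 2-D binarized image
        pattern = mask_pattern_by_area(binarized, area_threshold)    # area-based mask pattern
        masked.append(np.where(pattern == 1, original, 0))           # masked 2-D original image
    return np.max(np.stack(masked), axis=0)                          # MIP -> 2-D projection image
```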
[0009] Further, the MIP processing in the processing apparatus of the present invention is such that the maximum pixel value among the plurality of pixels on a projection line is obtained, and the maximum pixel value thus obtained for each projection line is set as the pixel value of the two-dimensional projection image on that projection line (claim 2).
[0010] Further, in the means for obtaining the area in the processing apparatus of the present invention, the two-dimensional binarized image is subjected to contraction processing, the labeling of the closed regions having a pixel value of 1 is performed on the contracted image, the binarized image is subjected to expansion processing after the labeling, and the area of each labeled closed region is obtained from the image resulting from the expansion processing (claim 3).
[0011] [Operation] According to the present invention, because means for obtaining a mask pattern partitioned by the area of each closed region is provided between the means for obtaining the two-dimensional binarized images and the means for performing the MIP processing, superfluous soft tissue is no longer left in the projected image (claims 1 to 3).
[0012] Further, according to the present invention, the maximum value is obtained in the MIP processing (claim 2), and correct partitioning of the closed regions is achieved in the area-calculation processing by performing the contraction processing and the expansion processing (claim 3).
[0013] [Embodiments] FIG. 2 is a block diagram of a general X-ray CT apparatus, which consists of a gantry 10 comprising an X-ray tube, an X-ray detector, measurement circuitry and so on, a patient table 11 for carrying the patient, a high-voltage generator 12 for supplying high voltage to the X-ray tube, and an image diagnostic apparatus 13. FIG. 3 shows an embodiment of the image diagnostic apparatus 13, which consists of a magnetic disk 14, a main memory 15, an image reconstruction processing device 16, a high-speed arithmetic unit 17, a simplified three-dimensional processing device 18, a display unit 19, and a high-speed internal bus 20 connecting them. The simplified three-dimensional processing device 18 consists of a region-of-interest extraction processing device 21 and a projection conversion processing device 22.
[0014] In the above configuration, the magnetic disk 14 stores various data such as the original images A, which are transferred to the main memory 15 via the bus 20 during the extraction processing. The main memory 15 also temporarily stores the measured projection data, and the image reconstruction processing device 16 processes the measured projection data to obtain a tomographic image at the slice plane; this result becomes an original image A and is stored on the magnetic disk 14. The high-speed arithmetic unit 17 is provided to accelerate processing that would be slow on a general CPU (logarithmic processing, trigonometric processing, other image processing and so on). The simplified three-dimensional processing device 18 is used for the MIP processing described above, is a particular feature of the present invention, and its detailed configuration is shown in FIG. 1. The display unit 19 is used for displaying the various images.
[0015] Now consider imaging a 400 mm range of the patient's chest with the above configuration. The scan is a spiral scan, in which measurement is performed while the table 11 is moved during imaging, allowing a wide range to be imaged at high speed. It goes without saying that carrying out a spiral scan (JP-A-62-87137, JP-A-62-139630) requires a slip ring or the like to supply power and transfer signals from the stationary system to the rotating system. In a spiral scan, if the projection data at a slice position is obtained by interpolation, a slice image at an arbitrary position can be obtained. When slice images are to be obtained at 5 mm intervals, the image reconstruction processing device 16 obtains the projection data at the specified positions by interpolation, and the images reconstructed by the well-known filtered back projection method or the like are transferred in turn to the magnetic disk 14 and the display unit 19. In the end, about 80 images are obtained and stored on the magnetic disk 14. In normal use, the tomographic images are thus displayed on the display unit 19 one after another as they are reconstructed, even while imaging is still in progress.
[0016] If the simplified 3D mode, which is a feature of this embodiment, is selected before imaging, the MIP-processed image is displayed on the screen. The simplified 3D processing is described next. The simplified 3D processing device 18 consists of a region-of-interest extraction device 21 and a projection conversion processing device 22; its detailed configuration is shown in FIG. 1. The region-of-interest extraction device 21 of this embodiment consists of an original image memory 22, a threshold processing device 23, an area processing device 24, a mask image memory 25 and a mask processing device 26. The projection conversion processing device 22 consists of an image memory 27, a projection conversion device 28 and a processed image memory 29. This configuration differs in that the area processing device 24 has been added; the other components are the same as in FIG. 4, although different reference numerals are used. The region-of-interest extraction device 21 performs the following internal operations for each two-dimensional CT image (two-dimensional original image). First, the reconstructed images are input in turn to the original image memory 22 of the region-of-interest extraction device 21. The threshold processing device 23 binarizes the original image held in the original image memory 22 with the set threshold of -100, outputting pixels at or above the threshold as 0 and pixels below it as 1. Because the mask processing device 26 takes the logical sum of this mask pattern (binary image) and the original image and outputs only the pixels at which the mask pattern is 1, soft tissue must be 0 in the mask pattern. However, as described above, soft tissue having values of -800 to -100 remains in the mask pattern as pixels of 1. The area processing device 24 of this embodiment therefore labels the closed regions of the mask pattern (connected sets of pixels) and obtains the area of each closed region.
[0017] In this embodiment, in order to prevent different tissue regions from merging into the same closed region, contraction processing (also called thinning) F1 is applied before the labeling. After the labeling, expansion processing (also called thickening) F2 restores the regions to their original size by the amount removed by the contraction processing F1. At this stage the pixel values are partitioned by the label (number) of the closed region to which each pixel belongs; for example, the bronchi are labeled L1, a lung cancer region L2, and so on. By counting the pixels that carry the same label number, the area of each corresponding closed region is obtained in units of pixels (area processing F3). According to experiments, soft-tissue regions have a larger area than the bronchi or diseased regions, so they can be separated by threshold processing on the area value, for example by treating regions of 100 pixels or more as soft tissue. That is, in the area processing F3 a mask pattern is created by setting the pixel values of closed regions larger than the threshold area to 0 and those of smaller closed regions to 1, and this pattern is stored in the mask image memory 25. The binarized pixel values at positions corresponding to soft tissue thus become 0, so the mask pattern can mask the soft tissue almost completely, and a mask pattern image convenient for the projection conversion processing is obtained.
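A possible rendering of the F1/F2/F3 sequence using SciPy's morphology routines is sketched below. This is an assumed implementation, not the patent's own: the 3x3 structuring element and the way labels are carried through the expansion are choices made purely for illustration, while the 100-pixel area threshold follows the experiment quoted above. It can be read as a refinement of the plain label-and-threshold step sketched after paragraph [0008], adding the contraction before labeling and the expansion afterwards.

```python
# Assumed SciPy sketch of the area processing device 24: contraction F1, labeling,
# expansion F2 back toward the original footprint, area measurement F3 per label, and
# an area threshold (100 pixels here) that zeroes out large soft-tissue regions while
# keeping small bronchus / lesion regions as 1.
import numpy as np
from scipy import ndimage

def area_processed_mask(binary_mask, area_threshold=100):
    eroded = ndimage.binary_erosion(binary_mask)                     # contraction (thinning) F1
    labels, n_regions = ndimage.label(eroded)                        # label L1, L2, ... per region
    restored = ndimage.grey_dilation(labels, size=(3, 3))            # expansion (thickening) F2
    restored = np.where(binary_mask > 0, restored, 0)                # stay inside the original mask
    areas = ndimage.sum(restored > 0, restored,                      # F3: pixel count per label
                        index=np.arange(1, n_regions + 1))
    keep = np.concatenate(([0], (areas < area_threshold).astype(np.int64)))
    return keep[restored].astype(np.uint8)                           # mask pattern: 1 keep, 0 mask out
```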
[0018] Next, the mask processing device 26 performs mask processing between the mask pattern image and the two-dimensional original image from which it was derived (the image held in the original image memory 22). In this mask processing, since the pixel positions of the two-dimensional original image and those of the mask pattern image correspond to each other, at each corresponding pixel position the pixel of the original image is forcibly set to 0 if the pixel value on the mask pattern is 0, and the pixel value of the original image is left as it is if the pixel value on the mask pattern is 1. By this mask processing, the pixels of the original image corresponding to pixel positions showing 0 on the mask pattern are masked, yielding an original image convenient for the projection conversion processing, which is stored in the image memory 27.
[0019] When this region-of-interest extraction processing has been applied to all the two-dimensional original images (slice images), the image memory 27 of the projection conversion processing device 22 holds the CT value information of the three-dimensional region of interest. The projection conversion device 28 executes the MIP processing, with the projection lines chosen perpendicular to the slices to simplify the processing. That is, if the n masked images are denoted I1(i, j) to In(i, j), the simplified 3D image P(i, j) output to the processed image memory 29 is

[Equation 1] P(i, j) = MAX(I1(i, j), I2(i, j), ..., In(i, j))

where MAX denotes the value obtained by taking, at each pixel position, the maximum pixel value over the n images.
[0020] In the embodiment above a single threshold was used, but in an X-ray CT apparatus the background air shows a value of about -1000, so it remains in the masked image and appears as noise in the simplified 3D image as well. In a second embodiment, therefore, two thresholds are set and the threshold processing device 23 outputs a binarized image in which pixels satisfying T1 ≤ CT value ≤ T2 are set to 1; the subsequent processing is unchanged. T1 and T2 are, for example, -800 and 0. Furthermore, although this embodiment obtains the maximum value, depending on the purpose of the extraction the method can also be applied to the extraction of statistical quantities such as the minimum value or the average of M pixels. It can also be applied to CT images other than X-ray CT images, such as MRI images.
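The second embodiment's dual-threshold binarization, and the alternative projection statistics the paragraph mentions, could be sketched as follows (assumed NumPy form, not the patent's code; the default T1 and T2 of -800 and 0 are the example values given above).

```python
# Assumed NumPy sketch: dual-threshold binarization keeps air (about -1000) out of the
# mask pattern, and the projection statistic can be the maximum (MIP), minimum or mean,
# as the paragraph above notes for other extraction purposes.
import numpy as np

def binarize_dual_threshold(ct_image, t1=-800, t2=0):
    """Pixels with T1 <= CT value <= T2 become 1; air and dense soft tissue become 0."""
    return ((ct_image >= t1) & (ct_image <= t2)).astype(np.uint8)

def project(masked_slices, statistic="max"):
    """Project the stacked masked slices along the slice axis with the chosen statistic."""
    ops = {"max": np.max, "min": np.min, "mean": np.mean}
    return ops[statistic](np.asarray(masked_slices), axis=0)
```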
[0021] [Effects of the Invention] According to the present invention, only the region of interest can be accurately extracted when MIP processing is applied to original images such as X-ray CT images.
FIG. 1 is a diagram showing an embodiment of the simplified three-dimensional processing device of the present invention.
FIG. 2 is an overall configuration diagram of the X-ray CT apparatus of the present invention.
FIG. 3 is a diagram showing an embodiment of the image diagnostic apparatus of the present invention.
FIG. 4 is a configuration diagram of a conventional MIP processing system for MRI.
FIG. 5 is an explanatory diagram of MIP processing in MRI.
21 region-of-interest extraction device, 22 projection conversion processing device, 24 area processing device
Claims (3)
1. A CT image processing apparatus for projecting three-dimensional original image information, composed of a plurality of reconstructed two-dimensional tomographic original images (CT images) of a subject, onto a two-dimensional plane and outputting the result, comprising: means for performing threshold processing on each two-dimensional original image to obtain a two-dimensional binarized image; means for obtaining, for each two-dimensional binarized image, the area of every closed region having a pixel value of 1; means for generating, for each two-dimensional binarized image, a mask pattern in which the pixels are divided, according to the size of that area, into pixels to be masked and pixels not to be masked; mask processing means for setting, for each two-dimensional original image, the original image pixels at the pixel positions to be masked according to this mask pattern to a pixel value of 0 while leaving the original image pixels at the pixel positions not to be masked as they are; and means for applying MIP processing to each masked two-dimensional original image to obtain a two-dimensional projection image.
2. The CT image processing apparatus according to claim 1, wherein the MIP processing obtains the maximum pixel value among the plurality of pixels on a projection line and sets the maximum pixel value thus obtained for each projection line as the pixel value of the two-dimensional projection image on that projection line.
3. The CT image processing apparatus according to claim 1 or 2, wherein the means for obtaining the area subjects the two-dimensional binarized image to contraction processing, performs the labeling of the closed regions having a pixel value of 1 on the contracted image, subjects the binarized image to expansion processing after the labeling, and obtains the area of each labeled closed region from the image obtained by the expansion processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP3321511A JPH05130989A (en) | 1991-11-11 | 1991-11-11 | Processor for ct image |
Publications (1)
Publication Number | Publication Date |
---|---|
JPH05130989A true JPH05130989A (en) | 1993-05-28 |
Family
ID=18133386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP3321511A Pending JPH05130989A (en) | 1991-11-11 | 1991-11-11 | Processor for ct image |
Country Status (1)
Country | Link |
---|---|
JP (1) | JPH05130989A (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8275446B2 (en) | 1994-10-27 | 2012-09-25 | Wake Forest University Health Sciences | Automatic analysis in virtual endoscopy |
US7853310B2 (en) | 1994-10-27 | 2010-12-14 | Wake Forest University Health Sciences | Automatic analysis in virtual endoscopy |
JP2001087228A (en) * | 1999-09-27 | 2001-04-03 | Hitachi Medical Corp | Image reading support device |
JP2001236492A (en) * | 2000-02-24 | 2001-08-31 | Hitachi Medical Corp | Method and device for image processing |
JP4538135B2 (en) * | 2000-05-30 | 2010-09-08 | 株式会社メック | Defect inspection equipment |
JP2002209882A (en) * | 2000-12-26 | 2002-07-30 | Ge Medical Systems Global Technology Co Llc | Method and device for diagnosing ct tomographic image |
JP4497965B2 (en) * | 2004-03-17 | 2010-07-07 | 株式会社日立メディコ | Medical image display device |
JP2005261531A (en) * | 2004-03-17 | 2005-09-29 | Hitachi Medical Corp | Medical image display method and device therefor |
US7747056B2 (en) | 2004-09-06 | 2010-06-29 | Kabushiki Kaisha Toshiba | Image data area extracting system and image data area extracting method |
JP2006200937A (en) * | 2005-01-18 | 2006-08-03 | Bridgestone Corp | Deformation behavior predicting method of rubber material, and deformation behavior predicting device of rubber material |
JP4602776B2 (en) * | 2005-01-18 | 2010-12-22 | 株式会社ブリヂストン | Method for predicting deformation behavior of rubber material and apparatus for predicting deformation behavior of rubber material |
WO2006126970A1 (en) * | 2005-05-27 | 2006-11-30 | Agency For Science, Technology And Research | Brain image segmentation from ct data |
JP2007271369A (en) * | 2006-03-30 | 2007-10-18 | Bridgestone Corp | Apparatus and method for estimating deformation behavior of rubber material |
JP2008073301A (en) * | 2006-09-22 | 2008-04-03 | Toshiba Corp | Medical imaging diagnostic apparatus and medical image processor |
JP2011043879A (en) * | 2009-08-19 | 2011-03-03 | Kddi Corp | Method and program for extracting mask image, and method and program for constructing voxel data |
JP2012020174A (en) * | 2011-10-17 | 2012-02-02 | Toshiba Corp | Medical diagnostic imaging apparatus and medical image processor |
CN109961487A (en) * | 2017-12-14 | 2019-07-02 | 通用电气公司 | Radiotherapy localization image-recognizing method, computer program and computer storage medium |
CN112330656A (en) * | 2020-11-20 | 2021-02-05 | 北京航星机器制造有限公司 | Method and system for implanting dangerous goods in security check CT image |
CN112330656B (en) * | 2020-11-20 | 2024-04-05 | 北京航星机器制造有限公司 | Safety inspection CT image dangerous goods implantation method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9098935B2 (en) | Image displaying apparatus, image displaying method, and computer readable medium for displaying an image of a mammary gland structure without overlaps thereof | |
US20200184639A1 (en) | Method and apparatus for reconstructing medical images | |
US4903202A (en) | Three-dimensional object removal via connectivity | |
US4737921A (en) | Three dimensional medical image display system | |
JPH05130989A (en) | Processor for ct image | |
US6754376B1 (en) | Method for automatic segmentation of medical images | |
CN110310281A (en) | Lung neoplasm detection and dividing method in a kind of Virtual Medical based on Mask-RCNN deep learning | |
JP2002515772A (en) | Imaging device and method for canceling movement of a subject | |
JPS6297074A (en) | Method and apparatus for displaying 3-d surface structure | |
JPH0731739B2 (en) | Method and apparatus for extracting faces in a two-dimensional tomographic slice | |
Ney et al. | Three-dimensional CT-volumetric reconstruction and display of the bronchial tree | |
JP2002219123A (en) | Projection conversion system and device and method for producing fractional images depending on the difference over time | |
Ratul et al. | CCX-rayNet: a class conditioned convolutional neural network for biplanar X-rays to CT volume | |
JP6301277B2 (en) | Diagnostic auxiliary image generation apparatus, diagnostic auxiliary image generation method, and diagnostic auxiliary image generation program | |
US6845143B2 (en) | CT image reconstruction | |
CN114340496A (en) | Analysis method and related device of heart coronary artery based on VRDS AI medical image | |
CN112884879B (en) | Method for providing a two-dimensional unfolded image of at least one tubular structure | |
JPH0838433A (en) | Medical image diagnostic device | |
JPH07271997A (en) | Image processor | |
JPH09238933A (en) | Mammography display device | |
CN114708283A (en) | Image object segmentation method and device, electronic equipment and storage medium | |
JP6642048B2 (en) | Medical image display system, medical image display program, and medical image display method | |
JP3244347B2 (en) | Image processing method and image processing apparatus | |
JPH06189952A (en) | I.p. image processing device | |
JPH0728976A (en) | Picture display device |