JP5635389B2 - Image processing apparatus, image processing program, and X-ray diagnostic imaging apparatus - Google Patents
- Publication number: JP5635389B2 (application JP2010285012A)
- Authority: JP (Japan)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Description
The present invention relates to an image processing apparatus, an image processing program, and an X-ray diagnostic imaging apparatus, and more particularly to a gradation (tone) correction technique for chest X-ray images.
An X-ray diagnostic imaging apparatus obtains an image by detecting, with an X-ray detector, the X-rays that have passed through a subject. However, because many uncertain factors, such as bone thickness, bone density, organ thickness, and fat and muscle thickness, vary from subject to subject, it is difficult to output images of different subjects with stable brightness (pixel values) and contrast.
A chest X-ray image in particular covers everything from the lung fields, where the subject's body is thin, to thick regions where the diaphragm overlaps the vertebral bodies (bone), so the brightness and contrast of these regions of very different body thickness must be balanced. To meet this need, Patent Document 1, for example, discloses a technique that stabilizes the brightness and contrast of a chest X-ray image by locating the thoracic vertebra region in the image with image recognition and applying a gradation conversion that brings a feature value of that region to a target value.
However, the method of Patent Document 1 must detect the lung fields as a precondition for locating the thoracic vertebra region, so it cannot stabilize the brightness and contrast of the chest X-ray image independently of the lung-field detection method.
In view of this problem, an object of the present invention is to provide an image processing apparatus, an image processing program, and an X-ray diagnostic imaging apparatus that can perform gradation processing of a chest X-ray image stably without depending on any lung-field detection method.
To solve the above problem, an image processing apparatus according to the present invention comprises: a vertebral body region extraction unit that, for a chest X-ray image obtained by imaging the chest of a subject, sets, based on at least one of pixel values and pixel positions in the chest X-ray image, a vertebral body distribution region containing the vertebral body region in which the subject's vertebral bodies were imaged, divides the vertebral body distribution region into a plurality of small regions arranged along the body-axis direction of the chest X-ray image, generates for each small region a first profile showing the distribution of a feature value of the pixel values of the pixel columns running along the body-axis direction, and extracts from each small region, as the vertebral body region, the pixels whose values are at or below a first threshold determined from the minimum value of the first profile; a feature amount calculation unit that calculates a feature value of the pixel values of the extracted vertebral body region; and gradation processing means that performs gradation processing so that the feature value of the pixel values of the vertebral body region becomes a target value.
Likewise, an image processing program according to the present invention causes a computer to execute the steps of: setting, for a chest X-ray image obtained by imaging the chest of a subject and based on at least one of pixel values and pixel positions in the image, a vertebral body distribution region containing the vertebral body region in which the subject's vertebral bodies were imaged, dividing that region into a plurality of small regions arranged along the body-axis direction of the chest X-ray image, generating for each small region a first profile showing the distribution of a feature value of the pixel values of the pixel columns running along the body-axis direction, and extracting from each small region, as the vertebral body region, the pixels whose values are at or below a first threshold determined from the minimum value of that first profile; calculating a feature value of the pixel values of the extracted vertebral body region; and performing gradation processing so that this feature value becomes a target value.
An X-ray diagnostic imaging apparatus according to the present invention comprises X-ray generation means, X-ray detection means for detecting the X-rays emitted by the X-ray generation means, and image display means for displaying an X-ray image; the image processing apparatus described above applies the gradation processing to the image output from the X-ray detection means, and the image display means displays the gradation-processed X-ray image.
According to the present invention, a chest image with stable brightness and contrast can be provided by detecting the vertebral bodies without relying on any lung-field detection method and performing gradation processing based on a feature value of the vertebral bodies.
Embodiments of the present invention are described below with reference to the drawings. Components with the same function and procedures with the same processing content carry the same reference numerals, and repeated descriptions are omitted.
First, the configuration of the X-ray diagnostic imaging apparatus according to this embodiment is described with reference to FIG. 1. FIG. 1 is a schematic diagram showing an example configuration of an embodiment of the X-ray diagnostic imaging apparatus of the present invention.
The X-ray diagnostic imaging apparatus 10 according to this embodiment comprises: an X-ray generator 1 that emits X-rays; an X-ray detector 2 that detects the X-rays emitted by the X-ray generator 1 and converts them into electrical signals; an image processing apparatus 3 that acquires the electrical signals from the X-ray detector 2 and applies various image processing, including gradation processing, to the image based on them; and an output device 4 that presents or displays the image output by the image processing apparatus 3 to the user.
The image processing apparatus 3 comprises: an image storage device 31 that stores the image based on the electrical signals acquired from the X-ray detector 2; a vertebral body feature amount calculation unit 32 that calculates, from an image read out of the image storage device 31, a feature value of the vertebral body region in which the vertebral bodies of the subject 0 were imaged; and a gradation processing unit 33 that performs gradation processing based on that feature value.
In this embodiment, the subject 0 is positioned between the X-ray generator 1 and the X-ray detector 2, and the subject's chest is imaged. The X-ray detector 2 therefore detects the X-rays that have passed through the chest of the subject 0, and a chest X-ray image representing the X-ray absorption distribution of the chest is output to and stored in the image storage device 31.
Next, the configuration of the vertebral body feature amount calculation unit 32 is described with reference to FIG. 2. FIG. 2 is a block diagram showing the programs that make up the vertebral body feature amount calculation unit 32.
The vertebral body feature amount calculation unit 32 comprises: a subject region extraction unit 32a that extracts, from the chest X-ray image stored in the image storage device 31, the subject region in which the subject 0 was imaged; a trunk region extraction unit 32b that extracts, from the subject region, the region in which the subject's trunk (the body excluding the limbs) was imaged; a vertebral body region extraction unit 32c that extracts the region in which the subject's vertebral bodies (bone) were imaged; and a feature amount calculation unit 32d that calculates a feature value of the vertebral body region.
The subject region extraction unit 32a, trunk region extraction unit 32b, vertebral body region extraction unit 32c, and feature amount calculation unit 32d that make up the vertebral body feature amount calculation unit 32, as well as the gradation processing unit 33, are realized by programs implementing each component's function running in cooperation with the computer that constitutes the image processing apparatus 3; the computer executes each program to realize its function.
Next, the processing from image acquisition to display by the X-ray diagnostic imaging apparatus 10 of this embodiment is described with reference to FIGS. 3 to 8. FIG. 3 is a flowchart of the processing from image acquisition to display. FIG. 4 illustrates the subject region extraction: (a) the original image, (b) the image with the collimator (aperture) region removed, and (c) the image with the direct-exposure region removed (the subject region). FIG. 5 illustrates the trunk region extraction: (a) the vertical and horizontal pixel-count profiles generated from the subject region, and (b) the trunk region image. FIG. 6 illustrates the vertebral body center extraction: (a) the column average value profile generated from the trunk region image, and (b) the vertebral body center region image. FIG. 7 illustrates the vertebral body distribution region extraction: (a) the distribution region divided into multiple rectangular regions, (b) the column average value profile of each rectangular region, and (c) the vertebral body regions extracted from the rectangular regions. FIG. 8 is a schematic diagram of the gradation conversion table. Each step of FIG. 3 is described below.
(Step S1)
X-rays are emitted from the X-ray generator 1 and the chest of the subject 0 is imaged. The X-ray detector 2 detects the X-rays transmitted through the subject 0 and converts them into electrical signals, which are output to the image storage device 31. The image storage device 31 stores the received signals as a chest X-ray image of the subject 0 (S1). In this chest X-ray image the subject is imaged with the body axis along the vertical direction of the image, so in the following the direction along the body axis is called the vertical direction and the direction orthogonal to the body axis the horizontal direction.
(Step S2)
The subject region extraction unit 32a reads the chest X-ray image (hereinafter the "original image") from the image storage device 31 and removes the aperture region, which lies outside the X-ray irradiation field (S2). This aperture removal may detect and exclude the aperture region by image processing, or it may obtain the positional relationship between the X-ray generator 1 and the X-ray detector 2 from sensors, compute the irradiation field geometrically, and remove everything outside it. This step turns the original image 40 of FIG. 4(a) into the aperture-removed image 42 of FIG. 4(b), from which the aperture region 41 at the periphery of the original image 40 has been removed.
(Step S3)
Next, the subject region extraction unit 32a removes from the aperture-removed image the direct-line region, i.e., the region struck by X-rays that did not pass through the subject 0 (S3). This direct-line removal may, for example, analyze the histogram and threshold away the densely populated range of high pixel values, or it may compute the pixel value that direct exposure would produce from the imaging conditions of the X-ray generator 1 (tube voltage, tube current, exposure time, grid exposure factor) and remove pixels at or above that threshold as direct exposure. This step turns the aperture-removed image 42 of FIG. 4(b) into the direct-line-removed image of FIG. 4(c). Hereinafter, the image obtained by excluding the aperture region 41 and the direct-line region 43 from the original image 40 is called the subject region image 44.
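The thresholding variant of this step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the fixed-fraction-of-maximum cutoff and the function name are our assumptions, standing in for the histogram analysis or the exposure-condition calculation described above.

```python
import numpy as np

def remove_direct_exposure(image, frac=0.95):
    """Sketch of step S3's threshold-based direct-line removal.

    Direct (unattenuated) X-rays produce the brightest pixels, so any pixel
    at or above frac * max is treated as direct exposure. The fixed-fraction
    cutoff is an illustrative assumption; the text equally allows deriving
    the cutoff from tube voltage, tube current, exposure time, and grid factor.
    """
    threshold = frac * image.max()
    return image < threshold  # True where subject (kept) pixels are

# Toy example: a uniform "body" with one unattenuated (direct-line) column.
img = np.full((10, 10), 100.0)
img[:, 0] = 4000.0
subject_mask = remove_direct_exposure(img)
```

The returned boolean mask plays the role of the subject region image 44; in practice it would be combined with the aperture mask from step S2.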
(Step S4)
The trunk region extraction unit 32b extracts from the subject region image 44 the trunk region, excluding the regions in which the limbs of the subject 0 were imaged (S4). The trunk region extraction is described with reference to FIG. 5. The trunk region extraction unit 32b generates, for the vertical and horizontal directions of the subject region image 44, a horizontal pixel-count profile 46 and a vertical pixel-count profile 47 that plot the number of subject pixels in each row and each column of the subject region image 44.
Next, thresholds m and n are set for the horizontal pixel-count profile 46 and the vertical pixel-count profile 47, and the region contained in both (or in at least one) of the ranges 46m and 47n where the profiles exceed these thresholds (the hatched parts of the profiles in FIG. 5(a)) is extracted as the trunk region. The thresholds m and n can be obtained, for example, as the smaller of each profile's average value and half the image size. Specifically, let the horizontal pixel count (horizontal image size) of the subject region image 44 be M, its vertical pixel count (vertical image size) be N, and the total pixel count (area) under the horizontal pixel-count profile 46 be Sm (Sm = a1 + a2 + ... + aN, where ai is the number of subject pixels detected in the i-th horizontally running pixel row). The average of the horizontal pixel-count profile 46 is then Sm/N, and half the image size is M/2; the threshold m is the smaller of Sm/N and M/2. Similarly, letting the total pixel count (area) under the vertical pixel-count profile 47 be Sn (Sn = b1 + b2 + ... + bM, where bj is the number of subject pixels detected in the j-th vertical pixel column), the average of the vertical pixel-count profile 47 is Sn/M and half the image size is N/2, so the threshold n is the smaller of Sn/M and N/2. This trunk region extraction yields the trunk region image 48 (see FIG. 5(b)), which excludes the limb regions of the subject region image 44.
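The row/column pixel-count profiles and the min(mean, half-size) thresholds described above can be sketched as follows. The function name and the toy mask are our illustration, not the patent's code.

```python
import numpy as np

def trunk_region(mask):
    """Sketch of step S4: crop the torso from a boolean subject mask.

    row_counts / col_counts are the pixel-count profiles; each threshold is
    the smaller of the profile mean and half the image size, as in the text.
    Returns an index pair selecting the rows and columns above threshold.
    """
    N, M = mask.shape                   # N rows (vertical), M columns (horizontal)
    row_counts = mask.sum(axis=1)       # subject pixels per row    (<= M)
    col_counts = mask.sum(axis=0)       # subject pixels per column (<= N)
    m = min(row_counts.mean(), M / 2)   # threshold for the row profile
    n = min(col_counts.mean(), N / 2)   # threshold for the column profile
    rows = np.where(row_counts > m)[0]
    cols = np.where(col_counts > n)[0]
    return np.ix_(rows, cols)

# Toy mask: a tall torso block plus thin "arms" spanning the full width.
mask = np.zeros((20, 20), dtype=bool)
mask[2:18, 6:14] = True   # torso
mask[4:6, :] = True       # arms
torso = mask[trunk_region(mask)]
```

Here the thin arm rows contribute too few pixels per column to pass threshold n, so only the torso block survives the crop.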
(Step S5)
The vertebral body region extraction unit 32c extracts the vertebral body center from the trunk region image 48 to derive the approximate position of the vertebral bodies (S5). It computes the mean of the pixel values in each vertical column of the trunk region image 48, producing a column average value profile 49 (see FIG. 6(a)) whose vertical axis is the average pixel value and whose horizontal axis is the horizontal pixel position. Next, it finds the minimum average pixel value Pmin of the column average value profile 49 and forms a threshold p, for example by multiplying Pmin by a fixed ratio or by adding the standard deviation of the profile 49 to Pmin. The horizontal pixel positions where the profile is at or below p (the hatched part of the profile 49 in FIG. 6(a)) are taken as the horizontal position of the vertebral body center, and the corresponding part of the trunk region image 48 is extracted as the vertebral body center region 50 (see FIG. 6(b)).
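The column-average profile and its min-plus-std threshold can be sketched as below. The std weight k=1.0 and the function name are assumptions; the text equally allows a fixed ratio applied to the minimum.

```python
import numpy as np

def spine_center_columns(trunk_img, k=1.0):
    """Sketch of step S5: columns whose vertical mean is near the minimum.

    The spine absorbs the most X-rays, so its columns have the lowest column
    means; threshold p is the profile minimum plus k standard deviations
    (k=1.0 is an assumed constant).
    """
    col_mean = trunk_img.mean(axis=0)          # column average value profile
    p = col_mean.min() + k * col_mean.std()    # threshold p
    return np.where(col_mean <= p)[0]          # spine-center column indices

# Toy trunk image: bright soft tissue, two dark "spine" columns.
trunk = np.full((10, 8), 100.0)
trunk[:, 3:5] = 10.0
center_cols = spine_center_columns(trunk)
```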
(Step S6)
The vertebral body region extraction unit 32c sets the vertebral body distribution region based on the vertebral body center region 50 obtained in step S5 (S6). Within the trunk region image 48, it takes the region centered on the vertebral body center region 50 and widened on each side by a fixed fraction (for example one third) of the horizontal width, from start to end, of the trunk region image 48, and sets it as the vertebral body distribution region 52 (see FIG. 7(a)). That is, the lateral end coordinates of the vertebral body distribution region are obtained from equation (1):

[Equation 1]
end coordinates of vertebral body distribution region = horizontal center pixel coordinate of vertebral body center ± trunk region image width × fixed ratio ... (1)
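Equation (1) amounts to the following one-liner (a sketch; the names are ours, and 1/3 is simply the example ratio from the text).

```python
def distribution_bounds(center_col, trunk_width, ratio=1.0 / 3.0):
    """Equation (1): lateral end coordinates of the vertebral body
    distribution region, i.e. the spine-center column widened on each side
    by a fixed fraction of the trunk region image width."""
    half = trunk_width * ratio
    return center_col - half, center_col + half

# Spine center at column 150 in a 300-pixel-wide trunk image.
left, right = distribution_bounds(150, 300)
```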
(Step S7)
As shown in FIG. 7(a), the vertebral body region extraction unit 32c divides the vertebral body distribution region 52 into strips of several rows (for example 5 or 10 rows) running roughly perpendicular to the body-axis direction (the vertebral center axis), producing rectangular regions 53a, 53b, 53c, .... For each rectangular region 53a, 53b, 53c, ... it then computes, as in step S5, a column average value profile 54a, 54b, 54c, ... of the mean vertical pixel values, with the horizontal axis giving the horizontal pixel position (see FIG. 7(b)). It further finds the minimum Pa, Pb, Pc, ... of each profile 54a, 54b, 54c, ... and computes a threshold a, b, c, ... for each, for example by multiplying the minimum by a fixed ratio or by adding the standard deviation of that profile to its minimum. The parts of each rectangular region 53a, 53b, 53c, ... corresponding to the ranges 55a, 55b, 55c, ... at or below the thresholds a, b, c, ... (the hatched parts of FIG. 7(b)) are extracted as the vertebral body region 56 (the hatched part of FIG. 7(c)) (S7).
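The strip-wise extraction of step S7 can be sketched as follows; it reuses the min-plus-std threshold of step S5 per strip (the std weight k and the strip height are illustrative assumptions).

```python
import numpy as np

def vertebra_mask(dist_img, rows_per_strip=5, k=1.0):
    """Sketch of step S7: split the distribution region into horizontal
    strips, threshold each strip's column-mean profile near its own minimum,
    and mark the matching columns of that strip as vertebral body."""
    H, W = dist_img.shape
    mask = np.zeros((H, W), dtype=bool)
    for top in range(0, H, rows_per_strip):
        strip = dist_img[top:top + rows_per_strip]
        profile = strip.mean(axis=0)               # per-strip column means
        thresh = profile.min() + k * profile.std()
        mask[top:top + strip.shape[0], profile <= thresh] = True
    return mask

# Toy region whose spine shifts sideways halfway down the image, which
# per-strip thresholding tracks but a single global profile would not.
region = np.full((10, 6), 100.0)
region[:5, 1:3] = 10.0   # upper spine in columns 1-2
region[5:, 3:5] = 10.0   # lower spine in columns 3-4
vb = vertebra_mask(region)
```

The per-strip minima are exactly why the method tolerates a curved spine, as the closing discussion of this embodiment notes.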
(Step S8)
The feature amount calculation unit 32d calculates an image feature value of the extracted vertebral body region 56. In this embodiment the feature value is the average Iave of all pixel values in the extracted vertebral body region 56, but any feature computed from all or part of the pixel values of the vertebral body region may be used (S8).
(Step S9)
Based on the calculated feature value of the vertebral body region 56, the gradation processing unit 33 creates a gradation conversion table that maps input pixel values to output pixel values, and converts the original image into the output image with this table (S9). For example, the gradation processing unit 33 fixes in advance the start point I0 and end point I1 of the input pixel values and the corresponding start point O0 and end point O1 of the output (target) pixel values, adds the feature value of the vertebral body region 56 (Iave in this embodiment) paired with its target value Oave, and interpolates between these three points by spline interpolation, linear interpolation, or the like to build the gradation conversion table (see FIG. 8). The gradation processing unit 33 outputs the converted image to the output device 4, such as a film printer or monitor, which displays it.
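The three-point tone curve of step S9 can be sketched with piecewise-linear interpolation (the patent also allows spline interpolation; the 12-bit input range, 8-bit output range, and anchor values here are illustrative assumptions).

```python
import numpy as np

def gradation_lut(i0, i1, o0, o1, i_ave, o_ave, levels=4096):
    """Sketch of step S9: build a gradation conversion table through three
    anchor points: the input start/end mapped to the output start/end, and
    the vertebral feature value i_ave mapped to its target o_ave."""
    xs = np.array([i0, i_ave, i1], dtype=float)
    ys = np.array([o0, o_ave, o1], dtype=float)
    # np.interp clamps inputs outside [i0, i1] to the end values.
    return np.interp(np.arange(levels, dtype=float), xs, ys)

# 12-bit input, 8-bit output; vertebral mean 2000 pinned to target 200.
lut = gradation_lut(i0=0, i1=4000, o0=0, o1=255, i_ave=2000, o_ave=200)
# Applying it: out_img = lut[in_img.astype(int)]
```

Pinning (i_ave, o_ave) is what makes the vertebral region land at the same display value across subjects, whatever the raw exposure was.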
According to this embodiment, the vertebral bodies can be extracted without relying on any lung-field detection method. In particular, by setting multiple rectangular regions (small regions) within the vertebral body distribution region and extracting a vertebral body region from each rectangle, the vertebral body region can be extracted even in images where the spine curves away from the body-axis direction. Generating an output image in which the input pixel values of this vertebral body region are gradation-converted to a target value then yields images in which the vertebral body region is displayed at a fixed target value regardless of the subject's body thickness or the imaging conditions, which makes comparative interpretation easy.
In the embodiment above, the vertebral body distribution region is set after extracting the vertebral body center region, but extracting the center region is not essential. For example, instead of extracting the vertebral body center region, the horizontal center position of the trunk region image 48, or the center or centroid coordinates of the trunk region image, may be determined and the distribution region set around it by adding and subtracting the trunk region image width multiplied by a fixed ratio.
Further, instead of extracting the trunk region image, the center or centroid coordinates of the subject region image 44 may be determined and the vertebral body distribution region set around them by adding and subtracting the width of the subject region image 44 multiplied by a fixed ratio. The vertebral body distribution region may also be set from half the width of the trunk region image or the subject region image instead of from the center or centroid coordinates.
0: subject, 1: X-ray generator, 2: X-ray detector, 3: image processing apparatus, 4: output device
Claims (9)
1. An image processing apparatus comprising:
a vertebral body region extraction unit that, for a chest X-ray image obtained by imaging the chest of a subject, sets a vertebral body distribution region, which is a region in which the vertebral bodies of the subject are imaged, based on at least one of a pixel value and a pixel position in the chest X-ray image, divides the vertebral body distribution region into a plurality of small regions arranged along a body axis direction of the chest X-ray image, generates, for each small region, a first profile indicating a distribution of feature values of the pixel values of pixel columns arranged along the body axis direction, and extracts from each small region, as the vertebral body region, pixels corresponding to pixel values equal to or lower than a first threshold determined based on a minimum value of the first profile;
a feature amount calculation unit that calculates a feature amount of the pixel values of the extracted vertebral body region; and
a gradation processing unit that performs gradation processing so that the feature amount of the pixel values of the vertebral body region becomes a target value.
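As a rough illustration of the gradation step in claim 1, the following sketch shifts pixel values so that the feature amount of the vertebral body region lands on a target value. The linear shift, the mean as the feature amount, and the [0, 1] display range are assumptions for the example; the claim does not fix any of them:

```python
import numpy as np

def gradation_to_target(image, vertebral_mask, target=0.5):
    """Adjust gradation so the feature amount (here, the mean) of the
    vertebral body pixels becomes the target value.

    image: 2-D float array of pixel values in [0, 1].
    vertebral_mask: boolean array marking the extracted vertebral region.
    """
    feature = image[vertebral_mask].mean()   # feature amount of the region
    shifted = image + (target - feature)     # move the feature onto the target
    return np.clip(shifted, 0.0, 1.0)        # keep the display range
```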
2. The image processing apparatus according to claim 1, wherein the first threshold is, for each first profile generated for each small region, a value obtained by multiplying the minimum value of the first profile by a fixed ratio, or a value obtained by adding a standard deviation of the first profile to the minimum value of the first profile.
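The extraction described in claims 1 and 2 can be sketched for a single small region as follows. Using the column mean as the feature value and the minimum-plus-standard-deviation variant of the first threshold are assumed choices among the alternatives the claims allow:

```python
import numpy as np

def extract_from_small_region(small_region):
    """Build a first profile of column-wise feature values along the body
    axis, then keep the columns whose profile value is at or below a
    minimum-based first threshold (claim 2's min + std variant).

    small_region: 2-D array, rows along the body axis direction.
    Returns the column indices treated as the vertebral body region.
    """
    profile = small_region.mean(axis=0)               # first profile
    first_threshold = profile.min() + profile.std()   # minimum-based threshold
    return np.flatnonzero(profile <= first_threshold)
```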
3. The image processing apparatus according to claim 1 or 2, further comprising:
a subject region extraction unit that extracts, from the chest X-ray image, a subject region in which the subject is imaged by removing an aperture region, in which an X-ray diaphragm provided in the X-ray diagnostic imaging apparatus used to capture the chest X-ray image is imaged, and a direct ray region, on which X-rays irradiated from the apparatus are incident without passing through the subject; and
a torso region extraction unit that generates, for the subject region, a second profile indicating a distribution of the number of pixels arranged in the body axis direction and a third profile indicating a distribution of the number of pixels arranged in a direction orthogonal to the body axis direction, and extracts, as a torso region excluding the areas in which the limbs of the subject are imaged, a region having a number of pixels equal to or greater than a second threshold set for the second profile and a number of pixels equal to or greater than a third threshold set for the third profile,
wherein the vertebral body region extraction unit sets the vertebral body distribution region with respect to the torso region.
4. The image processing apparatus according to claim 3, wherein the second profile is a graph obtained by plotting, along the body axis direction, the number of pixels whose pixel values are detected among the pixels arranged in the direction orthogonal to the body axis in the subject region image, and the second threshold is a value obtained by dividing the area of the graph of the second profile by the image size of the subject region image in the body axis direction; and the third profile is a graph obtained by plotting, along the direction orthogonal to the body axis, the number of pixels whose pixel values are detected among the pixels arranged in the body axis direction, and the third threshold is the smaller of a value obtained by dividing the area of the graph of the third profile by the image size of the subject region image in the direction orthogonal to the body axis and half the image size of the subject region image in the body axis direction.
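The torso extraction of claims 3 and 4 can be sketched on a binary subject mask as follows. The per-row and per-column pixel counts follow the claims' profile definitions; combining the two thresholded directions by intersection is an implementation assumption for the example:

```python
import numpy as np

def extract_torso(subject_mask):
    """Extract the torso region from a boolean subject mask, dropping
    limb areas via the second and third profiles.

    Rows index the body axis direction. The second profile counts
    detected pixels per row; the third counts per column. Thresholds use
    the graph-area-over-image-size rule, with the third capped at half
    the body-axis image size, as in claim 4.
    """
    h, w = subject_mask.shape
    p2 = subject_mask.sum(axis=1)          # second profile (per row)
    p3 = subject_mask.sum(axis=0)          # third profile (per column)
    t2 = p2.sum() / h                      # graph area / body-axis size
    t3 = min(p3.sum() / w, h / 2)          # smaller of mean and half height
    rows = p2 >= t2
    cols = p3 >= t3
    return np.outer(rows, cols) & subject_mask
```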
5. The image processing apparatus according to claim 3 or 4, wherein the vertebral body region extraction unit generates a fourth profile indicating a distribution of feature values of the pixel values of pixel columns arranged in the body axis direction in the torso region, extracts, as a vertebral body center region, pixels corresponding to pixel values equal to or lower than a fourth threshold determined based on a minimum value of the fourth profile, and sets a region including the vertebral body center region as the vertebral body distribution region.
6. The image processing apparatus according to claim 5, wherein the fourth threshold is a value obtained by multiplying the minimum value of the fourth profile by a fixed ratio, or a value obtained by adding a standard deviation of the fourth profile to the minimum value of the fourth profile.
7. The image processing apparatus according to claim 5 or 6, wherein the vertebral body region extraction unit sets, as the vertebral body distribution region, a region obtained by adding to and subtracting from the vertebral body center region a width obtained by multiplying the torso region width by a fixed ratio.
8. An image processing program for causing a computer to execute:
a step of, for a chest X-ray image obtained by imaging the chest of a subject, setting a vertebral body distribution region, which is a region in which the vertebral bodies of the subject are imaged, based on at least one of a pixel value and a pixel position in the chest X-ray image, dividing the vertebral body distribution region into a plurality of small regions arranged along a body axis direction of the chest X-ray image, generating, for each small region, a first profile indicating a distribution of feature values of the pixel values of pixel columns arranged along the body axis direction, and extracting from each small region, as the vertebral body region, pixels corresponding to pixel values equal to or lower than a first threshold determined based on a minimum value of the first profile;
a step of calculating a feature amount of the pixel values of the extracted vertebral body region; and
a step of performing gradation processing so that the feature amount of the pixel values of the vertebral body region becomes a target value.
9. An X-ray diagnostic imaging apparatus comprising:
the image processing apparatus according to any one of claims 1 to 8;
an X-ray generation unit;
an X-ray detection unit that detects X-rays generated from the X-ray generation unit; and
an image display unit that displays an X-ray image,
wherein the image processing apparatus performs the gradation processing on the image output from the X-ray detection unit, and the image display unit displays the X-ray image subjected to the gradation processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010285012A JP5635389B2 (en) | 2010-12-21 | 2010-12-21 | Image processing apparatus, image processing program, and X-ray diagnostic imaging apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010285012A JP5635389B2 (en) | 2010-12-21 | 2010-12-21 | Image processing apparatus, image processing program, and X-ray diagnostic imaging apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2012130518A JP2012130518A (en) | 2012-07-12 |
JP5635389B2 true JP5635389B2 (en) | 2014-12-03 |
Family
ID=46646858
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2010285012A Expired - Fee Related JP5635389B2 (en) | 2010-12-21 | 2010-12-21 | Image processing apparatus, image processing program, and X-ray diagnostic imaging apparatus |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP5635389B2 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111918610A (en) * | 2018-05-16 | 2020-11-10 | 松下电器产业株式会社 | Gradation conversion method for chest X-ray image, gradation conversion program, gradation conversion device, server device, and conversion method |
JP7143794B2 (en) * | 2019-03-15 | 2022-09-29 | コニカミノルタ株式会社 | Image processing device, image processing system and program |
CN111832349A (en) | 2019-04-18 | 2020-10-27 | 富士通株式会社 | Method and device for identifying error detection of carry-over object and image processing equipment |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000023950A (en) * | 1998-07-10 | 2000-01-25 | Konica Corp | Image processing device for radiation image |
JP4756753B2 (en) * | 2001-03-09 | 2011-08-24 | キヤノン株式会社 | Image processing apparatus, method, and program |
JP2007068715A (en) * | 2005-09-06 | 2007-03-22 | Canon Inc | Medical image processing apparatus and method |
JP2009195277A (en) * | 2008-02-19 | 2009-09-03 | Fujifilm Corp | Median line detection device, method, and program |
2010-12-21: JP application JP2010285012A filed (patent JP5635389B2/en), status: not active, Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
JP2012130518A (en) | 2012-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5393245B2 (en) | Image processing apparatus, image processing apparatus control method, X-ray image capturing apparatus, and X-ray image capturing apparatus control method | |
US9836842B2 (en) | Image processing apparatus and image processing method | |
KR101493375B1 (en) | Image processing apparatus, image processing method, and computer-readable storage medium | |
US9619893B2 (en) | Body motion detection device and method | |
JP6815818B2 (en) | Radiation imaging system and radiography imaging method | |
JP4785133B2 (en) | Image processing device | |
WO2017126036A1 (en) | Image processing apparatus, image processing method, and image processing program | |
WO2013073627A1 (en) | Image processing device and method | |
US9595116B2 (en) | Body motion detection device and method | |
JP5635389B2 (en) | Image processing apparatus, image processing program, and X-ray diagnostic imaging apparatus | |
WO2011039954A1 (en) | Information processing apparatus, method, and program | |
JP4847041B2 (en) | X-ray equipment | |
JP2019115558A (en) | Radiography apparatus, image processing apparatus, and image determination method | |
JP5686661B2 (en) | X-ray equipment | |
JP6371515B2 (en) | X-ray image processing apparatus, X-ray image processing method, and program | |
JP2010005373A (en) | Radiographic image correction method, apparatus and program | |
JP2004152043A (en) | Method for correcting difference image, and image processor | |
JP2005136594A (en) | Image processing apparatus and control method thereof | |
JP2009285145A (en) | Radiographic image correction device, method, and program | |
JP2014236842A (en) | X-ray image diagnosis apparatus, image processing method and image processing apparatus | |
JP2010172560A (en) | Radiographic imaging apparatus and image processor | |
JP2009201535A (en) | X-ray moving image photographing system | |
JP2005167773A (en) | Image processing method and device | |
JP2011030753A (en) | Image processor and image processing method | |
CN107613870B (en) | Image processing apparatus and image processing program |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
2013-12-02 | A621 | Written request for application examination | JAPANESE INTERMEDIATE CODE: A621 |
2014-07-11 | A977 | Report on retrieval | JAPANESE INTERMEDIATE CODE: A971007 |
2014-07-29 | A131 | Notification of reasons for refusal | JAPANESE INTERMEDIATE CODE: A131 |
2014-09-12 | A521 | Request for written amendment filed | JAPANESE INTERMEDIATE CODE: A523 |
2014-10-07 | A01 | Written decision to grant a patent or to grant a registration (utility model) | JAPANESE INTERMEDIATE CODE: A01 |
2014-10-16 | A61 | First payment of annual fees (during grant procedure) | JAPANESE INTERMEDIATE CODE: A61 |
 | S111 | Request for change of ownership or part of ownership | JAPANESE INTERMEDIATE CODE: R313111 |
 | S533 | Written request for registration of change of name | JAPANESE INTERMEDIATE CODE: R313533 |
 | R350 | Written notification of registration of transfer | JAPANESE INTERMEDIATE CODE: R350 |
 | LAPS | Cancellation because of no payment of annual fees | |