JP2009211138A - Target area extraction method, device and program - Google Patents

Target area extraction method, device and program

Info

Publication number
JP2009211138A
JP2009211138A (application JP2008050615A)
Authority
JP
Japan
Prior art keywords
pixel
region
point
contour
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2008050615A
Other languages
Japanese (ja)
Other versions
JP4964171B2 (en)
Inventor
Yuanzhong Li
元中 李
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Priority to JP2008050615A priority Critical patent/JP4964171B2/en
Publication of JP2009211138A publication Critical patent/JP2009211138A/en
Application granted granted Critical
Publication of JP4964171B2 publication Critical patent/JP4964171B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

PROBLEM TO BE SOLVED: To further improve the performance of extracting a target region from an input image.

SOLUTION: For each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of the target region and is identifiable from the pixel value distribution of its neighborhood, is known, the pixel value distribution of the neighborhood is machine-learned in advance, and the reference points in the input image are detected based on the result of the machine learning. An arbitrary point C is set inside the target region of the input image P, and a discrimination region T believed to contain the entire target region is set in the input image P. The contour-likeness of each pixel in the set discrimination region T is calculated based on the pixel value information of its neighboring pixels. Based on the set arbitrary point C, the detected reference points, and the calculated contour-likeness of each pixel, the target region, which has a contour passing through the reference points and contains the arbitrary point C, is extracted from the set discrimination region T.

COPYRIGHT: (C)2009,JPO&INPIT

Description

The present invention relates to a method, an apparatus, and a program for extracting a target region from an input image.

Conventionally, in the medical field, in order to provide images of high diagnostic value, processing has been performed to extract and display a predetermined target region, such as a lesion region or an organ region, in a medical image.

As a method for extracting a target region from an input image, a method is known in which, as disclosed for example in Patent Document 1, the likelihood that each pixel of the input image belongs to the target region is calculated based on information about a point in the target region and a point in the background region specified by the user, the likelihood that adjacent pixels belong to the same region is calculated based on local brightness differences (edge information) in the image, and the target region is then extracted and distinguished from the background region using both of these likelihoods.

Patent Document 2 proposes a method for improving the extraction accuracy of the target region: a plurality of sample images containing the target region are prepared; the pixel value information of the neighborhood of each pixel that is known to represent, or not to represent, the contour in the sample images is machine-learned in advance; the contour-likeness of each pixel in the input image is then calculated using the pixel value information of its neighborhood; and the target region in the input image is extracted and distinguished from the background region using the calculated contour-likeness.

Patent Document 1: U.S. Patent No. 6,973,212. Patent Document 2: JP 2007-307358 A.

In the above prior art, however, the target region is extracted based on the local pixel value distribution of the image, for example by calculating the contour-likeness of each pixel of the input image from the pixel value information of its neighborhood. Consequently, when a contour-like pixel value distribution exists inside or outside the target region, the target region may not be extracted correctly.

For example, when a specific organ region is extracted from a medical image, a contour-like pixel value distribution may exist inside or outside the organ region, for instance because a tumor is present within the organ region or noise is present in the medical image. In such cases the pixel value distribution caused by the tumor or noise may be mistaken for the contour of the organ region, and the contour of the organ region may not be extracted correctly.

In view of the above circumstances, an object of the present invention is to provide a target region extraction method, apparatus, and program capable of further improving the performance of extracting a target region.

The target region extraction method of the present invention is a method of extracting a target region from an input image, comprising: a detection step of machine-learning in advance, for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of a target region of the same type as said target region and is identifiable based on the pixel value distribution of its neighborhood, is known, the pixel value distribution of the neighborhood, and detecting the reference point in the input image based on the result of the machine learning; a point setting step of setting an arbitrary point within the target region of the input image; a region setting step of setting, in the input image, a discrimination region believed to contain the entire target region; a calculation step of calculating the contour-likeness of each pixel in the set discrimination region based on the pixel value information of its neighboring pixels; and an extraction step of extracting from the set discrimination region, based on the set arbitrary point, the detected reference point, and the calculated contour-likeness of each pixel, the target region having a contour that passes through the reference point and containing the arbitrary point.

In the above method, the calculation step may machine-learn in advance, for each of the pixels representing points on the contour and the pixels representing points other than the contour in a plurality of sample images in which the contour of the target region is known, the pixel value information of the neighboring pixels, and calculate the contour-likeness of each pixel based on the result of the machine learning.

The target region extraction apparatus of the present invention is an apparatus for extracting a target region from an input image, comprising: detection means for machine-learning in advance, for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of a target region of the same type as said target region and is identifiable based on the pixel value distribution of its neighborhood, is known, the pixel value distribution of the neighborhood, and detecting the reference point in the input image based on the result of the machine learning; point setting means for setting an arbitrary point within the target region of the input image; region setting means for setting, in the input image, a discrimination region believed to contain the entire target region; calculation means for calculating the contour-likeness of each pixel in the set discrimination region based on the pixel value information of its neighboring pixels; and region extraction means for extracting from the set discrimination region, based on the set arbitrary point, the detected reference point, and the calculated contour-likeness of each pixel, the target region having a contour that passes through the reference point and containing the arbitrary point.

In the above apparatus, the calculation means may machine-learn in advance, for each of the pixels representing points on the contour and the pixels representing points other than the contour in a plurality of sample images in which the contour of the target region is known, the pixel value information of the neighboring pixels, and calculate the contour-likeness of each pixel based on the result of the machine learning.

The target region extraction program of the present invention is a program for extracting a target region from an input image, which causes a computer to function as: detection means for machine-learning in advance, for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of a target region of the same type as said target region and is identifiable based on the pixel value distribution of its neighborhood, is known, the pixel value distribution of the neighborhood, and detecting the reference point in the input image based on the result of the machine learning; point setting means for setting an arbitrary point within the target region of the input image; region setting means for setting, in the input image, a discrimination region believed to contain the entire target region; calculation means for calculating the contour-likeness of each pixel in the set discrimination region based on the pixel value information of its neighboring pixels; and region extraction means for extracting from the set discrimination region, based on the set arbitrary point, the detected reference point, and the calculated contour-likeness of each pixel, the target region having a contour that passes through the reference point and containing the arbitrary point.

According to the target region extraction method, apparatus, and program of the present invention, an arbitrary point is set within the target region of the input image, a discrimination region believed to contain the entire target region is set in the input image, the contour-likeness of each pixel in the set discrimination region is calculated based on the pixel value information of its neighboring pixels, and the target region is extracted from the input image based on the set arbitrary point and the calculated contour-likeness of each pixel. In doing so, the pixel value distribution of the neighborhood is machine-learned in advance for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of a target region of the same type and is identifiable based on the pixel value distribution of its neighborhood, is known; the reference point in the input image is detected based on the result of the machine learning; and the extraction of the target region is further based on this detection result. Therefore, even when a contour-like pixel value distribution exists inside or outside the target region, the contour of the target region can be determined so as to reliably pass through the reference point detected as a point on the correct contour of the target region, and the extraction performance for the target region can be further improved.

Hereinafter, an embodiment in which the target region extraction apparatus of the present invention is applied to the extraction of a liver region from a medical image will be described with reference to the drawings. The configuration of the target region extraction apparatus 1 shown in FIG. 1 is realized by executing, on a computer (for example, a personal computer), a target region extraction program read into an auxiliary storage device. The target region extraction program is stored on an information storage medium such as a CD-ROM, or distributed via a network such as the Internet, and installed on the computer.

As shown in FIG. 1, the target region extraction apparatus 1 extracts a liver region from a medical image P acquired by a CT apparatus or the like, and comprises a reference point detection unit 10, a point setting unit 20, a region setting unit 30, a contour-likeness calculation unit 40, a region extraction unit 50, and so on.

The reference point detection unit 10 machine-learns in advance, for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of the liver region and is identifiable based on the pixel value distribution of its neighborhood, is known, the pixel value distribution of the neighborhood, and detects the reference point in the medical image P based on the result of the machine learning. It comprises a classifier acquisition unit 11 and a detection unit 12.

Here, the reference points are determined according to the type of target region to be extracted from the input image, and their number is not limited. In the present embodiment, as shown in FIG. 2 for example, one or both of points A and B, which lie at angular locations on the otherwise smooth curve forming the contour of the liver region, are used as reference points.

When extracting the heart region, in which the subject's heart is imaged, from a medical image, among the points on the contour of the heart region, for example the point at the boundary where the heart branches into the aorta, or the sharpest point at the bottom of the heart, can be used as a reference point.

The classifier acquisition unit 11 prepares a plurality of sample images containing the liver region and, as described for example in JP 2006-139369 A, machine-learns in advance the pixel value distribution of the neighborhood of each of the pixels representing the reference point and the pixels representing points other than the reference point in the sample images, thereby acquiring a classifier D that identifies, based on the pixel value distribution of the neighborhood of a pixel, whether that pixel represents the reference point. The classifier D acquired in this way can be applied to identify whether each pixel of an arbitrary medical image represents the reference point. When the liver region is extracted using two or more reference points (for example, reference points A and B), a classifier D is acquired for each reference point.
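As a concrete illustration of this learning step, here is a minimal sketch, assuming scikit-learn's AdaBoostClassifier as a stand-in for the AdaBoost-style learner named later in the text, and a raw flattened neighborhood patch as the learned pixel value distribution; the patch size, the random sampling of non-reference points, and all function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

PATCH = 7  # assumed neighborhood size; the patent fixes no particular size here

def neighborhood_features(image: np.ndarray, y: int, x: int) -> np.ndarray:
    """Flatten the pixel value distribution of the neighborhood around (y, x)."""
    h = PATCH // 2
    return image[y - h:y + h + 1, x - h:x + h + 1].astype(np.float32).ravel()

def train_reference_point_classifier(samples):
    """samples: iterable of (image, (y, x)) pairs with the known reference point."""
    X, t = [], []
    rng = np.random.default_rng(0)
    for image, (ry, rx) in samples:
        X.append(neighborhood_features(image, ry, rx))  # the reference point itself
        t.append(1)
        for _ in range(20):  # points other than the reference point, drawn at random
            y = int(rng.integers(PATCH, image.shape[0] - PATCH))
            x = int(rng.integers(PATCH, image.shape[1] - PATCH))
            if (y, x) != (ry, rx):
                X.append(neighborhood_features(image, y, x))
                t.append(0)
    clf = AdaBoostClassifier(n_estimators=100)
    clf.fit(np.stack(X), np.array(t))
    return clf  # plays the role of classifier D; acquire one per reference point
```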

The neighborhood of each pixel is desirably large enough that whether the pixel is the reference point can be identified from the pixel value distribution of the neighborhood, such as the direction and magnitude of the pixel value changes at each pixel within it. The neighborhood may be a region centered on the pixel, or a region containing the pixel at an off-center position.

FIG. 2 shows an example of the neighborhoods R_A and R_B of the reference points A and B. Although rectangular neighborhoods are illustrated here, the neighborhoods may have various other shapes, such as circles or ellipses. Alternatively, only the pixel value distribution of some of the pixels within the neighborhood may be used for the machine learning.

The process of acquiring the classifier D can use the AdaBoost algorithm, a neural network, an SVM (support vector machine), or the like.

The detection unit 12 detects the reference points A and B in the medical image P by scanning the classifier D acquired by the classifier acquisition unit 11 over the medical image P. If a discrimination region T believed to contain the entire target region has already been set by the region setting unit 30 (described later) before the detection unit 12 detects the reference points, the reference points in the medical image P may be detected by scanning the classifier D only over the set discrimination region T, or over a partial region containing it, rather than over the entire medical image P.
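The scanning itself can be pictured as an exhaustive sliding-window search. A sketch, continuing the previous one (it reuses PATCH and neighborhood_features) and assuming the classifier's decision score is used to pick the single best-scoring position:

```python
def detect_reference_point(clf, image, region=None):
    """Scan classifier D over the image, or only over a (y0, y1, x0, x1) region."""
    import numpy as np
    h = PATCH // 2
    y0, y1, x0, x1 = region if region is not None else (
        h, image.shape[0] - h, h, image.shape[1] - h)
    best_score, best_pos = -np.inf, None
    for y in range(max(y0, h), min(y1, image.shape[0] - h)):
        for x in range(max(x0, h), min(x1, image.shape[1] - h)):
            score = clf.decision_function([neighborhood_features(image, y, x)])[0]
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos  # the detected reference point, e.g. A or B
```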

The point setting unit 20 sets an arbitrary point C (a seed point) within the liver region of the medical image P. For example, it may set as the arbitrary point a position on the medical image P specified by operator input via a keyboard, pointing device, or the like provided with the target region extraction apparatus 1; alternatively, treating each point of a liver region automatically detected by a conventional target region detection method as carrying a constant mass, it may set the centroid of that region as the arbitrary point.

If the reference points A and B have been detected by the reference point detection unit 10 before the point setting unit 20 sets the arbitrary point C, the arbitrary point C (x_C, y_C) may be set from the reference point A (x_A, y_A) and the reference point B (x_B, y_B) by the following equation (1), which reflects the anatomical positional relationship between the liver region and the reference points A and B.

[Equation (1) is reproduced only as an image in the original publication.]

The arbitrary point C may be set at the approximate center of the liver region, or at a position away from the center of the liver region.
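Because equation (1) survives only as an image, its exact coefficients are unknown here. Purely as an illustration of the idea, a sketch that sets C as an affine combination of the detected reference points, with the weights and offset as openly assumed placeholders:

```python
def seed_point_from_references(A, B, wa=0.5, wb=0.5, offset=(0.0, 0.0)):
    """Hypothetical stand-in for equation (1): an affine combination of A and B.

    The weights wa, wb and the offset are illustrative assumptions only; the
    published equation (1) is available solely as an image.
    """
    (ya, xa), (yb, xb) = A, B
    return (wa * ya + wb * yb + offset[0], wa * xa + wb * xb + offset[1])
```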

The region setting unit 30 sets, in the medical image P, a discrimination region T believed to contain the entire liver region. For example, it may set as the discrimination region T a region on the medical image P specified by operator input via a keyboard, pointing device, or the like provided with the target region extraction apparatus 1, or it may automatically set as the discrimination region T a region, centered on the point C set by the point setting unit 20, at least as large as any plausible liver region. This limits the region of interest within the medical image P as a whole and speeds up the subsequent processing.

The discrimination region T can have various peripheral shapes, such as a rectangle, a circle, or an ellipse.

The contour-likeness calculation unit 40 calculates the contour-likeness of each pixel in the discrimination region T set by the region setting unit 30, based on the pixel value information of its neighboring pixels, and comprises an evaluation function acquisition unit 41 and a calculation unit 42.

The evaluation function acquisition unit 41 prepares a plurality of sample images containing the liver region and machine-learns in advance, as a feature quantity of each pixel, the pixel value information of the neighboring pixels of each of the pixels representing points on the contour and the pixels representing points other than the contour in the sample images, thereby acquiring an evaluation function F that evaluates, based on the feature quantity, whether each pixel of a sample image represents the contour.

Specifically, as described for example in JP 2007-307358 A, weak classifiers that determine whether a pixel represents the contour, using the pixel value information of its neighboring pixels (for example, combinations of the pixel values of several different pixels within a region of 5 pixels horizontally by 5 pixels vertically centered on that pixel), are generated one after another until the evaluation function F combining all the weak classifiers can evaluate, with the desired performance, whether each pixel of a sample image represents the contour.

The evaluation function F acquired in this way can be applied to evaluate whether each pixel of an arbitrary medical image represents the contour of the liver region.

The process of acquiring the evaluation function F can likewise use a machine learning technique such as the AdaBoost algorithm, a neural network, or an SVM (support vector machine).

The calculation unit 42 uses the evaluation function F to calculate, from the feature quantity of each pixel in the discrimination region T, the contour-likeness of the pixel, that is, an evaluation value of whether the pixel represents the contour.
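For concreteness, a sketch of how such an evaluation function F might be trained and applied, assuming pairwise differences of the 25 pixel values in the 5-by-5 neighborhood as the "combinations" of pixel values and scikit-learn's AdaBoostClassifier as the boosted committee of weak classifiers; both choices are illustrative assumptions, not the patent's exact construction.

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import AdaBoostClassifier

def contour_features(image, y, x):
    """All pairwise differences of the 25 pixel values in the 5x5 neighborhood."""
    patch = image[y - 2:y + 3, x - 2:x + 3].astype(np.float32).ravel()
    return np.array([patch[i] - patch[j] for i, j in combinations(range(25), 2)])

def train_evaluation_function(samples):
    """samples: iterable of (image, set of (y, x) contour points)."""
    X, t = [], []
    rng = np.random.default_rng(0)
    for image, contour in samples:
        for (y, x) in contour:                      # points on the contour
            X.append(contour_features(image, y, x))
            t.append(1)
        for _ in range(len(contour)):               # points other than the contour
            y = int(rng.integers(2, image.shape[0] - 2))
            x = int(rng.integers(2, image.shape[1] - 2))
            if (y, x) not in contour:
                X.append(contour_features(image, y, x))
                t.append(0)
    return AdaBoostClassifier(n_estimators=200).fit(np.stack(X), np.array(t))

def contour_likeness(F, image, y, x):
    """The signed committee score serves as the contour-likeness evaluation value."""
    return float(F.decision_function([contour_features(image, y, x)])[0])
```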

The region extraction unit 50 extracts the liver region from the discrimination region T using the arbitrary point C, the reference points A and B, and the contour-likeness of each pixel. For example, when dividing the discrimination region T into the liver region and the background region by the graph cut method described in Yuri Y. Boykov, Marie-Pierre Jolly, "Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D images", Proceedings of "International Conference on Computer Vision", Vancouver, Canada, July 2001, vol. I, pp. 105-112, and in U.S. Patent No. 6,973,212, it extracts the liver region from the discrimination region T so that the boundary between the liver region and the background region always passes through the reference points A and B, which are points on the contour of the liver region.

Specifically, first, as shown in FIG. 3, a graph is created that consists of: nodes N_ij (i = 1, 2, ...; j = 1, 2, ...) representing the pixels in the discrimination region T; nodes S and T representing the labels each pixel can take (in this embodiment, the liver region and the background region, respectively); n-links, which connect the nodes of adjacent pixels; s-links, which connect each pixel node N_ij to the node S representing the liver region; and t-links, which connect each pixel node N_ij to the node T representing the background region.

For each node N_ij representing a pixel of the discrimination region, there are four n-links running from that node to its four adjacent nodes, so every pair of adjacent nodes is connected by two links, one in each direction. The four links from each node N_ij to its four adjacent nodes represent the likelihood that the pixel represented by the node belongs to the same region as its four adjacent pixels, and this likelihood is determined from the contour-likeness of the pixel. Specifically, when the contour-likeness of the pixel represented by node N_ij is at or below a set threshold, a predetermined maximum likelihood value is assigned to each of those links; when the contour-likeness is above the threshold, a smaller likelihood value is assigned to each link the larger the contour-likeness is. For example, with a maximum likelihood value of 1000, if the contour-likeness of the pixel represented by node N_ij is at or below the set threshold (zero), the four links from the node to its four adjacent nodes are each assigned the value 1000; if the contour-likeness is above the set threshold (zero), each of those links can be assigned the value computed by the expression 1000 − (contour-likeness / maximum contour-likeness) × 1000. Here, the maximum contour-likeness is the largest of all the contour-likeness values calculated by the calculation unit 42 for the pixels in the discrimination region T.
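The capacity rule quoted in this paragraph can be written out directly; the maximum of 1000, the zero threshold, and the normalization by the largest contour-likeness in T come from the text, while treating the score as a plain float is an assumption of this sketch.

```python
def n_link_weight(likeness: float, max_likeness: float,
                  threshold: float = 0.0, cap: float = 1000.0) -> float:
    """Capacity of the four n-links leaving a pixel node, per the rule above."""
    if likeness <= threshold:
        return cap  # not contour-like: strongly tied to its neighbors
    return cap - (likeness / max_likeness) * cap  # contour-like: easy to cut
```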

The s-link connecting each pixel node N_ij to the node S representing the liver region represents the likelihood that the pixel belongs to the liver region, and the t-link connecting each pixel node N_ij to the node T representing the background region represents the likelihood that the pixel belongs to the background region. When it is already known whether a pixel belongs to the liver region or to the background region, these likelihoods are set according to that information.

Specifically, since the arbitrary point C is a pixel set within the liver region, a large likelihood value is assigned, as shown in FIG. 10, to the s-link connecting the node N_33 representing the point C to the node S representing the liver region. Furthermore, since the discrimination region T, set to contain the liver region around an arbitrary point placed within it, normally contains both the liver region and the background region surrounding it, the pixels on the periphery of the discrimination region T are assumed to be background pixels, and a large likelihood value is assigned to the t-links connecting the nodes N_11, N_12, ..., N_15, N_21, N_25, N_31, ... representing those pixels to the node T representing the background region.

As shown in FIG. 4, along the line segments extending from the point C set by the point setting unit 20 through the reference points A and B, the pixels lying between the reference point A and the point C, and between the reference point B and the point C, can be judged to lie inside the liver region, so a large likelihood value is assigned to the s-links connecting the nodes representing those pixels to the node S representing the liver region. The pixels lying on the portion extending from the reference point A away from the point C, and on the portion extending from the reference point B away from the point C, can be judged to lie outside the liver region, so a large likelihood value is assigned to the t-links connecting the nodes representing those pixels to the node T representing the background region.
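A sketch of this ray-based seeding, assuming coarse sampling of points along the line from C through a reference point R: pixels strictly between C and R become foreground seeds, pixels strictly beyond R become background seeds. The sampling density and all names are illustrative assumptions.

```python
import numpy as np

def ray_seeds(C, R, shape, n=50, extend=1.5):
    """Seeds along the ray from seed point C through reference point R."""
    C, R = np.asarray(C, dtype=float), np.asarray(R, dtype=float)
    fg, bg = [], []
    for t in np.linspace(0.0, extend, n):
        p = tuple(np.round(C + t * (R - C)).astype(int))
        if not (0 <= p[0] < shape[0] and 0 <= p[1] < shape[1]):
            break
        if t < 1.0:
            fg.append(p)   # between C and the reference point: inside the liver
        elif t > 1.0:
            bg.append(p)   # beyond the reference point: outside the liver
    return fg, bg
```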

Since the liver region and the background region are mutually exclusive, the node S is separated from the node T by cutting appropriate links among all the n-links, s-links, and t-links, for example as shown by the dotted line in FIG. 5; this divides the discrimination region T into the liver region and the background region, and the liver region is extracted. An optimal division of the region is obtained by making the cut so that the total likelihood of all the cut n-links, s-links, and t-links is minimized.
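A sketch of the cut itself, assuming the third-party PyMaxflow library (the patent names only the graph-cut formulation, not an implementation). The n-link capacities follow the rule quoted earlier, and the seeds from the point C, the periphery of T, and the reference-point rays enter as large s-link/t-link capacities.

```python
import numpy as np
import maxflow  # third-party PyMaxflow library (pip install PyMaxflow)

def extract_region(likeness, fg_seeds, bg_seeds, cap=1000.0):
    """likeness: per-pixel contour-likeness over T; *_seeds: lists of (y, x)."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(likeness.shape)
    # n-links, using the capacity rule quoted earlier in the text
    weights = np.where(likeness <= 0.0, cap,
                       cap - (likeness / max(likeness.max(), 1e-9)) * cap)
    g.add_grid_edges(nodes, weights=weights, symmetric=True)
    # s-links / t-links: very large capacities pin the seeds to their labels
    src = np.zeros(likeness.shape)
    snk = np.zeros(likeness.shape)
    for (y, x) in fg_seeds:   # point C and pixels between C and the reference points
        src[y, x] = 1e9
    for (y, x) in bg_seeds:   # periphery of T and pixels beyond the reference points
        snk[y, x] = 1e9
    g.add_grid_tedges(nodes, src, snk)
    g.maxflow()  # the minimum cut is the one with the smallest total capacity
    return ~g.get_grid_segments(nodes)  # True where a pixel stays with node S
```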

Although the case of extracting the liver region by the graph cut method has been illustrated here, the liver region may instead be extracted by other techniques, for example by determining the contour of the liver region by dynamic programming as described in JP 2007-307358 A.

Next, an example of the processing performed when the liver region is extracted from the medical image P by the above configuration will be described.

First, the detection unit 12 detects the reference points A and B in the medical image P using the classifiers D_A and D_B, acquired in advance by the classifier acquisition unit 11, which can identify whether each pixel of an arbitrary medical image represents the reference point A or B. Next, the point setting unit 20 sets an arbitrary point C (x_C, y_C) (a seed point) within the target region of the medical image P from the reference point A (x_A, y_A) and the reference point B (x_B, y_B) by the above equation (1). Next, the region setting unit 30 sets, in the medical image P, a discrimination region T believed to contain the entire target region. Next, the calculation unit 42 calculates the contour-likeness of each pixel in the discrimination region T using the evaluation function F, acquired in advance by the evaluation function acquisition unit 41, which can evaluate whether each pixel of an arbitrary medical image represents the contour of the liver region. Finally, the region extraction unit 50 extracts the target region from the discrimination region T, for example by the graph cut method, based on the set arbitrary point C and the calculated contour-likeness of each pixel and so that the contour of the liver region always passes through the reference points A and B, and the processing ends.

According to the above embodiment, an arbitrary point is set within the target region of the input image, a discrimination region believed to contain the entire target region is set in the input image, the contour-likeness of each pixel in the set discrimination region is calculated based on the pixel value information of its neighboring pixels, and the target region is extracted from the input image based on the set arbitrary point and the calculated contour-likeness of each pixel. In doing so, the pixel value distribution of the neighborhood is machine-learned in advance for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of a target region of the same type and is identifiable based on the pixel value distribution of its neighborhood, is known; the reference point in the input image is detected based on the result of the machine learning; and the extraction of the target region is further based on this detection result. Therefore, even when a contour-like pixel value distribution exists inside or outside the target region, the contour of the target region can be determined so as to reliably pass through the reference point detected as a point on the correct contour of the target region, and the extraction performance for the target region can be further improved.

Although the above embodiment describes the case where the target region extraction apparatus 1 of the present invention is applied to extracting a target region from a two-dimensional input image, the invention can also be applied to extracting a target region from a three-dimensional input image.

For example, when determining the contour of the liver region in a three-dimensional medical image, points at angular locations on the otherwise smooth curved surface forming the contour of the liver region are used as reference points, just as when extracting the liver region from a two-dimensional medical image.

The classifier acquisition unit 11 prepares a plurality of three-dimensional sample images containing the liver region and machine-learns in advance the pixel value distribution of the neighborhood of each of the voxels representing the reference point and the voxels representing points other than the reference point in the sample images, thereby acquiring a classifier that identifies, based on the pixel value distribution of the neighborhood of a voxel, whether that voxel represents the reference point. The neighborhood of each voxel is desirably a three-dimensional region large enough that whether the voxel is the reference point can be identified from the pixel value distribution of the neighborhood, such as the direction and magnitude of the pixel value changes at each voxel within it. The detection unit 12 detects the reference point in a three-dimensional medical image by scanning the classifier acquired by the classifier acquisition unit 11 over the medical image.

The point setting unit 20 sets an arbitrary point C in a three-dimensional coordinate system within the liver region of the three-dimensional medical image, and the region setting unit 30 sets, in the three-dimensional medical image, a three-dimensional range believed to contain the entire liver region. This range can have various peripheral shapes, such as a hexahedron or a sphere.

The evaluation function acquisition unit 41 prepares a plurality of three-dimensional sample images containing the liver region and machine-learns in advance the pixel value information of the neighboring voxels of each of the voxels representing points on the contour and the voxels representing points other than the contour in the sample images, thereby acquiring an evaluation function F that can evaluate whether each voxel of an arbitrary three-dimensional medical image represents the contour of the liver region. As the pixel value information of the neighboring voxels, for example, combinations of the pixel values of several different voxels within a cubic region of 5 voxels in the X direction by 5 voxels in the Y direction by 5 voxels in the Z direction centered on the voxel can be used. The calculation unit 42 then uses the evaluation function F to calculate, from the feature quantity of each voxel in the discrimination region T, the contour-likeness of the voxel, that is, an evaluation value of whether the voxel represents the contour.
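The three-dimensional variant changes only the neighborhood shape; a sketch of the 5-by-5-by-5 feature extraction, assuming a NumPy volume indexed as (z, y, x), with everything else in the pipeline unchanged:

```python
import numpy as np

def voxel_contour_features(volume: np.ndarray, z: int, y: int, x: int) -> np.ndarray:
    """Pixel values of the 5x5x5 voxel cube centered on (z, y, x), flattened."""
    return volume[z - 2:z + 3, y - 2:y + 3, x - 2:x + 3].astype(np.float32).ravel()
```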

When dividing the three-dimensional discrimination region T into the liver region and the background region by the three-dimensional graph cut method described, for example, in U.S. Patent No. 6,973,212, the region extraction unit 50 extracts the liver region from the discrimination region T so that the boundary between the liver region and the background region always passes through the reference point detected by the detection unit 12, and the processing ends.

Although the above embodiment describes the case where the target region extraction apparatus 1 of the present invention is applied to extracting a liver region from a medical image, the invention is not limited to this and can be applied to extracting desired regions from various input images, for example target regions such as various organ regions or lesion regions in medical images.

As described above, the number of reference points used to extract a target region and the characteristics of each reference point differ depending on the type of target region to be extracted. Therefore, when two or more kinds of target region are to be extracted selectively, the reference points used to extract each type of target region may be recorded in advance, in association with that type, in recording means provided with the target region extraction apparatus 1; when the target region extraction apparatus 1 extracts a target region, the user selects the target region to be extracted, and the reference points suited to extracting that target region are set automatically.

The various measures described above for determining the contour of the liver region in a medical image can likewise be applied when extracting such regions from various input images.

FIG. 1 is a block diagram showing an embodiment of the target region extraction apparatus of the present invention.
FIG. 2 is a diagram showing an example of the reference points of a liver region and their neighborhoods.
FIG. 3 is a diagram for explaining one method of extracting a target region by the region extraction unit of FIG. 1.
FIG. 4 is a diagram for explaining one method of setting the values of the s-links and t-links based on the positions of the seed point and the reference points.
FIG. 5 is a diagram for explaining one method of extracting a target region by the region extraction unit of FIG. 1.

Explanation of Symbols

1 target region extraction apparatus
10 reference point detection unit
20 point setting unit
30 region setting unit
40 contour-likeness calculation unit
50 region extraction unit
A, B reference points
P medical image
C arbitrary point within the liver region
T discrimination region

Claims (5)

1. A method of extracting a target region from an input image, comprising:
a detection step of machine-learning in advance, for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of a target region of the same type as said target region and is identifiable based on the pixel value distribution of a neighborhood, is known, the pixel value distribution of the neighborhood, and detecting the reference point in the input image based on the result of the machine learning;
a point setting step of setting an arbitrary point within the target region of the input image;
a region setting step of setting, in the input image, a discrimination region believed to contain the entire target region;
a calculation step of calculating the contour-likeness of each pixel in the set discrimination region based on the pixel value information of neighboring pixels of the pixel; and
an extraction step of extracting from the set discrimination region, based on the set arbitrary point, the detected reference point, and the calculated contour-likeness of each pixel, the target region having a contour that passes through the reference point and containing the arbitrary point.

2. The target region extraction method according to claim 1, wherein the calculation step machine-learns in advance, for each of the pixels representing points on the contour and the pixels representing points other than the contour in a plurality of sample images in which the contour of the target region is known, the pixel value information of neighboring pixels, and calculates the contour-likeness of each pixel based on the result of the machine learning.

3. An apparatus for extracting a target region from an input image, comprising:
detection means for machine-learning in advance, for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of a target region of the same type as said target region and is identifiable based on the pixel value distribution of a neighborhood, is known, the pixel value distribution of the neighborhood, and detecting the reference point in the input image based on the result of the machine learning;
point setting means for setting an arbitrary point within the target region of the input image;
region setting means for setting, in the input image, a discrimination region believed to contain the entire target region;
calculation means for calculating the contour-likeness of each pixel in the set discrimination region based on the pixel value information of neighboring pixels of the pixel; and
region extraction means for extracting from the set discrimination region, based on the set arbitrary point, the detected reference point, and the calculated contour-likeness of each pixel, the target region having a contour that passes through the reference point and containing the arbitrary point.

4. The target region extraction apparatus according to claim 3, wherein the calculation means machine-learns in advance, for each of the pixels representing points on the contour and the pixels representing points other than the contour in a plurality of sample images in which the contour of the target region is known, the pixel value information of neighboring pixels, and calculates the contour-likeness of each pixel based on the result of the machine learning.

5. A program for extracting a target region from an input image, the program causing a computer to function as:
detection means for machine-learning in advance, for each of the pixels representing a reference point and the pixels representing points other than the reference point in a plurality of sample images in which the reference point, which lies on the contour of a target region of the same type as said target region and is identifiable based on the pixel value distribution of a neighborhood, is known, the pixel value distribution of the neighborhood, and detecting the reference point in the input image based on the result of the machine learning;
point setting means for setting an arbitrary point within the target region of the input image;
region setting means for setting, in the input image, a discrimination region believed to contain the entire target region;
calculation means for calculating the contour-likeness of each pixel in the set discrimination region based on the pixel value information of neighboring pixels of the pixel; and
region extraction means for extracting from the set discrimination region, based on the set arbitrary point, the detected reference point, and the calculated contour-likeness of each pixel, the target region having a contour that passes through the reference point and containing the arbitrary point.
JP2008050615A 2008-02-29 2008-02-29 Target region extraction method, apparatus, and program Active JP4964171B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008050615A JP4964171B2 (en) 2008-02-29 2008-02-29 Target region extraction method, apparatus, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008050615A JP4964171B2 (en) 2008-02-29 2008-02-29 Target region extraction method, apparatus, and program

Publications (2)

Publication Number Publication Date
JP2009211138A (en) 2009-09-17
JP4964171B2 JP4964171B2 (en) 2012-06-27

Family

ID=41184271

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008050615A Active JP4964171B2 (en) 2008-02-29 2008-02-29 Target region extraction method, apparatus, and program

Country Status (1)

Country Link
JP (1) JP4964171B2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2372660A2 (en) 2010-03-31 2011-10-05 Fujifilm Corporation Projection image generation apparatus and method, and computer readable recording medium on which is recorded program for the same
US8634628B2 (en) 2011-04-19 2014-01-21 Fujifilm Corporation Medical image processing apparatus, method and program
JP2014502176A (en) * 2010-11-02 2014-01-30 Siemens Medical Solutions USA, Inc. Geometric feature automatic calculation method, non-transitory computer-readable medium, and image interpretation system
JP2014513332A (en) * 2011-03-04 2014-05-29 LBT Innovations Limited Method for improving the classification results of a classifier
JP2014120136A (en) * 2012-12-19 2014-06-30 Casio Comput Co Ltd Angle of view adjustment device, method and program
WO2014112339A1 (en) * 2013-01-17 2014-07-24 Fujifilm Corp Region segmenting device, program, and method
JP2018175343A (en) * 2017-04-12 2018-11-15 Fujifilm Corp Medical image processing apparatus, method, and program
KR101955919B1 (en) * 2017-09-21 2019-03-08 재단법인 아산사회복지재단 Method and program for providing tht region-of-interest in image by deep-learing algorithm
JP2019056988A (en) * 2017-09-20 2019-04-11 カシオ計算機株式会社 Contour detection device and contour detection method
WO2020137745A1 (en) * 2018-12-28 2020-07-02 キヤノン株式会社 Image processing device, image processing system, image processing method, and program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6973212B2 (en) * 2000-09-01 2005-12-06 Siemens Corporate Research, Inc. Graph cuts for binary segmentation of n-dimensional images from object and background seeds
JP2007307358A (en) * 2006-04-17 2007-11-29 Fujifilm Corp Method, apparatus and program for image treatment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6973212B2 (en) * 2000-09-01 2005-12-06 Siemens Corporate Research, Inc. Graph cuts for binary segmentation of n-dimensional images from object and background seeds
JP2007307358A * 2006-04-17 2007-11-29 Fujifilm Corp Method, apparatus and program for image processing

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8611988B2 (en) 2010-03-31 2013-12-17 Fujifilm Corporation Projection image generation apparatus and method, and computer readable recording medium on which is recorded program for the same
EP2372660A2 (en) 2010-03-31 2011-10-05 Fujifilm Corporation Projection image generation apparatus and method, and computer readable recording medium on which is recorded program for the same
JP2014502176A (en) * 2010-11-02 2014-01-30 Siemens Medical Solutions USA, Inc. Geometric feature automatic calculation method, non-transitory computer-readable medium, and image interpretation system
JP2016167304A (en) * 2011-03-04 2016-09-15 LBT Innovations Limited Method for improving classification result of classifier
US10037480B2 (en) 2011-03-04 2018-07-31 Lbt Innovations Limited Method for improving classification results of a classifier
JP2014513332A (en) * 2011-03-04 2014-05-29 LBT Innovations Limited Method for improving the classification results of a classifier
US8634628B2 (en) 2011-04-19 2014-01-21 Fujifilm Corporation Medical image processing apparatus, method and program
JP2014120136A (en) * 2012-12-19 2014-06-30 Casio Comput Co Ltd Angle of view adjustment device, method and program
JP2014137744A (en) * 2013-01-17 2014-07-28 Fujifilm Corp Area division device, program and method
US9536317B2 (en) 2013-01-17 2017-01-03 Fujifilm Corporation Region segmentation apparatus, recording medium and method
WO2014112339A1 * 2013-01-17 2014-07-24 Fujifilm Corporation Region segmenting device, program, and method
JP2018175343A (en) * 2017-04-12 2018-11-15 富士フイルム株式会社 Medical image processing apparatus, method, and program
US10846853B2 (en) 2017-04-12 2020-11-24 Fujifilm Corporation Medical image processing apparatus, medical image processing method, and medical image processing program
JP2019056988A (en) * 2017-09-20 2019-04-11 カシオ計算機株式会社 Contour detection device and contour detection method
JP7009864B2 (en) 2017-09-20 2022-01-26 カシオ計算機株式会社 Contour detection device and contour detection method
JP7439842B2 (en) 2017-09-20 2024-02-28 カシオ計算機株式会社 Contour detection device and contour detection method
KR101955919B1 (en) * 2017-09-21 2019-03-08 재단법인 아산사회복지재단 Method and program for providing tht region-of-interest in image by deep-learing algorithm
WO2020137745A1 * 2018-12-28 2020-07-02 Canon Inc. Image processing device, image processing system, image processing method, and program
JP2020109614A (en) * 2018-12-28 2020-07-16 キヤノン株式会社 Image processing apparatus, image processing system, image processing method, and program

Also Published As

Publication number Publication date
JP4964171B2 (en) 2012-06-27

Similar Documents

Publication Title
JP4964171B2 (en) Target region extraction method, apparatus, and program
JP4999163B2 (en) Image processing method, apparatus, and program
US8693753B2 (en) Medical image processing device, method and program
US20220172348A1 (en) Information processing device, information processing method, and storage medium
JP5016603B2 (en) Method and apparatus for automatic and dynamic vessel detection
JP4717935B2 (en) Image processing apparatus and method, and program
US8787642B2 (en) Method, device and computer-readable recording medium containing program for extracting object region of interest
US7574031B2 (en) Nodule boundary detection
KR101899866B1 (en) Apparatus and method for detecting error of lesion contour, apparatus and method for correcting error of lesion contour, and apparatus for inspecting error of lesion contour
JP5263995B2 (en) Network construction apparatus and method, and program
WO2010100858A1 (en) Image processing device and method, and program
US8019139B2 (en) Method and system for processing an image of body tissues
US20170039711A1 (en) System and method for detecting central pulmonary embolism in ct pulmonary angiography images
JP5748636B2 (en) Image processing apparatus and method, and program
CN107633514B (en) Pulmonary nodule peripheral blood vessel quantitative evaluation system and method
US7103203B2 (en) Medical imaging station with a function of extracting a path within a ramified object
US8224057B2 (en) Method and system for nodule feature extraction using background contextual information in chest x-ray images
US8306354B2 (en) Image processing apparatus, method, and program
JP2011054062A (en) Apparatus and method for processing image, and program
US8774496B2 (en) Compound object separation
JP2016195755A (en) Medical image processor, medical image processing method, and medical imaging device
JP2013080389A (en) Vanishing point estimation method, vanishing point estimation device, and computer program
JP7257388B2 (en) Determination of areas of dense lung tissue in lung images
JP6296385B2 (en) Medical image processing apparatus, medical target region extraction method, and medical target region extraction processing program
Santamaria-Pang et al. Cell segmentation and classification via unsupervised shape ranking

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100707

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120306

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120327

R150 Certificate of patent or registration of utility model

Ref document number: 4964171

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150406

Year of fee payment: 3

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
