WO2014050044A1 - Image processing device, method, and program - Google Patents

Image processing device, method, and program

Info

Publication number
WO2014050044A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
image
contour
low
resolution
Prior art date
Application number
PCT/JP2013/005556
Other languages
French (fr)
Japanese (ja)
Inventor
Yuanzhong Li (元中 李)
Original Assignee
FUJIFILM Corporation (富士フイルム株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation (富士フイルム株式会社)
Priority to CA2886092A priority Critical patent/CA2886092A1/en
Priority to EP13841776.1A priority patent/EP2901933A4/en
Priority to CN201380050478.9A priority patent/CN104717925A/en
Priority to BR112015006523A priority patent/BR112015006523A2/en
Publication of WO2014050044A1 publication Critical patent/WO2014050044A1/en
Priority to US14/665,365 priority patent/US20150193943A1/en
Priority to IN2613DEN2015 priority patent/IN2015DN02613A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/60 Memory management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/162 Segmentation; Edge detection involving graph-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10104 Positron emission tomography [PET]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20072 Graph-based image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30056 Liver; Hepatic

Definitions

  • the present invention relates to an image processing apparatus and method for extracting a region from an image using a graph cut method, and a program.
  • Patent Document 1 proposes a method of extracting a specific region of an organ from the many two-dimensional images constituting a three-dimensional image and stacking the extracted regions to generate a three-dimensional image of the specific region.
  • However, since the method described in Patent Document 1 extracts regions from two-dimensional images, the extraction range may differ slightly from one two-dimensional image to the next, so the extraction accuracy of the resulting three-dimensional region is not very good.
  • a graph cut method is known as a method for extracting a desired region from an image (see Non-Patent Document 1).
  • In the graph cut method, a graph is created that consists of nodes Nij representing the pixels in the image, nodes S and T representing the target region and the background region, n-links connecting the nodes of adjacent pixels, and s-links and t-links connecting each pixel node Nij to the target-region node S and the background-region node T, respectively. Whether each pixel belongs to the target region or the background region is expressed by the thickness (value) of its s-link, t-link, and n-links, and the image is divided into the target region and the background region according to the link thicknesses obtained as a result of the computation, thereby extracting the target region from the image.
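The region division just described is a minimum s-t cut, which any max-flow algorithm can compute. As a minimal illustration (not the patent's implementation), the following sketch runs an Edmonds-Karp max-flow on a toy four-pixel graph; all link weights here are invented for the example.

```python
from collections import deque

def max_flow_min_cut(capacity, source, sink):
    """Edmonds-Karp max-flow; returns the set of nodes on the source
    side of the minimum cut (the 'target region' in graph-cut terms)."""
    # Build residual capacities, including zero-capacity reverse edges.
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for an augmenting path from source to sink.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            break
        # Push the bottleneck capacity along the path found.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
    # Nodes still reachable from the source form the source side of the cut.
    seen, q = {source}, deque([source])
    while q:
        u = q.popleft()
        for v, c in residual[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return seen

# Toy 2x2 "image": pixels a, b lean toward the object, c, d toward background.
# s-link / t-link weights encode those likelihoods; n-links tie neighbours.
cap = {
    'S': {'a': 9, 'b': 8, 'c': 1, 'd': 1},
    'a': {'T': 1, 'b': 5, 'c': 2},
    'b': {'T': 1, 'a': 5, 'd': 2},
    'c': {'T': 9, 'a': 2, 'd': 5},
    'd': {'T': 8, 'b': 2, 'c': 5},
    'T': {},
}
object_side = max_flow_min_cut(cap, 'S', 'T')
print(sorted(object_side - {'S'}))  # → ['a', 'b']
```

Pixels whose s-links were strong stay connected to S after the cheapest links are cut, so a and b end up on the object side while c and d fall to the background side.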
  • the present invention has been made in view of the above circumstances, and its object is to shorten the processing time required for extracting a region from an image using the graph cut method and to reduce the amount of memory needed for the computation.
  • An image processing apparatus according to the present invention is an image processing apparatus that extracts a specific region from a processing target image by a graph cut method, comprising: first extraction means for generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by a graph cut method; contour region setting means for setting, in the processing target image, a contour region including the contour of the specific region, based on the result of extracting the specific region; and second extraction means for extracting a region corresponding to the specific region from the contour region by a graph cut method.
  • the contour region setting unit may be a unit that determines the size of the contour region based on the difference in resolution between the low-resolution image and the processing target image.
  • the contour region setting means may be a means for setting a contour region by erosion processing and dilation processing of morphology processing.
  • the second extraction means may increase the value of the s-link in the graph cut method for pixels inside the contour of the specific region within the contour region, and increase the value of the t-link in the graph cut method for pixels outside that contour.
  • An image processing method according to the present invention is an image processing method for extracting a specific region from a processing target image by a graph cut method, in which a low-resolution image of the processing target image is generated, the specific region is extracted from the low-resolution image by a graph cut method, a contour region including the contour of the specific region is set in the processing target image based on the result of extracting the specific region, and a region corresponding to the specific region is extracted from the contour region by a graph cut method.
  • the image processing method according to the present invention may be provided as a program for causing a computer to execute the image processing method.
  • a low-resolution image of the processing target image is generated, and a specific region is extracted from the low-resolution image by the graph cut method.
  • Since the low-resolution image has fewer pixels than the processing target image, the amount of computation and the memory used can be reduced, although the accuracy of the region extraction is not very good.
  • Therefore, a contour region including the contour of the extracted specific region is set in the processing target image, and a region corresponding to the specific region is extracted from that contour region by the graph cut method.
  • That is, the graph cut method is applied only to the low-resolution image and to the contour region of the processing target image to extract the specific region. Compared with applying the graph cut method to the entire processing target image, the processing time can be greatly shortened and the amount of memory needed for the computation can be greatly reduced.
  • FIG. 1 is a schematic block diagram showing the configuration of an image processing apparatus according to an embodiment of the present invention.
  • Diagram for explaining multi-resolution conversion
  • Diagram for explaining the graph cut method
  • Diagram for explaining region division by the graph cut method
  • Diagram showing the contour of the liver region extracted from the low-resolution image
  • Diagram showing the state in which the contour of the liver region is set
  • the configuration of the image processing apparatus 1 shown in FIG. 1 is realized by executing a program read in an auxiliary storage device (not shown) on a computer (for example, a personal computer). Further, this program is stored in an information storage medium such as a CD-ROM or distributed via a network such as the Internet and installed in a computer.
  • the image processing apparatus 1 generates a three-dimensional image M0 from a plurality of two-dimensional images captured by, for example, an X-ray CT apparatus 2, and automatically extracts a specific region contained in the three-dimensional image M0 using the graph cut method. It includes an image acquisition unit 10, a low-resolution image generation unit 12, a first extraction unit 14, a contour region setting unit 16, a second extraction unit 18, and a display control unit 20.
  • an input unit 22 and a display unit 24 are connected to the image processing apparatus 1.
  • the three-dimensional image M0 represents the thorax of the human body, and the specific area is the liver area.
  • the image acquisition unit 10 acquires, for example, a plurality of CT images (two-dimensional images) captured by the X-ray CT apparatus 2, and generates a three-dimensional image M0 from the plurality of two-dimensional images.
  • the image acquisition unit 10 may acquire not only CT images but also two-dimensional images such as so-called MRI images, RI images, PET images, X-ray images, and the like.
  • the X-ray CT apparatus 2 may generate a three-dimensional image M0, and the image acquisition unit 10 may perform only the process of acquiring the three-dimensional image M0.
  • the low-resolution image generation unit 12 performs multi-resolution conversion of the three-dimensional image M0 and sets the three-dimensional multi-resolution image Ms2, which has 1/4 the resolution of the three-dimensional image M0, as the low-resolution image ML.
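The patent does not specify how the multi-resolution conversion is implemented; as one hedged sketch, a 1/4-resolution volume can be produced by block averaging with NumPy (`downsample` is a hypothetical helper name, not from the patent):

```python
import numpy as np

def downsample(volume, factor=4):
    """Block-average a 3-D volume by `factor` along each axis
    (a simple stand-in for the patent's multi-resolution conversion)."""
    d, h, w = (s // factor for s in volume.shape)
    v = volume[:d * factor, :h * factor, :w * factor]
    return v.reshape(d, factor, h, factor, w, factor).mean(axis=(1, 3, 5))

M0 = np.random.rand(64, 64, 64)     # stand-in for the CT volume M0
ML = downsample(M0)                 # 1/4 resolution -> 16 x 16 x 16
print(ML.shape, M0.size // ML.size) # (16, 16, 16) 64
```

The factor-64 drop in voxel count is what makes the first graph cut pass cheap.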
  • the first extraction unit 14 divides the low-resolution image ML into the liver region and the region other than the liver region using the graph cut method, thereby extracting the liver region from the low-resolution image ML. Specifically, the liver region is set as the target region and the region other than the liver region as the background region, a discrimination region having a predetermined pixel size is set at every pixel position in the low-resolution image ML, and the discrimination region is divided into the target region and the background region using the graph cut method.
  • Specifically, a graph is created that consists of nodes Nij representing the pixels in the discrimination region, nodes S and T representing the labels that each pixel can take (in the present embodiment, the target region and the background region), n-links connecting the nodes of adjacent pixels, and s-links and t-links connecting each pixel node Nij to the target-region node S and the background-region node T, respectively.
  • Here, the discrimination region is a 3 × 3 two-dimensional region.
  • An n-link represents, by its thickness, the probability that adjacent pixels belong to the same region, and the value of this probability is determined according to the distance between the adjacent pixels and the difference between their pixel values.
  • The s-link connecting each pixel node Nij to the target-region node S represents the probability that the pixel is included in the target region, and the t-link connecting each pixel node to the background-region node T represents the probability that the pixel is included in the background region. If it is already known whether a pixel belongs to the target region or the background region, these probability values can be set according to that information. If not, the target region is estimated and the s-link value is set based on the histogram of the density distribution of the estimated target region; likewise, the background region is estimated and the t-link value is set based on the histogram of the density distribution of the estimated background region.
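The text only says the link values are set "based on the histogram" of the estimated regions; one common concrete choice (an assumption here, not stated in the patent) is a negative log-likelihood of the intensity histograms:

```python
import numpy as np

def terminal_weights(pixels, fg_samples, bg_samples, bins=32, eps=1e-6):
    """s-link / t-link weights from intensity histograms of estimated
    foreground and background samples (negative log-likelihood form,
    a common convention; the patent only says 'based on the histogram')."""
    lo = min(fg_samples.min(), bg_samples.min(), pixels.min())
    hi = max(fg_samples.max(), bg_samples.max(), pixels.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_fg, _ = np.histogram(fg_samples, bins=edges, density=True)
    p_bg, _ = np.histogram(bg_samples, bins=edges, density=True)
    idx = np.clip(np.digitize(pixels, edges) - 1, 0, bins - 1)
    s_link = -np.log(p_bg[idx] + eps)  # low background likelihood -> strong s-link
    t_link = -np.log(p_fg[idx] + eps)  # low foreground likelihood -> strong t-link
    return s_link, t_link

rng = np.random.default_rng(0)
fg = rng.normal(100, 5, 1000)   # bright organ-like intensities
bg = rng.normal(40, 5, 1000)    # dark background intensities
s, t = terminal_weights(np.array([100.0, 40.0]), fg, bg)
print(s[0] > t[0], t[1] > s[1])  # bright pixel favours object, dark favours background
```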
  • Since the pixels represented by the nodes N11, N12, N21, N22, and N31 are set in the target region, the s-links connecting these nodes to the node S are thick, and the n-links connecting the nodes N11, N12, N21, N22, and N31 to one another are also thick.
  • Since the pixels represented by the nodes N13, N23, N32, and N33 are set in the background region, the t-links connecting these nodes to the node T are thick, and the n-links connecting the nodes N13, N23, N32, and N33 to one another are also thick.
  • Since the target region and the background region are mutually exclusive, the discrimination region can be divided into the target region and the background region by cutting appropriate links among the s-links, t-links, and n-links, as indicated by the broken line in FIG. 4, so that the node S is separated from the node T.
  • Optimal region division is achieved by performing the cut so that the sum of the probability values of all the s-links, t-links, and n-links that are cut is minimized.
  • the first extraction unit 14 divides the region of the low resolution image ML as described above, and extracts the liver region that is the target region from the low resolution image ML.
  • FIG. 5 is a diagram showing the outline of the liver region extracted from the low-resolution image.
  • Note that the liver region is actually extracted from the three-dimensional low-resolution image ML; here, for the sake of explanation, the result of extraction from the low-resolution version of one two-dimensional image constituting the three-dimensional image M0 is shown.
  • the outline of the liver region is indicated by a solid line.
  • the contour region setting unit 16 sets a contour region including the contour of the liver region extracted by the first extraction unit 14 in the three-dimensional image M0.
  • FIG. 6 is a diagram illustrating a state in which the outline of the liver region extracted from the low resolution image ML by the first extraction unit 14 is set in the three-dimensional image M0.
  • the outline of the liver region set in one two-dimensional image constituting the three-dimensional image M0 is shown by a solid line.
  • the low-resolution image ML has 1/4 the resolution of the three-dimensional image M0, so the contour of the liver region extracted from the low-resolution image ML is enlarged four times when it is set in the three-dimensional image M0. For this reason, the set contour does not completely match the contour of the liver region contained in the three-dimensional image M0, and includes irregularities caused by the difference in resolution.
  • the contour region setting unit 16 contracts the contour set in the three-dimensional image M0 inward and expands it outward, and sets the region between the expanded contour and the contracted contour as the contour region E0.
  • the size of the contour region E0 in the width direction is determined from the sizes of the low-resolution image ML and the three-dimensional image M0 by the calculation ((size of the three-dimensional image M0 / size of the low-resolution image ML) + 1) × 2. In the present embodiment this gives (4 + 1) × 2 = 10, so the width of the contour region E0 is determined to be 10 pixels.
  • the method for determining the size of the contour region E0 is not limited to the above method, and any method can be applied as appropriate.
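Under the formula above, the width computation is simple arithmetic; `contour_band_width` is a hypothetical helper name:

```python
def contour_band_width(full_size, low_size):
    """Width in pixels of the contour region E0, per the text:
    ((full size / low size) + 1) * 2."""
    return (full_size // low_size + 1) * 2

# 1/4-resolution case: size ratio 4 -> (4 + 1) * 2 = 10 pixels
print(contour_band_width(4, 1))  # 10
```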
  • The contraction and expansion of the contour are performed by morphology processing. Specifically, an erosion operation that takes the minimum value within a predetermined width centered on each pixel of interest on the contour set in the three-dimensional image M0 is performed using a structuring element such as the one shown in the figure, contracting the contour by one pixel; applying the erosion to the contracted contour shrinks it further. Performing this erosion four times contracts the contour set in the three-dimensional image M0 inward by four pixels.
  • Similarly, a dilation operation that takes the maximum value within a predetermined width centered on each pixel of interest on the contour set in the three-dimensional image M0 expands the contour by one pixel, and applying the dilation to the expanded contour expands it further. Performing this dilation five times expands the contour set in the three-dimensional image M0 outward by five pixels.
  • FIG. 8 is a diagram showing the contour region set in the three-dimensional image M0. Since the contour is contracted four pixels inward by the erosion processing and expanded five pixels outward by the dilation processing, the width of the contour region E0, including the one-pixel contour itself, is 10 pixels.
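A sketch of the band construction with SciPy's binary morphology, assuming the transferred contour is represented as a filled binary mask (the band width obtained this way can differ by a pixel from the text's count of 10, depending on whether the contour layer itself is counted):

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

# Hypothetical binary liver mask: a filled square stands in for the
# region bounded by the contour transferred from the low-resolution result.
mask = np.zeros((40, 40), dtype=bool)
mask[10:30, 10:30] = True

inner = binary_erosion(mask, iterations=4)   # contour shrunk inward
outer = binary_dilation(mask, iterations=5)  # contour expanded outward
band = outer & ~inner                        # contour region E0

# The band straddles the original boundary on both sides.
print(band[13, 20], band[5, 20], band[20, 20])  # True True False
```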
  • the second extraction unit 18 divides the contour region E0 set in the three-dimensional image M0 into the liver region and the region other than the liver region using the graph cut method, and then extracts the entire liver region from the three-dimensional image M0 based on the result of this division.
  • The region inside the contour within the contour region E0 is likely to be the liver region, and the region outside the contour is likely to be the background region. For this reason, when the graph cut method is applied to the contour region E0, the value of the t-link is increased for pixels toward the outside of the set contour, and the value of the s-link is increased for pixels toward the inside of the set contour.
  • Thereby, the contour region E0 can be divided efficiently and accurately into the liver region and the other region.
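The biased second pass can be sketched as a plain adjustment of the terminal weights; `boost` and the helper name are illustrative assumptions, not values from the patent:

```python
import numpy as np

def biased_terminal_weights(s_link, t_link, inside_mask, band_mask, boost=10.0):
    """Second-pass weighting per the text: within the contour region E0,
    raise the s-link for pixels inside the transferred contour and the
    t-link for pixels outside it. `boost` is an illustrative constant."""
    s = s_link.copy()
    t = t_link.copy()
    s[band_mask & inside_mask] += boost    # inside the contour: favour object
    t[band_mask & ~inside_mask] += boost   # outside the contour: favour background
    return s, t

s0 = np.ones((4, 4)); t0 = np.ones((4, 4))
inside = np.zeros((4, 4), bool); inside[:, :2] = True  # left half "inside"
band = np.zeros((4, 4), bool); band[:, 1:3] = True     # band straddles the contour
s, t = biased_terminal_weights(s0, t0, inside, band)
print(s[0, 1], t[0, 2])  # 11.0 11.0
```

Pixels outside the band keep their original weights, so the bias only steers the cut where the low-resolution result is uncertain.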
  • the second extraction unit 18 divides the contour region E0 as described above, and extracts the liver region that is the target region from the contour region E0.
  • FIG. 9 is a diagram showing the contour of the liver region extracted from the contour region E0.
  • Although the liver region is extracted from the entire three-dimensional image M0, FIG. 9 shows, by a solid line, the contour of the liver region extracted in one two-dimensional image constituting the three-dimensional image M0.
  • the contour of the liver region obtained by dividing the contour region E0 of the three-dimensional image M0 smoothly connects the surfaces of the liver.
  • The second extraction unit 18 extracts, as the liver region, the region of the three-dimensional image M0 enclosed by the contour extracted from the contour region E0.
  • the display control unit 20 displays the extracted liver region and the like on the display unit 24.
  • the input unit 22 includes, for example, a keyboard and a mouse, and inputs various instructions from a user such as a radiologist to the image processing apparatus 1.
  • the display unit 24 is composed of, for example, a liquid crystal display, a CRT display, or the like, and displays an image of the extracted liver region or the like as necessary.
  • FIG. 10 is a flowchart showing processing performed in the present embodiment.
  • the image acquisition unit 10 acquires a plurality of CT images from the X-ray CT apparatus 2 and generates a three-dimensional image M0 (step ST1).
  • the low-resolution image generation unit 12 multi-resolution converts the three-dimensional image M0 to generate the low-resolution image ML (step ST2).
  • the first extraction unit 14 extracts the liver region from the low-resolution image ML (step ST3).
  • the contour region setting unit 16 sets the contour of the liver region extracted from the low-resolution image ML in the three-dimensional image M0 and, by performing the erosion processing and dilation processing described above, sets the contour region E0 in the three-dimensional image M0 (step ST4).
  • the second extraction unit 18 extracts the contour of the liver region from the contour region E0 and extracts the liver region from the three-dimensional image M0 (step ST5).
  • the display control unit 20 displays the extracted liver region on the display unit 24 (step ST6), and the process ends.
  • FIG. 11 is a diagram showing the displayed liver region. As shown in FIG. 11, according to the present embodiment, it can be seen that the liver region is accurately extracted.
  • the low-resolution image ML of the three-dimensional image M0 is generated, and a specific region such as a liver region is extracted from the low-resolution image ML by the graph cut method.
  • the low-resolution image ML has a smaller number of pixels than the three-dimensional image M0. For example, when the resolution of the low resolution image ML is 1/4 of the three-dimensional image M0, the number of pixels is 1/64. For this reason, by using the low-resolution image ML, the amount of calculation and the memory used can be reduced, but the accuracy of region extraction is not so good.
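The 1/64 figure follows from cubing the resolution ratio, since the image is three-dimensional:

```python
factor = 4                  # ML has 1/4 the resolution of M0 along each axis
full = 512 ** 3             # voxel count of a hypothetical 512^3 volume M0
low = (512 // factor) ** 3  # voxel count at 1/4 resolution
print(full // low)          # 64: ML has 1/64 the voxels of M0
```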
  • In the present embodiment, the contour region E0 including the contour of the liver region extracted from the low-resolution image ML is therefore set in the three-dimensional image M0, and the liver region is extracted from the contour region E0 by the graph cut method.
  • the size of the contour region E0 is significantly smaller than that of the three-dimensional image M0.
  • the liver region is extracted by applying the graph cut method only to the low-resolution image ML of the three-dimensional image M0 and the contour region E0 of the three-dimensional image M0. Compared with the case where the graph cut method is applied to the entire image M0, the processing time can be greatly shortened, and the amount of memory for calculation can be greatly reduced.
  • In the present embodiment the liver region is extracted from the medical three-dimensional image M0, but the region to be extracted is not limited to this; the present invention can likewise reduce the amount and time of computation when extracting regions of various structures contained in a medical three-dimensional image, such as the brain, heart, lung field, pancreas, spleen, kidney, and blood vessels.
  • In the present embodiment the result of region extraction in the low-resolution image ML, which has 1/4 the resolution of the three-dimensional image M0, is applied directly to the three-dimensional image M0, but the extraction result need not be applied to the three-dimensional image M0 in a single step. The liver region may instead be extracted from the three-dimensional image M0 by repeating, up to the target three-dimensional image M0, the process of setting a contour region in the image one resolution step higher than the low-resolution image and extracting the liver region from that higher-resolution image using the region extraction result of the low-resolution image.
  • a medical three-dimensional image is a processing target, but a medical two-dimensional image may be a processing target.
  • the present invention can be applied not only to region extraction from medical images but also to extracting a region such as a person from an image acquired with a digital camera or the like.
  • Since a large amount of computation is required when extracting a region such as a person using the graph cut method, applying the present invention in such cases also greatly reduces the amount and time of computation.
  • the present invention can be applied when extracting a region from a moving image. Since a moving image is composed of a plurality of frames, it is conceivable to extract a region from each frame. However, the image included in each frame has poor image quality, and the region cannot be extracted with high accuracy.
  • the moving image can be regarded as a three-dimensional image in which a plurality of frames are arranged along the time axis. In this way, when extracting a region from a moving image regarded as a three-dimensional image using the graph cut method, by applying the present invention, it is possible to greatly reduce the processing amount and processing time of calculation, In addition, the region can be extracted from the moving image with high accuracy.
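Treating a moving image as a three-dimensional volume is just a matter of stacking frames along the time axis; sketched here with NumPy on hypothetical frame sizes:

```python
import numpy as np

# 30 hypothetical 48x64 grayscale frames of a moving image
frames = [np.random.rand(48, 64) for _ in range(30)]
volume = np.stack(frames, axis=0)  # time becomes the third axis
print(volume.shape)                # (30, 48, 64)
```

The low-resolution pass and contour-region pass described above can then be applied to `volume` exactly as to a medical three-dimensional image.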

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

[Problem] To shorten the processing time required when extracting a region from an image using a graph cut method, and reduce the amount of memory used for computation. [Solution] An image acquisition unit (10) acquires a plurality of CT images from an X-ray CT device (2) and generates a three-dimensional image (M0). A low-resolution image generation unit (12) converts the resolution of the three-dimensional image (M0) into multiple resolutions and generates a low-resolution image (ML). A first extraction unit (14) uses a graph cut method to extract a specific region such as a liver region from the low-resolution image (ML). A contour region setting unit (16) sets the contour of the liver region extracted from the low-resolution image (ML) in the three-dimensional image (M0), and sets a contour region including said contour in the three-dimensional image (M0). A second extraction unit (18) extracts the contour of the liver region from the contour region, and extracts the liver region from the three-dimensional image (M0).

Description

画像処理装置および方法並びにプログラムImage processing apparatus and method, and program
 本発明は、グラフカット法を用いて画像から領域を抽出する画像処理装置および方法、並びにプログラムに関するものである。 The present invention relates to an image processing apparatus and method for extracting a region from an image using a graph cut method, and a program.
 近年、医療機器(例えば多検出器型CT等)の進歩により質の高い高解像度の3次元画像が画像診断に用いられるようになってきている。ここで、3次元画像は多数の2次元画像から構成され情報量が多いため、医師が所望の観察部位を見つけ診断することに時間を要する場合がある。そこで、注目する臓器を認識し、注目する臓器を含む3次元画像から、例えば最大値投影法(MIP法)および最小値投影法(MinIP法)等の方法を用いて、注目する臓器を抽出してMIP表示等を行ったり、3次元画像のボリュームレンダリング(VR)表示を行ったり、CPR(Curved Planer Reconstruction)表示を行ったりすることにより、臓器全体や病変の視認性を高め診断の効率化を図ることが行われている。 In recent years, high-quality, high-resolution three-dimensional images have come to be used for image diagnosis due to advances in medical equipment (for example, multi-detector CT). Here, since a three-dimensional image is composed of a large number of two-dimensional images and has a large amount of information, it may take time for a doctor to find and diagnose a desired observation site. Therefore, the organ of interest is recognized, and the organ of interest is extracted from a three-dimensional image including the organ of interest using a method such as a maximum value projection method (MIP method) and a minimum value projection method (MinIP method). By performing MIP display, volume rendering (VR) display of 3D images, and CPR (Curved Planer Reconstruction) display, the visibility of the entire organ and lesions can be improved and diagnosis can be made more efficient. Things are going on.
 また、3次元画像から臓器を抽出する種々の手法が提案されている。例えば特許文献1には、3次元画像を構成する多数の2次元画像から臓器の特定の領域を抽出し、抽出した領域を積み上げて、特定の領域の3次元画像を生成する手法が提案されている。 In addition, various methods for extracting an organ from a three-dimensional image have been proposed. For example, Patent Document 1 proposes a method of extracting a specific region of an organ from a number of two-dimensional images constituting a three-dimensional image, and stacking the extracted regions to generate a three-dimensional image of the specific region. Yes.
 しかしながら、特許文献1に記載された手法は、2次元画像から領域を抽出するものであるため、2次元画像のそれぞれにおいて微妙に抽出範囲が異なる場合があることから、3次元画像としてみた場合の領域の抽出精度がそれほどよくない。 However, since the method described in Patent Document 1 extracts a region from a two-dimensional image, the extraction range may be slightly different in each two-dimensional image. The extraction accuracy of the area is not so good.
 一方、画像から所望とする領域を抽出する手法としてグラフカット法が知られている(非特許文献1参照)。グラフカット法は、画像中の各画素を表すノードNij、各画素が対象領域であるか背景領域かを表すノードS,T、隣接する画素のノード同士をつなぐリンクであるn-link、並びに各画素を表すノードNijと対象領域を表すノードSおよび背景領域を表すノードTとをつなぐリンクであるs-linkおよびt-linkから構成されるグラフを作成し、各画素が対象領域の画素であるかまたは背景領域の画素であるかを、それぞれs-link、t-linkおよびn-linkの太さ(値の大きさ)で表し、演算の結果得られるリンクの太さにより対象領域と背景領域とに分割して、画像から対象領域を抽出する手法である。このようなグラフカット法を用いることにより、医用3次元画像に含まれる心臓、肺、肝臓等の領域を精度良く抽出することができる。 On the other hand, a graph cut method is known as a method for extracting a desired region from an image (see Non-Patent Document 1). The graph cut method includes a node Nij representing each pixel in the image, nodes S and T representing whether each pixel is a target region or a background region, an n-link which is a link connecting nodes of adjacent pixels, and each A graph composed of s-link and t-link, which is a link connecting a node Nij representing a pixel, a node S representing a target region, and a node T representing a background region, is created, and each pixel is a pixel of the target region Or a pixel in the background area is represented by the thickness (value size) of s-link, t-link and n-link, respectively, and the target area and the background area depending on the link thickness obtained as a result of the operation This is a technique for extracting a target region from an image. By using such a graph cut method, it is possible to accurately extract regions such as the heart, lungs, and liver included in the medical three-dimensional image.
特開2003-10172号公報JP 2003-10172 A
 しかしながら、グラフカット法を用いて画像から領域を抽出する場合、演算の対象となる画像のサイズが大きいほど、画素数およびリンク数が多くなるため、処理に必要なメモリおよび処理時間が増加する。とくに3次元画像に対してグラフカット法を適用した場合、画素数およびリンク数は2次元画像と比較して指数関数的に増加する。このため、グラフカット法を用いての所望とする領域の抽出に、非常に時間を要することとなる。また、メモリ容量が小さい低スペックの計算機を用いた場合は、グラフカット法を用いての領域の抽出を行うことができない場合があり得る。 However, when extracting a region from an image using the graph cut method, the larger the size of the image to be calculated, the greater the number of pixels and the number of links, and thus the memory and processing time required for processing increase. In particular, when the graph cut method is applied to a three-dimensional image, the number of pixels and the number of links increase exponentially compared to the two-dimensional image. For this reason, it takes a very long time to extract a desired region using the graph cut method. In addition, when a low-spec computer with a small memory capacity is used, it may be impossible to extract a region using the graph cut method.
 The present invention has been made in view of the above circumstances, and an object thereof is to shorten the processing time and reduce the amount of memory required when extracting a region from an image using the graph cut method.
 An image processing apparatus according to the present invention is an image processing apparatus that extracts a specific region from a processing target image by a graph cut method, and is characterized by comprising:
 first extraction means for generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method;
 contour region setting means for setting, in the processing target image, a contour region including the contour of the specific region in the processing target image, based on the extraction result of the specific region; and
 second extraction means for extracting a region corresponding to the specific region from the contour region by the graph cut method.
 In the image processing apparatus according to the present invention, the contour region setting means may determine the size of the contour region based on the difference in resolution between the low-resolution image and the processing target image.
 In the image processing apparatus according to the present invention, the contour region setting means may set the contour region by the erosion and dilation operations of morphology processing.
 In the image processing apparatus according to the present invention, the second extraction means may increase the value of the s-link in the graph cut method for pixels inside the contour of the specific region within the contour region, and increase the value of the t-link in the graph cut method for pixels outside the contour of the specific region within the contour region.
 An image processing method according to the present invention is an image processing method for extracting a specific region from a processing target image by a graph cut method, and is characterized by:
 generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method;
 setting, in the processing target image, a contour region including the contour of the specific region in the processing target image, based on the extraction result of the specific region; and
 extracting a region corresponding to the specific region from the contour region by the graph cut method.
 The image processing method according to the present invention may also be provided as a program for causing a computer to execute the method.
 According to the present invention, a low-resolution image of the processing target image is generated, and the specific region is extracted from the low-resolution image by the graph cut method. Because the low-resolution image has fewer pixels than the processing target image, the amount of computation and the memory used can be reduced, but the accuracy of the region extraction is not particularly high. For this reason, according to the present invention, a contour region including the contour of the specific region in the processing target image is set in the processing target image based on the extraction result of the specific region, and a region corresponding to the specific region is extracted from the contour region by the graph cut method. In this way, the graph cut method is applied only to the low-resolution image and to the contour region of the processing target image, so that, compared with applying the graph cut method to the entire processing target image, the processing time can be greatly shortened and the amount of memory required for the computation can be greatly reduced.
FIG. 1 is a schematic block diagram showing the configuration of an image processing apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram for explaining multi-resolution conversion.
FIG. 3 is a diagram for explaining the graph cut method.
FIG. 4 is a diagram for explaining region division by the graph cut method.
FIG. 5 is a diagram showing the contour of the liver region extracted from the low-resolution image.
FIG. 6 is a diagram showing the state in which the contour of the liver region extracted from the low-resolution image is set in the three-dimensional image.
FIG. 7 is a diagram showing the structuring element of the morphology processing.
FIG. 8 is a diagram showing the contour region set in the three-dimensional image M0.
FIG. 9 is a diagram showing the contour of the liver region extracted from the three-dimensional image.
FIG. 10 is a flowchart showing the processing performed in the present embodiment.
FIG. 11 is a diagram showing the extracted liver region.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a schematic block diagram showing the configuration of an image processing apparatus according to an embodiment of the present invention. The configuration of the image processing apparatus 1 shown in FIG. 1 is realized by executing, on a computer (for example, a personal computer), a program loaded into an auxiliary storage device (not shown). The program is stored on an information storage medium such as a CD-ROM, or is distributed via a network such as the Internet, and is installed on the computer.
 The image processing apparatus 1 generates a three-dimensional image M0 from a plurality of two-dimensional images captured by, for example, an X-ray CT apparatus 2, and automatically extracts a specific region included in the three-dimensional image M0 using the graph cut method. It comprises an image acquisition unit 10, a low-resolution image generation unit 12, a first extraction unit 14, a contour region setting unit 16, a second extraction unit 18, and a display control unit 20. An input unit 22 and a display unit 24 are connected to the image processing apparatus 1. In the present embodiment, the three-dimensional image M0 represents the chest and abdomen of a human body, and the specific region is the liver region.
 The image acquisition unit 10 acquires a plurality of CT images (two-dimensional images) captured by, for example, the X-ray CT apparatus 2, and generates the three-dimensional image M0 from the plurality of two-dimensional images. The image acquisition unit 10 may acquire not only CT images but also other two-dimensional images such as MRI images, RI images, PET images, and X-ray images. Alternatively, the X-ray CT apparatus 2 may itself generate the three-dimensional image M0, in which case the image acquisition unit 10 only performs the process of acquiring it.
 The low-resolution image generation unit 12 applies multi-resolution conversion to the three-dimensional image M0 to generate a plurality of three-dimensional multi-resolution images Msi (i = 0 to n) having different resolutions, as shown in FIG. 2, where i = 0 denotes the same resolution as the three-dimensional image M0 and i = n denotes the lowest resolution. In the present embodiment, as described later, the specific region is extracted from a low-resolution image, and the resolution of that image is determined according to the actual size per pixel of the three-dimensional image M0. For example, when the actual size per pixel of the three-dimensional image M0 is 0.5 mm, the low-resolution image generation unit 12 generates the three-dimensional multi-resolution image Ms2, which has 1/4 the resolution of the three-dimensional image M0, as the low-resolution image ML.
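The pyramid construction above can be sketched as follows. This is an illustrative assumption, not the patent's specified filter: the document does not state how each multi-resolution image Msi is computed, so plain 2×2 average pooling is assumed, and a 2-D image stands in for the 3-D volume for brevity. The function name `halve` and the toy data are hypothetical.

```python
# Hypothetical sketch of the multi-resolution pyramid Msi: each level
# halves the side length of the previous one, so applying it twice
# yields the 1/4-resolution image used as the low-resolution image ML.

def halve(img):
    """Average-pool a 2-D image by a factor of 2 per axis (assumed filter)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

ms0 = [[float(x + y) for x in range(8)] for y in range(8)]  # toy 8x8 "M0"
ms1 = halve(ms0)            # 4x4, 1/2 resolution
ml = halve(ms1)             # 2x2, 1/4 resolution -> "ML"
print(len(ml), len(ml[0]))  # 2 2
```

Note the pixel count falls by a factor of 4 per level in 2-D (and 8 per level in 3-D), which is the source of the memory and time savings described later.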
 The first extraction unit 14 divides the low-resolution image ML into the liver region and the region other than the liver region using the graph cut method, thereby extracting the liver region from the low-resolution image ML. Specifically, the liver region is set as the target region and the region other than the liver region as the background region, a discrimination region of a predetermined pixel size is set at every pixel position in the low-resolution image ML, and the discrimination region is divided into the target region and the background region using the graph cut method.
 In the graph cut method, first, as shown in FIG. 3, a graph is created that consists of nodes Nij representing the pixels in the discrimination region; nodes S and T representing the labels each pixel can take (in the present embodiment, the target region and the background region); n-links, which connect the nodes of adjacent pixels; and s-links and t-links, which connect each pixel node Nij to the target-region node S and the background-region node T. For simplicity of explanation, the discrimination region in FIG. 3 is shown as a 3 × 3 two-dimensional region.
 Here, the n-link expresses, by its thickness, the likelihood that adjacent pixels belong to the same region; this likelihood can be calculated from the distance between the adjacent pixels and the difference between their pixel values.
 The s-link connecting each pixel node Nij to the target-region node S expresses the likelihood that the pixel belongs to the target region, and the t-link connecting each pixel node to the background-region node T expresses the likelihood that the pixel belongs to the background region. When information is already given as to whether a pixel belongs to the target region or the background region, the likelihood values can be set according to that information. When no such information is given, the s-link values can be set by estimating the target region and using a histogram of the density distribution of the estimated target region, and the t-link values can be set by estimating the background region and using a histogram of the density distribution of the estimated background region.
 In FIG. 3, if the pixels represented by the nodes N11, N12, N21, N22, and N31 are set inside the target region, the s-links connecting these nodes to the node S become thick, as do the n-links connecting these nodes to one another. On the other hand, if the pixels represented by the nodes N13, N23, N32, and N33 are set inside the background region, the t-links connecting these nodes to the node T become thick, as do the n-links connecting these nodes to one another.
 Since the target region and the background region are mutually exclusive, the discrimination region can be divided into the target region and the background region by cutting appropriate s-links, t-links, and n-links so as to separate the node S from the node T, as indicated by the broken line in FIG. 4. An optimum division is obtained by choosing the cut that minimizes the sum of the likelihood values of all the s-links, t-links, and n-links that are cut.
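The minimum-sum cut described above is the classical minimum s-t cut, which by the max-flow/min-cut theorem can be found with a maximum-flow computation. The following is a minimal sketch, not the patent's implementation: the node names, capacities, and the two-pixel "image" are hypothetical, and Edmonds-Karp is used only as one standard max-flow algorithm.

```python
# Illustrative min-cut sketch: link 'thickness' = edge capacity. After
# max flow, the pixels still reachable from S form the target region.
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; returns flow value and the S-side node set."""
    flow = 0
    while True:
        parent = {s: None}                 # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow, parent            # parent's keys = S side of min cut
        b, v = float("inf"), t             # bottleneck capacity on the path
        while parent[v] is not None:
            b = min(b, cap[parent[v]][v]); v = parent[v]
        v = t
        while parent[v] is not None:       # push flow, update residual graph
            u = parent[v]
            cap[u][v] -= b
            cap[v][u] += b
            v = u
        flow += b

# Toy 2-pixel image: N1 resembles the object, N2 the background.
cap = defaultdict(lambda: defaultdict(int))
cap["S"]["N1"] = 9; cap["N1"]["T"] = 1    # strong s-link for N1
cap["S"]["N2"] = 1; cap["N2"]["T"] = 9    # strong t-link for N2
cap["N1"]["N2"] = 2; cap["N2"]["N1"] = 2  # n-link between neighbours

flow, reachable = max_flow(cap, "S", "T")
object_pixels = [n for n in ("N1", "N2") if n in reachable]
print(flow, object_pixels)  # 4 ['N1']: cut of total weight 4 keeps N1 with S
```

The cut severs S-N2 (1), N1-T (1), and the n-link N1-N2 (2), total 4, which is exactly the minimum sum of cut link values referred to in the text.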
 The first extraction unit 14 divides the low-resolution image ML as described above and extracts the liver region, which is the target region, from the low-resolution image ML. FIG. 5 shows the contour of the liver region extracted from the low-resolution image. Although the present embodiment extracts the liver region from the three-dimensional low-resolution image ML, for the purpose of explanation the contour of the liver region extracted in the low-resolution image of one of the two-dimensional images constituting the three-dimensional image M0 is shown as a solid line.
 The contour region setting unit 16 sets, in the three-dimensional image M0, a contour region including the contour of the liver region extracted by the first extraction unit 14. FIG. 6 shows the state in which the contour of the liver region extracted from the low-resolution image ML has been set in the three-dimensional image M0; again, for the purpose of explanation, the contour set in one of the two-dimensional images constituting the three-dimensional image M0 is shown as a solid line. In the present embodiment, because the low-resolution image ML has 1/4 the resolution of the three-dimensional image M0, the contour of the liver region extracted from the low-resolution image ML is magnified by a factor of four when it is set in the three-dimensional image M0. As a result, the set contour does not completely coincide with the contour of the liver region contained in the three-dimensional image M0, and exhibits jagged irregularities caused by the difference in resolution.
 The contour region setting unit 16 shrinks the contour set in the three-dimensional image M0 inward and expands it outward, and sets the region enclosed between the expanded contour and the shrunken contour as the contour region E0. The size of the contour region E0 in the width direction (that is, the direction perpendicular to the contour) is determined from the sizes of the low-resolution image ML and the three-dimensional image M0 by the calculation (size of the three-dimensional image M0 / size of the low-resolution image ML + 1) × 2. In the present embodiment, since the size of the three-dimensional image M0 / the size of the low-resolution image ML = 4, the width of the contour region E0 is determined to be 10 pixels. The method of determining the size of the contour region E0 is not limited to this; any appropriate method may be applied.
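The width calculation above can be checked directly; the function name `band_width` is an illustrative label, but the formula is the one stated in the text.

```python
# Band width of the contour region E0 per the formula in the text:
# (size of M0 / size of ML + 1) * 2.

def band_width(ratio):
    """Width of contour region E0 for a given M0/ML size ratio."""
    return (ratio + 1) * 2

print(band_width(4))  # 10: the 10-pixel width of this embodiment
```

With a 1/2-resolution low-resolution image (ratio 2) the same formula would give a 6-pixel band, so the band automatically narrows as the resolution gap shrinks.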
 In the present embodiment, the shrinking and expansion of the contour are performed by morphology processing. Specifically, using a structuring element such as that shown in FIG. 7, an erosion operation that searches for the minimum value within a predetermined width centered on each pixel of interest on the contour set in the three-dimensional image M0 shrinks the contour by one pixel, and applying the erosion again to the shrunken contour shrinks it further. By performing this erosion four times, the contour set in the three-dimensional image M0 is shrunk inward by four pixels.
 Similarly, using the structuring element shown in FIG. 7, a dilation operation that searches for the maximum value within a predetermined width centered on each pixel of interest on the contour expands the contour by one pixel, and applying the dilation again to the expanded contour expands it further. By performing this dilation five times, the contour set in the three-dimensional image M0 is expanded outward by five pixels.
 The contour region setting unit 16 then sets the region enclosed between the expanded contour and the shrunken contour as the contour region E0. FIG. 8 shows the contour region set in the three-dimensional image M0. Since the contour has been shrunk inward by four pixels by the erosion and expanded outward by five pixels by the dilation, the width of the contour region E0, including the one-pixel-wide contour itself, is 10 pixels.
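The erosion/dilation band construction above can be sketched as follows. This is a 2-D illustration under stated assumptions: the patent works in 3-D, the exact structuring element of FIG. 7 is not reproduced (a 3×3 cross is assumed), and the filled square standing in for the liver mask is a toy example.

```python
# Illustrative 2-D sketch: erode the filled contour mask 4 times and
# dilate it 5 times; the band between the shrunken and expanded
# contours (inclusive of the shrunken contour) is the region E0.

def morph(mask, op):
    """One erosion ('min') or dilation ('max') step with a 3x3 cross."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [mask[ny][nx]
                    for ny, nx in ((y, x), (y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w]
            out[y][x] = min(vals) if op == "min" else max(vals)
    return out

# 21x21 toy mask: a filled square stands in for the inside of the contour
mask = [[1 if 5 <= y <= 15 and 5 <= x <= 15 else 0 for x in range(21)]
        for y in range(21)]
eroded, dilated = mask, mask
for _ in range(4):
    eroded = morph(eroded, "min")     # contour shrunk 4 px inward
for _ in range(5):
    dilated = morph(dilated, "max")   # contour expanded 5 px outward
interior = morph(eroded, "min")       # strictly inside the shrunken contour
band = [[1 if d and not i else 0 for d, i in zip(dr, ir)]
        for dr, ir in zip(dilated, interior)]
print(sum(band[10][11:]))  # 10: band width on one side of the centre row
```

Counting the shrunken contour pixel, the four eroded pixels, and the five dilated pixels gives the 10-pixel width stated in the text.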
 The second extraction unit 18 divides the contour region E0 set in the three-dimensional image M0 into the liver region and the region other than the liver region using the graph cut method, and then extracts the entire liver region from the three-dimensional image M0 based on the result of the division. The region inside the contour within the contour region E0 is very likely to belong to the liver region, and the region outside the contour is very likely to belong to the background region. Therefore, when applying the graph cut method to the contour region E0, the t-link values are made larger the farther a pixel lies outside the set contour, and the s-link values are made larger the farther a pixel lies inside it. This allows the contour region E0 to be divided into the liver region and the other region efficiently and accurately.
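One way to realize the biasing just described is sketched below. The linear ramp, the saturation distance, the weight scale, and the function name `seed_links` are all assumptions for illustration; the patent states only that the s-link (inside) and t-link (outside) values are increased with depth into each side of the contour.

```python
# Hypothetical link biasing for pixels of E0: signed_dist > 0 means the
# pixel is inside the low-resolution contour, < 0 means outside. Deeper
# pixels get a stronger seed weight, saturating at w_max.

def seed_links(signed_dist, w_max=10.0, band=5.0):
    """Return assumed (s_link, t_link) seed weights for one pixel of E0."""
    bias = min(abs(signed_dist), band) / band * w_max
    return (bias, 0.0) if signed_dist > 0 else (0.0, bias)

print(seed_links(+5))  # (10.0, 0.0): deep inside -> strong s-link
print(seed_links(-2))  # (0.0, 4.0): just outside -> moderate t-link
```

These seed weights would be added to the histogram-based s-link/t-link values of the ordinary graph cut, steering the cut toward the low-resolution contour without fixing it there.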
 The second extraction unit 18 divides the contour region E0 as described above and extracts the liver region, which is the target region, from the contour region E0. FIG. 9 shows the contour of the liver region extracted from the contour region E0. Although the present embodiment extracts the liver region from the three-dimensional image M0, for the purpose of explanation the contour of the liver region extracted in one of the two-dimensional images constituting the three-dimensional image M0 is shown as a solid line. As shown in FIG. 9, the contour of the liver region obtained by dividing the contour region E0 of the three-dimensional image M0 traces the surface of the liver smoothly.
 The second extraction unit 18 then extracts, as the liver region, the region inside the contour extracted from the contour region E0 of the three-dimensional image M0.
 The display control unit 20 displays the extracted liver region and the like on the display unit 24.
 The input unit 22 consists of, for example, a keyboard and a mouse, and is used by a user such as a radiological technologist to input various instructions to the image processing apparatus 1.
 The display unit 24 consists of, for example, a liquid crystal display or a CRT display, and displays images such as the extracted liver region as necessary.
 Next, the processing performed in the present embodiment will be described. FIG. 10 is a flowchart showing this processing. First, the image acquisition unit 10 acquires a plurality of CT images from the X-ray CT apparatus 2 and generates the three-dimensional image M0 (step ST1). Next, the low-resolution image generation unit 12 applies multi-resolution conversion to the three-dimensional image M0 to generate the low-resolution image ML (step ST2), and the first extraction unit 14 extracts the liver region from the low-resolution image ML (step ST3).
 Next, the contour region setting unit 16 sets the contour of the liver region extracted from the low-resolution image ML in the three-dimensional image M0 and, by performing the erosion and dilation described above, sets the contour region E0 in the three-dimensional image M0 (step ST4). The second extraction unit 18 then extracts the contour of the liver region from the contour region E0 and extracts the liver region from the three-dimensional image M0 (step ST5). Finally, the display control unit 20 displays the extracted liver region on the display unit 24 (step ST6), and the processing ends.
 FIG. 11 shows the displayed liver region. As FIG. 11 shows, the liver region is extracted with high accuracy according to the present embodiment.
 As described above, according to the present embodiment, the low-resolution image ML of the three-dimensional image M0 is generated, and a specific region such as the liver region is extracted from the low-resolution image ML by the graph cut method. The low-resolution image ML has fewer pixels than the three-dimensional image M0: for example, when its resolution is 1/4 that of the three-dimensional image M0, its pixel count is 1/64. Using the low-resolution image ML therefore reduces the amount of computation and the memory used, but the accuracy of the region extraction is not particularly high. For this reason, in the present embodiment, the contour region E0 including the contour of the liver region extracted from the low-resolution image ML is set in the three-dimensional image M0, and the liver region is extracted from the contour region E0 by the graph cut method. As shown in FIG. 8, the contour region E0 is far smaller than the three-dimensional image M0. Thus, because the graph cut method is applied only to the low-resolution image ML of the three-dimensional image M0 and to the contour region E0 of the three-dimensional image M0, the processing time can be greatly shortened and the amount of memory required for the computation greatly reduced, compared with applying the graph cut method to the entire three-dimensional image M0.
 In the above embodiment, the liver region is extracted from the medical three-dimensional image M0, but the region to be extracted is not limited to the liver. Applying the present invention to the extraction of the regions of various structures included in a medical three-dimensional image, such as the brain, heart, lung fields, pancreas, spleen, kidneys, and blood vessels, likewise reduces the amount of computation and the processing time.
 In the above embodiment, the result of region extraction in the low-resolution image ML, which has 1/4 the resolution of the three-dimensional image M0, is applied directly to the three-dimensional image M0 to extract the liver region. Alternatively, the extraction result in the low-resolution image ML may first be used to set a contour region in a low-resolution image having 1/2 the resolution of the three-dimensional image M0 and to extract the liver region there, and the extraction result at 1/2 resolution may then be applied to the three-dimensional image M0. That is, using the extraction result at each low-resolution level, the setting of a contour region in the image one resolution level higher and the extraction of the liver region from that image may be repeated up to the processing target three-dimensional image M0, whereupon the liver region is extracted from the three-dimensional image M0.
 In the above embodiment, a medical three-dimensional image is processed, but a medical two-dimensional image may be processed instead. The present invention can of course also be applied to extracting a region such as a person not only from medical images but also from images acquired with a digital camera or the like. In particular, since images acquired by recent digital cameras have large pixel counts, extracting a region such as a person by the graph cut method requires a great deal of computation; applying the present invention greatly reduces the amount of computation and the processing time.
 The present invention can also be applied to extracting a region from a moving image. Since a moving image consists of a plurality of frames, one could extract the region from each frame individually, but the image in each frame has poor quality and the region cannot be extracted accurately. A moving image can instead be regarded as a three-dimensional image in which the frames are arranged along the time axis. By applying the present invention when extracting a region by the graph cut method from a moving image regarded as a three-dimensional image in this way, the amount of computation and the processing time can be greatly reduced, and the region can be extracted from the moving image with high accuracy.

Claims (6)

  1.  An image processing apparatus that extracts a specific region from a processing target image by a graph cut method, characterized by comprising:
     first extraction means for generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method;
     contour region setting means for setting, in the processing target image, a contour region including the contour of the specific region in the processing target image, based on the extraction result of the specific region; and
     second extraction means for extracting a region corresponding to the specific region from the contour region by the graph cut method.
  2.  The image processing apparatus according to claim 1, wherein the contour region setting means determines the size of the contour region based on the difference in resolution between the low-resolution image and the processing target image.
  3.  The image processing apparatus according to claim 1 or 2, wherein the contour region setting means sets the contour region by erosion and dilation operations of morphology processing.
  4.  The image processing apparatus according to any one of claims 1 to 3, wherein the second extraction means increases the s-link values in the graph cut method for pixels that are inside the contour of the specific region within the contour region, and increases the t-link values in the graph cut method for pixels that are outside the contour of the specific region within the contour region.
  5.  An image processing method for extracting a specific region from a processing target image by a graph cut method, the method comprising:
     generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method;
     setting, in the processing target image, a contour region that includes the contour of the specific region, based on a result of extracting the specific region; and
     extracting a region corresponding to the specific region from the contour region by the graph cut method.
  6.  A program for causing a computer to execute an image processing method for extracting a specific region from a processing target image by a graph cut method, the program causing the computer to execute:
     a procedure of generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method;
     a procedure of setting, in the processing target image, a contour region that includes the contour of the specific region, based on a result of extracting the specific region; and
     a procedure of extracting a region corresponding to the specific region from the contour region by the graph cut method.
PCT/JP2013/005556 2012-09-27 2013-09-20 Image processing device, method, and program WO2014050044A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CA2886092A CA2886092A1 (en) 2012-09-27 2013-09-20 Image processing apparatus, method and program
EP13841776.1A EP2901933A4 (en) 2012-09-27 2013-09-20 Image processing device, method, and program
CN201380050478.9A CN104717925A (en) 2012-09-27 2013-09-20 Image processing device, method, and program
BR112015006523A BR112015006523A2 (en) 2012-09-27 2013-09-20 image processing apparatus, method and program
US14/665,365 US20150193943A1 (en) 2012-09-27 2015-03-23 Image processing apparatus, method and program
IN2613DEN2015 IN2015DN02613A (en) 2012-09-27 2015-03-31

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-213619 2012-09-27
JP2012213619A JP5836908B2 (en) 2012-09-27 2012-09-27 Image processing apparatus and method, and program

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/665,365 Continuation US20150193943A1 (en) 2012-09-27 2015-03-23 Image processing apparatus, method and program

Publications (1)

Publication Number Publication Date
WO2014050044A1 true WO2014050044A1 (en) 2014-04-03

Family

ID=50387485

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/005556 WO2014050044A1 (en) 2012-09-27 2013-09-20 Image processing device, method, and program

Country Status (8)

Country Link
US (1) US20150193943A1 (en)
EP (1) EP2901933A4 (en)
JP (1) JP5836908B2 (en)
CN (1) CN104717925A (en)
BR (1) BR112015006523A2 (en)
CA (1) CA2886092A1 (en)
IN (1) IN2015DN02613A (en)
WO (1) WO2014050044A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096331A (en) * 2015-08-21 2015-11-25 南方医科大学 Graph cut-based lung 4D-CT tumor automatic segmentation method

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5881625B2 (en) * 2013-01-17 2016-03-09 富士フイルム株式会社 Region dividing apparatus, program, and method
JP6220310B2 (en) * 2014-04-24 2017-10-25 株式会社日立製作所 Medical image information system, medical image information processing method, and program
EP3018626B1 (en) * 2014-11-04 2018-10-17 Sisvel Technology S.r.l. Apparatus and method for image segmentation
JP5858188B1 (en) 2015-06-15 2016-02-10 富士ゼロックス株式会社 Image processing apparatus, image processing method, image processing system, and program
KR102202398B1 (en) 2015-12-11 2021-01-13 삼성전자주식회사 Image processing apparatus and image processing method thereof
JP6562869B2 (en) * 2016-04-01 2019-08-21 富士フイルム株式会社 Data classification apparatus, method and program
JP6611660B2 (en) * 2016-04-13 2019-11-27 富士フイルム株式会社 Image alignment apparatus and method, and program
JP6611255B2 (en) * 2016-06-09 2019-11-27 日本電信電話株式会社 Image processing apparatus, image processing method, and image processing program
JP6833444B2 (en) 2016-10-17 2021-02-24 キヤノン株式会社 Radiation equipment, radiography system, radiography method, and program
AU2019214330A1 (en) * 2018-02-02 2020-08-20 Moleculight Inc. Wound imaging and analysis
US10964012B2 (en) * 2018-06-14 2021-03-30 Sony Corporation Automatic liver segmentation in CT
JP7052103B2 (en) * 2021-02-01 2022-04-11 キヤノン株式会社 Radiography equipment, radiography system, radiography method, and program
JP7365066B2 (en) * 2021-12-08 2023-10-19 株式会社palan display system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003010172A (en) 2001-07-04 2003-01-14 Hitachi Medical Corp Method and device for extracting and displaying specific area of internal organ
JP2007307358A (en) * 2006-04-17 2007-11-29 Fujifilm Corp Method, apparatus and program for image treatment
JP2008185480A (en) * 2007-01-30 2008-08-14 Matsushita Electric Works Ltd Human body detector
JP2012223315A (en) * 2011-04-19 2012-11-15 Fujifilm Corp Medical image processing apparatus, method, and program

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8913830B2 (en) * 2005-01-18 2014-12-16 Siemens Aktiengesellschaft Multilevel image segmentation
US7822274B2 (en) * 2006-01-17 2010-10-26 Siemens Medical Solutions Usa, Inc. Banded graph cut segmentation algorithms with laplacian pyramids
US8050498B2 (en) * 2006-07-21 2011-11-01 Adobe Systems Incorporated Live coherent image selection to differentiate foreground and background pixels
US7881540B2 (en) * 2006-12-05 2011-02-01 Fujifilm Corporation Method and apparatus for detection using cluster-modified graph cuts
US8131075B2 (en) * 2007-03-29 2012-03-06 Siemens Aktiengesellschaft Fast 4D segmentation of large datasets using graph cuts
JP4493679B2 (en) * 2007-03-29 2010-06-30 富士フイルム株式会社 Target region extraction method, apparatus, and program
US8121407B1 (en) * 2008-03-17 2012-02-21 Adobe Systems Incorporated Method and apparatus for localized labeling in digital images
US8213726B2 (en) * 2009-06-19 2012-07-03 Microsoft Corporation Image labeling using multi-scale processing
JP2011015262A (en) * 2009-07-03 2011-01-20 Panasonic Corp Image decoder
CN101996393B (en) * 2009-08-12 2012-08-01 复旦大学 Super-resolution method based on reconstruction
KR101669840B1 (en) * 2010-10-21 2016-10-28 삼성전자주식회사 Disparity estimation system and method for estimating consistent disparity from multi-viewpoint video

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003010172A (en) 2001-07-04 2003-01-14 Hitachi Medical Corp Method and device for extracting and displaying specific area of internal organ
JP2007307358A (en) * 2006-04-17 2007-11-29 Fujifilm Corp Method, apparatus and program for image treatment
JP2008185480A (en) * 2007-01-30 2008-08-14 Matsushita Electric Works Ltd Human body detector
JP2012223315A (en) * 2011-04-19 2012-11-15 Fujifilm Corp Medical image processing apparatus, method, and program

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ALI, ASEM M.: "Automatic lung segmentation of volumetric low-dose CT scans using graph cuts", ADVANCES IN VISUAL COMPUTING, 2008, BERLIN HEIDELBERG, pages 258 - 267, XP019112080 *
DANEK ET AL.: "Segmentation of touching cell nuclei using a two-stage graph cut model", IMAGE ANALYSIS, 2009, BERLIN HEIDELBERG, pages 410 - 419, XP019121153 *
HOWE, N.R. ET AL: "BETTER FOREGROUND SEGMENTATION THROUGH GRAPH CUTS", 26 July 2004 (2004-07-26), XP055253712, Retrieved from the Internet <URL:http://arxiv.org/abs/cs/0401017> [retrieved on 20131224], DOI: 10.1016/J.ESWA.2010.09.137 *
LAURENT MASSOPTIER: "Fully automatic liver segmentation through graph-cut technique", EMBS 2007. 29TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE, 2007, pages 5243 - 5246, XP031337403 *
See also references of EP2901933A4
TAKASHI IJIRI: "Contour-based approach for refining volume image segmentation", IPSJ SIG NOTES GRAPHICS TO CAD(CG), vol. 5, 1 February 2011 (2011-02-01), pages 1 - 6, XP055253707 *
Y.Y. BOYKOV; M. JOLLY: "Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images", PROCEEDINGS OF ''INTERNATIONAL CONFERENCE ON COMPUTER VISION, vol. I, 2001, pages 105 - 112, XP010553969

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096331A (en) * 2015-08-21 2015-11-25 南方医科大学 Graph cut-based lung 4D-CT tumor automatic segmentation method

Also Published As

Publication number Publication date
EP2901933A1 (en) 2015-08-05
BR112015006523A2 (en) 2017-07-04
EP2901933A4 (en) 2016-08-03
JP5836908B2 (en) 2015-12-24
CA2886092A1 (en) 2014-04-03
US20150193943A1 (en) 2015-07-09
CN104717925A (en) 2015-06-17
JP2014064835A (en) 2014-04-17
IN2015DN02613A (en) 2015-09-18

Similar Documents

Publication Publication Date Title
JP5836908B2 (en) Image processing apparatus and method, and program
Zhu et al. How can we make GAN perform better in single medical image super-resolution? A lesion focused multi-scale approach
Armanious et al. Unsupervised medical image translation using cycle-MedGAN
Guo et al. Progressive image inpainting with full-resolution residual network
Isaac et al. Super resolution techniques for medical image processing
CN109978037B (en) Image processing method, model training method, device and storage medium
US8842936B2 (en) Method, apparatus, and program for aligning images
CN104182954B (en) Real-time multi-modal medical image fusion method
JP6195714B2 (en) Medical image processing apparatus and method, and program
US20080107318A1 (en) Object Centric Data Reformation With Application To Rib Visualization
JP2015129987A (en) System and method of forming medical high-resolution image
JP5037705B2 (en) Image processing apparatus and method, and program
KR20200137768A (en) A Method and Apparatus for Segmentation of Orbital Bone in Head and Neck CT image by Using Deep Learning and Multi-Graylevel Network
JP2021027982A (en) Image processing apparatus and image processing method
Wang et al. Left atrial appendage segmentation based on ranking 2-D segmentation proposals
CN108038840B (en) Image processing method and device, image processing equipment and storage medium
JP2017189337A (en) Image positioning apparatus and method, and program
JP2024144633A (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND PROGRAM
Susan et al. Deep learning inpainting model on digital and medical images-a review.
JP4099357B2 (en) Image processing method and apparatus
US9727965B2 (en) Medical image processing apparatus and medical image processing method
Nitta et al. Deep learning based lung region segmentation with data preprocessing by generative adversarial nets
WO2020137677A1 (en) Image processing device, image processing method, and program
JP6817784B2 (en) Super-resolution device and program
Athreya et al. Ultrasound Image Enhancement using CycleGAN and Perceptual Loss

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13841776

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2886092

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112015006523

Country of ref document: BR

REEP Request for entry into the european phase

Ref document number: 2013841776

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2013841776

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 112015006523

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20150324