WO2014050044A1 - Image processing device, method, and program
- Publication number
- WO2014050044A1 (PCT/JP2013/005556)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06T5/30—Image enhancement or restoration using local operators; erosion or dilatation, e.g. thinning
- G06T7/11—Region-based segmentation
- G06T1/60—Memory management (general purpose image data processing)
- G06T7/162—Segmentation; edge detection involving graph-based methods
- G06T2200/04—Indexing scheme involving 3D image data
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/10104—Positron emission tomography [PET]
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
- G06T2207/20072—Graph-based image processing
- G06T2207/30056—Biomedical image processing: liver; hepatic
Definitions
- The present invention relates to an image processing apparatus and method for extracting a region from an image using a graph cut method, and to a program for the same.
- Patent Document 1 proposes a method of extracting a specific region of an organ from each of the many two-dimensional images constituting a three-dimensional image, and stacking the extracted regions to generate a three-dimensional image of the specific region.
- However, since the method described in Patent Document 1 extracts the region from each two-dimensional image independently, the extraction range may differ slightly between images, and the extraction accuracy of the region is therefore not very good.
- The graph cut method is known as a method for extracting a desired region from an image (see Non-Patent Document 1).
- In the graph cut method, a graph is created that consists of a node Nij representing each pixel in the image; nodes S and T representing the target region and the background region, respectively; n-links, which connect the nodes of adjacent pixels; and s-links and t-links, which connect each pixel node Nij to the node S representing the target region and to the node T representing the background region. Whether each pixel is likely to belong to the target region or to the background region is expressed by the thickness (value) of its s-link, t-link, and n-links, and the graph is divided into the target region and the background region according to the link values obtained as a result of the computation.
- In this way, the target region is extracted from the image.
- The present invention has been made in view of the above circumstances, and its object is to shorten the processing time and reduce the amount of computation memory required for extracting a region from an image using the graph cut method.
- An image processing apparatus according to the present invention is an image processing apparatus that extracts a specific region from a processing target image by a graph cut method, comprising: first extraction means for generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method; contour region setting means for setting, in the processing target image, a contour region including the contour of the specific region, based on the extraction result of the specific region; and second extraction means for extracting a region corresponding to the specific region from the contour region by the graph cut method.
- The contour region setting means may be means that determines the size of the contour region based on the difference in resolution between the low-resolution image and the processing target image.
- The contour region setting means may be means that sets the contour region by the erosion and dilation operations of morphology processing.
- The second extraction means may be means that increases the value of the s-link in the graph cut method for pixels inside the contour of the specific region in the contour region, and increases the value of the t-link in the graph cut method for pixels outside the contour.
- An image processing method according to the present invention is an image processing method for extracting a specific region from a processing target image by a graph cut method, in which a low-resolution image of the processing target image is generated; the specific region is extracted from the low-resolution image by the graph cut method; a contour region including the contour of the specific region in the processing target image is set in the processing target image based on the extraction result; and a region corresponding to the specific region is extracted from the contour region by the graph cut method.
- The present invention may also be provided as a program for causing a computer to execute the image processing method.
- According to the present invention, a low-resolution image of the processing target image is generated, and the specific region is extracted from the low-resolution image by the graph cut method.
- Since the low-resolution image has fewer pixels than the processing target image, the amount of computation and the memory used can be reduced, although the accuracy of the region extraction is not very good.
- Therefore, a contour region including the contour of the specific region in the processing target image is set in the processing target image, and a region corresponding to the specific region is extracted from the contour region by the graph cut method.
- Because the graph cut method is applied only to the low-resolution image and to the contour region of the processing target image, rather than to the entire processing target image, the processing time can be greatly shortened and the amount of computation memory can be greatly reduced.
- FIG. 1: Schematic block diagram showing the configuration of an image processing apparatus according to an embodiment of the present invention
- FIG. 2: Diagram for explaining multi-resolution conversion
- FIG. 3: Diagram for explaining the graph cut method
- FIG. 4: Diagram for explaining region division by the graph cut method
- FIG. 5: Diagram showing the contour of the liver region extracted from the low-resolution image
- FIG. 6: Diagram showing the state in which the contour of the liver region is set
- FIG. 1 is a schematic block diagram showing the configuration of an image processing apparatus according to an embodiment of the present invention.
- The configuration of the image processing apparatus 1 shown in FIG. 1 is realized by executing, on a computer (for example, a personal computer), a program read into an auxiliary storage device (not shown). The program is stored on an information storage medium such as a CD-ROM, or distributed via a network such as the Internet, and installed on the computer.
- The image processing apparatus 1 generates a three-dimensional image M0 from a plurality of two-dimensional images captured by, for example, the X-ray CT apparatus 2, and automatically extracts a specific region included in the three-dimensional image M0 using the graph cut method. It includes an image acquisition unit 10, a low-resolution image generation unit 12, a first extraction unit 14, a contour region setting unit 16, a second extraction unit 18, and a display control unit 20.
- An input unit 22 and a display unit 24 are connected to the image processing apparatus 1.
- In the present embodiment, the three-dimensional image M0 represents the thorax of the human body, and the specific region is the liver region.
- The image acquisition unit 10 acquires, for example, a plurality of CT images (two-dimensional images) captured by the X-ray CT apparatus 2, and generates the three-dimensional image M0 from them.
- The image acquisition unit 10 may acquire not only CT images but also other two-dimensional images, such as MRI images, RI images, PET images, and X-ray images.
- Alternatively, the X-ray CT apparatus 2 may generate the three-dimensional image M0, in which case the image acquisition unit 10 performs only the process of acquiring it.
- The low-resolution image generation unit 12 performs multi-resolution conversion on the three-dimensional image M0 and sets the three-dimensional multi-resolution image Ms2, which has a resolution 1/4 that of the three-dimensional image M0, as the low-resolution image ML.
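The multi-resolution conversion above can be sketched as simple block averaging. The following is a minimal pure-Python illustration on a 2-D list-of-lists image (the patent applies the analogous operation to 3-D data; the function name and the assumption that the image dimensions are multiples of the factor are ours):

```python
def downsample(image, factor):
    """Generate a low-resolution image by averaging factor x factor blocks.

    `image` is a 2-D list of pixel values whose dimensions are assumed to be
    exact multiples of `factor` (a factor of 4 corresponds to the patent's
    low-resolution image ML).
    """
    h, w = len(image), len(image[0])
    out = []
    for by in range(0, h, factor):
        row = []
        for bx in range(0, w, factor):
            # mean of one factor x factor block becomes one output pixel
            block = [image[y][x]
                     for y in range(by, by + factor)
                     for x in range(bx, bx + factor)]
            row.append(sum(block) / (factor * factor))
        out.append(row)
    return out
```

Note that each halving of resolution in 2-D quarters the pixel count, which is what makes the graph cut on the low-resolution image cheap.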
- The first extraction unit 14 divides the low-resolution image ML into the liver region and the region other than the liver region using the graph cut method, thereby extracting the liver region from the low-resolution image ML. Specifically, with the liver region as the target region and the region other than the liver region as the background region, a determination region of a predetermined pixel size is set at all pixel positions in the low-resolution image ML, and the determination region is divided into the target region and the background region using the graph cut method.
- To this end, a graph is created that consists of nodes Nij representing the pixels in the determination region; nodes S and T representing the labels that each pixel can take (in the present embodiment, the target region and the background region); n-links, which connect the nodes of adjacent pixels; and s-links and t-links, which connect each pixel node Nij to the node S representing the target region and to the node T representing the background region.
- For the sake of explanation, the determination region here is a 3 × 3 two-dimensional region.
- An n-link represents, by its thickness, the probability that adjacent pixels belong to the same region; the value of this probability is determined according to the distance between the adjacent pixels and the difference between their pixel values.
- The s-link connecting the node Nij representing each pixel to the node S representing the target region represents the probability that the pixel is included in the target region, and the t-link connecting the node Nij to the node T representing the background region represents the probability that the pixel is included in the background region.
- If information on whether a pixel belongs to the target region or the background region has already been given, the probability values can be set according to that information. If no such information is given, the target region is estimated and, for the s-links, probability values are set based on the histogram of the density distribution of the estimated target region; likewise, the background region is estimated and, for the t-links, probability values are set based on the histogram of the density distribution of the estimated background region.
- Since the pixels represented by the nodes N11, N12, N21, N22, and N31 are set in the target region, the s-links connecting the nodes N11, N12, N21, N22, and N31 to the node S are thick, and so are the n-links connecting these nodes to one another.
- Since the pixels represented by the nodes N13, N23, N32, and N33 are set in the background region, the t-links connecting the nodes N13, N23, N32, and N33 to the node T are thick, and so are the n-links connecting these nodes to one another.
- Since the target region and the background region are mutually exclusive, the determination region can be divided into the two by cutting an appropriate set of s-links, t-links, and n-links so that the node S is separated from the node T, for example along the broken line shown in FIG. 4.
- By performing the cut so that the sum of the probability values of all the cut s-links, t-links, and n-links is minimized, the optimum region division can be obtained.
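The minimum cut described above can be computed with any max-flow algorithm. Below is a self-contained sketch using a simple Edmonds-Karp max-flow on a one-dimensional row of pixels; the function name, the example link values, and the use of Edmonds-Karp (rather than whatever solver the patent's implementation uses) are all illustrative assumptions:

```python
from collections import deque

def graph_cut_labels(n_pixels, s_links, t_links, n_links):
    """Label each pixel 'target' or 'background' via a minimum s-t cut.

    s_links[i] / t_links[i] are the capacities tying pixel i to the source S
    (target region) and sink T (background region); n_links maps pixel-index
    pairs to the capacity of the link between adjacent pixels. The minimum
    cut is found via Edmonds-Karp max-flow; pixels still reachable from S in
    the residual graph belong to the target region.
    """
    S, T = n_pixels, n_pixels + 1
    cap = {u: {} for u in range(n_pixels + 2)}

    def add(u, v, c):
        cap[u][v] = cap[u].get(v, 0.0) + c
        cap[v].setdefault(u, 0.0)  # residual edge

    for i in range(n_pixels):
        add(S, i, s_links[i])
        add(i, T, t_links[i])
    for (i, j), c in n_links.items():
        add(i, j, c)
        add(j, i, c)

    while True:
        # BFS for a shortest augmenting path from S to T
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 1e-9 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            break
        # trace the path and push the bottleneck flow along it
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        flow = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= flow
            cap[v][u] += flow

    # pixels reachable from S in the residual graph form the target region
    reach, queue = {S}, deque([S])
    while queue:
        u = queue.popleft()
        for v, c in cap[u].items():
            if c > 1e-9 and v not in reach:
                reach.add(v)
                queue.append(v)
    return ['target' if i in reach else 'background' for i in range(n_pixels)]
```

For a row of four pixels where the first two have strong s-links and the last two strong t-links, the cut falls on the weakest n-link between pixels 1 and 2, exactly as the minimum-sum criterion above dictates.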
- The first extraction unit 14 divides the low-resolution image ML into regions as described above, and extracts the liver region, which is the target region, from the low-resolution image ML.
- FIG. 5 is a diagram showing the contour of the liver region extracted from the low-resolution image.
- In practice, the liver region is extracted from the three-dimensional low-resolution image ML, but here, for the sake of explanation, the extraction is shown on the low-resolution version of one two-dimensional image constituting the three-dimensional image M0.
- In FIG. 5, the contour of the liver region is indicated by a solid line.
- The contour region setting unit 16 sets, in the three-dimensional image M0, a contour region including the contour of the liver region extracted by the first extraction unit 14.
- FIG. 6 is a diagram illustrating the state in which the contour of the liver region extracted from the low-resolution image ML by the first extraction unit 14 is set in the three-dimensional image M0.
- In FIG. 6, the contour of the liver region set in one two-dimensional image constituting the three-dimensional image M0 is shown by a solid line.
- Here, the low-resolution image ML has a resolution 1/4 that of the three-dimensional image M0, so the contour of the liver region extracted from the low-resolution image ML is enlarged four times when set in the three-dimensional image M0. For this reason, the set contour does not completely match the contour of the liver region included in the three-dimensional image M0, and contains irregularities resulting from the difference in resolution.
- The contour region setting unit 16 therefore contracts the contour set in the three-dimensional image M0 inward and expands it outward, and sets the region enclosed between the expanded contour and the contracted contour as the contour region E0.
- The size of the contour region E0 in the width direction is determined from the sizes of the low-resolution image ML and the three-dimensional image M0 by the calculation (size of the three-dimensional image M0 / size of the low-resolution image ML + 1) × 2.
- In the present embodiment, the width of the contour region E0 is thus determined to be 10 pixels.
- The method for determining the size of the contour region E0 is not limited to the above; any appropriate method can be applied.
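The width calculation above can be written out directly; a trivial sketch (the function name is ours, and integer image sizes with an exact ratio are assumed):

```python
def contour_band_width(image_size, low_res_size):
    """Width of the contour region E0 in pixels, per the formula
    (size of processing target image / size of low-resolution image + 1) x 2.
    """
    return (image_size // low_res_size + 1) * 2
```

With the embodiment's factor-4 low-resolution image this gives (4 + 1) × 2 = 10 pixels, matching the stated width.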
- The contraction and expansion of the contour are performed by morphology processing. Specifically, an erosion operation that searches for the minimum value within a predetermined width centered on each pixel of interest on the contour set in the three-dimensional image M0 is performed using a structuring element as shown in FIG. 7, contracting the contour by one pixel; performing the erosion operation again on the contracted contour contracts it further. By performing this erosion operation four times, the contour set in the three-dimensional image M0 is contracted four pixels inward.
- Similarly, a dilation operation that searches for the maximum value within a predetermined width centered on each pixel of interest on the contour set in the three-dimensional image M0 expands the contour by one pixel, and performing the dilation operation again on the expanded contour expands it further. By performing this dilation operation five times, the contour set in the three-dimensional image M0 is expanded five pixels outward.
- FIG. 8 is a diagram showing the contour region set in the three-dimensional image M0. Since the contour is contracted four pixels inward by the erosion operations and expanded five pixels outward by the dilation operations, the width of the contour region E0, including the one-pixel contour itself, is 10 pixels.
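The erosion/dilation band construction can be sketched in pure Python on a 2-D binary mask with a 3 × 3 square structuring element. This is an illustrative reduction of the patent's scheme (which erodes four times and dilates five times on the 3-D contour); here the band is shown with one erosion and one dilation on a small mask, and the helper names are ours:

```python
def erode(mask):
    """Binary erosion with a 3x3 structuring element: a pixel stays set only
    if every pixel in its 3x3 neighbourhood (minimum-value search) is set."""
    h, w = len(mask), len(mask[0])
    return [[all(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(w)] for y in range(h)]

def dilate(mask):
    """Binary dilation with a 3x3 structuring element: a pixel becomes set
    if any pixel in its 3x3 neighbourhood (maximum-value search) is set."""
    h, w = len(mask), len(mask[0])
    return [[any(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(w)] for y in range(h)]

def contour_band(mask, n_erode, n_dilate):
    """Region between the mask eroded n_erode pixels inward and dilated
    n_dilate pixels outward (the patent uses 4 erosions and 5 dilations,
    giving the 10-pixel-wide contour region E0)."""
    inner, outer = mask, mask
    for _ in range(n_erode):
        inner = erode(inner)
    for _ in range(n_dilate):
        outer = dilate(outer)
    h, w = len(mask), len(mask[0])
    return [[outer[y][x] and not inner[y][x] for x in range(w)]
            for y in range(h)]
```

For a 5 × 5 filled square inside a 9 × 9 mask, one erosion leaves a 3 × 3 core and one dilation yields a 7 × 7 block, so the band is the 40-cell ring between them.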
- The second extraction unit 18 divides the contour region E0 set in the three-dimensional image M0 into the liver region and the region other than the liver region using the graph cut method, and, based on the result of this region division, extracts the entire liver region from the three-dimensional image M0.
- Here, the region inside the contour within the contour region E0 is highly likely to be the liver region, and the region outside the contour is highly likely to be the background region. For this reason, when applying the graph cut method to the contour region E0, the value of the t-link is increased for pixels outside the set contour, and the value of the s-link is increased for pixels inside the set contour.
- In this way, the contour region E0 can be divided into the liver region and the other regions efficiently and accurately.
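A sketch of how this seed weighting could look when building the link capacities; the `boost` value, the label constants, and the decision to leave on-contour pixels neutral are illustrative assumptions, not values from the patent:

```python
INSIDE, ON_CONTOUR, OUTSIDE = 'inside', 'contour', 'outside'

def seed_links(labels, base=1.0, boost=100.0):
    """Assign s-link / t-link capacities for pixels of the contour region E0.

    Pixels inside the low-resolution contour get a boosted s-link (likely
    liver), pixels outside get a boosted t-link (likely background), and
    pixels on the contour itself keep neutral capacities so the minimum cut
    is free to pass through them.
    """
    s_links = [boost if lab == INSIDE else base for lab in labels]
    t_links = [boost if lab == OUTSIDE else base for lab in labels]
    return s_links, t_links
```

These capacities would then feed directly into whatever min-cut solver performs the graph cut, biasing the cut toward the true liver surface inside the band.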
- The second extraction unit 18 divides the contour region E0 as described above, and extracts the liver region, which is the target region, from the contour region E0.
- FIG. 9 is a diagram showing the contour of the liver region extracted from the contour region E0.
- In practice, the liver region is extracted from the three-dimensional image M0.
- In FIG. 9, the contour of the liver region extracted in one two-dimensional image constituting the three-dimensional image M0 is illustrated by a solid line.
- The contour of the liver region obtained by dividing the contour region E0 of the three-dimensional image M0 smoothly follows the surface of the liver.
- In this way, the second extraction unit 18 extracts the region corresponding to the liver region from the contour region E0, and thereby extracts the entire liver region from the three-dimensional image M0.
- The display control unit 20 displays the extracted liver region and the like on the display unit 24.
- The input unit 22 includes, for example, a keyboard and a mouse, and is used to input various instructions from a user, such as a radiologist, to the image processing apparatus 1.
- The display unit 24 is composed of, for example, a liquid crystal display or a CRT display, and displays an image of the extracted liver region and the like as necessary.
- FIG. 10 is a flowchart showing the processing performed in the present embodiment.
- First, the image acquisition unit 10 acquires a plurality of CT images from the X-ray CT apparatus 2 and generates the three-dimensional image M0 (step ST1).
- Next, the low-resolution image generation unit 12 performs multi-resolution conversion on the three-dimensional image M0 to generate the low-resolution image ML (step ST2), and the first extraction unit 14 extracts the liver region from the low-resolution image ML (step ST3).
- Then, the contour region setting unit 16 sets the contour of the liver region extracted from the low-resolution image ML in the three-dimensional image M0, and sets the contour region E0 in the three-dimensional image M0 by performing the erosion and dilation operations described above (step ST4).
- Next, the second extraction unit 18 extracts the liver region from the contour region E0 (step ST5).
- Finally, the display control unit 20 displays the extracted liver region on the display unit 24 (step ST6), and the processing ends.
- FIG. 11 is a diagram showing the displayed liver region. As shown in FIG. 11, according to the present embodiment, it can be seen that the liver region is accurately extracted.
- As described above, in the present embodiment, the low-resolution image ML of the three-dimensional image M0 is generated, and a specific region such as the liver region is extracted from the low-resolution image ML by the graph cut method.
- Here, the low-resolution image ML has fewer pixels than the three-dimensional image M0. For example, when the resolution of the low-resolution image ML is 1/4 that of the three-dimensional image M0, the number of pixels is 1/64. For this reason, using the low-resolution image ML reduces the amount of computation and the memory used, but the accuracy of the region extraction is not very good.
- Therefore, in the present embodiment, the contour region E0 including the contour of the liver region extracted from the low-resolution image ML is set in the three-dimensional image M0, and the liver region is extracted from the contour region E0 by the graph cut method.
- Here, the size of the contour region E0 is significantly smaller than that of the three-dimensional image M0.
- Thus, in the present embodiment, the liver region is extracted by applying the graph cut method only to the low-resolution image ML of the three-dimensional image M0 and to the contour region E0 of the three-dimensional image M0. Compared with applying the graph cut method to the entire three-dimensional image M0, the processing time can be greatly shortened and the amount of computation memory greatly reduced.
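The 1/64 figure follows directly from the cubic scaling of a 3-D volume; a one-line check:

```python
# A 1/4 resolution ratio applies independently to each of the three
# dimensions of the volume, so the voxel count scales with its cube.
resolution_ratio = 1 / 4
voxel_fraction = resolution_ratio ** 3
assert voxel_fraction == 1 / 64
```

This is why the coarse pass over the low-resolution image ML is so much cheaper than a graph cut over the full volume.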
- In the above embodiment, the liver region is extracted from the medical three-dimensional image M0, but the region to be extracted is not limited to this. By applying the present invention when extracting the regions of various structures included in a medical three-dimensional image, such as the brain, heart, lung fields, pancreas, spleen, kidneys, and blood vessels, the amount and time of computation can be reduced.
- In the above embodiment, the liver region is extracted by applying the result of the region extraction in the low-resolution image ML, which has a resolution 1/4 that of the three-dimensional image M0, directly to the three-dimensional image M0. However, the extraction result need not be applied directly to the three-dimensional image M0. Instead, the process of setting a contour region in an image whose resolution is one step higher than that of the low-resolution image, and extracting the liver region from that higher-resolution image using the region extraction result of the low-resolution image, may be repeated up to the target three-dimensional image M0 in order to extract the liver region from the three-dimensional image M0.
- In the above embodiment, a medical three-dimensional image is the processing target, but a medical two-dimensional image may also be the processing target.
- The present invention can also be applied not only to medical images but also to extracting a region such as a person from an image acquired with a digital camera or the like.
- Since a large amount of computation is required when extracting a region such as a person using the graph cut method, applying the present invention can greatly reduce the amount and time of processing.
- Furthermore, the present invention can be applied when extracting a region from a moving image. Since a moving image is composed of a plurality of frames, it is conceivable to extract the region from each frame; however, the image in each frame has poor image quality, and the region cannot then be extracted with high accuracy.
- A moving image can instead be regarded as a three-dimensional image in which the frames are arranged along the time axis. When a region is extracted from a moving image regarded as such a three-dimensional image using the graph cut method, applying the present invention greatly reduces the amount and time of computation, and the region can be extracted from the moving image with high accuracy.
Claims (6)

- An image processing apparatus that extracts a specific region from a processing target image by a graph cut method, the apparatus comprising:
first extraction means for generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method;
contour region setting means for setting, in the processing target image, a contour region that includes the contour of the specific region, based on the extraction result of the specific region; and
second extraction means for extracting a region corresponding to the specific region from the contour region by the graph cut method.
- The image processing apparatus according to claim 1, wherein the contour region setting means determines the size of the contour region based on the difference in resolution between the low-resolution image and the processing target image.
- The image processing apparatus according to claim 1 or 2, wherein the contour region setting means sets the contour region by the erosion and dilation operations of morphology processing.
- The image processing apparatus according to any one of claims 1 to 3, wherein the second extraction means increases the s-link values in the graph cut method for pixels inside the contour of the specific region within the contour region, and increases the t-link values in the graph cut method for pixels outside the contour of the specific region within the contour region.
- An image processing method for extracting a specific region from a processing target image by a graph cut method, the method comprising:
generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method;
setting, in the processing target image, a contour region that includes the contour of the specific region, based on the extraction result of the specific region; and
extracting a region corresponding to the specific region from the contour region by the graph cut method.
- A program for causing a computer to execute an image processing method for extracting a specific region from a processing target image by a graph cut method, the program causing the computer to execute:
a procedure of generating a low-resolution image of the processing target image and extracting the specific region from the low-resolution image by the graph cut method;
a procedure of setting, in the processing target image, a contour region that includes the contour of the specific region, based on the extraction result of the specific region; and
a procedure of extracting a region corresponding to the specific region from the contour region by the graph cut method.
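The coarse-to-fine flow of the claims — segment a low-resolution image, then restrict the full-resolution graph cut to a band around the coarse contour — can be sketched as follows. This is an illustrative outline only, not the patented implementation: the erosion/dilation of claim 3 is written in pure NumPy with 4-neighbour structuring, and the `band_width` parameter is a stand-in for the resolution-ratio-derived size of claim 2.

```python
import numpy as np

def binary_dilate(mask: np.ndarray, iterations: int) -> np.ndarray:
    """4-neighbour binary dilation of a 2-D boolean mask, via array shifts."""
    out = mask.copy()
    for _ in range(iterations):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # shift down
        grown[:-1, :] |= out[1:, :]   # shift up
        grown[:, 1:] |= out[:, :-1]   # shift right
        grown[:, :-1] |= out[:, 1:]   # shift left
        out = grown
    return out

def binary_erode(mask: np.ndarray, iterations: int) -> np.ndarray:
    """Erosion as dilation of the complement (array border treated as foreground)."""
    return ~binary_dilate(~mask, iterations)

def contour_band(low_res_mask: np.ndarray, scale: int, band_width: int) -> np.ndarray:
    """Upsample the coarse segmentation result by `scale`, then take
    (dilated AND NOT eroded) as the contour region that must contain the
    true object boundary (claims 2 and 3)."""
    full = np.kron(low_res_mask.astype(np.uint8),
                   np.ones((scale, scale), dtype=np.uint8)).astype(bool)
    return binary_dilate(full, band_width) & ~binary_erode(full, band_width)
```

In a second pass, only the pixels inside this band would enter the graph cut; per claim 4, pixels inward of the band would be tied strongly to the source (large s-link) and pixels outward of it strongly to the sink (large t-link), so the fine cut can only move the boundary within the band.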
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2886092A CA2886092A1 (en) | 2012-09-27 | 2013-09-20 | Image processing apparatus, method and program |
EP13841776.1A EP2901933A4 (en) | 2012-09-27 | 2013-09-20 | Image processing device, method, and program |
CN201380050478.9A CN104717925A (en) | 2012-09-27 | 2013-09-20 | Image processing device, method, and program |
BR112015006523A BR112015006523A2 (en) | 2012-09-27 | 2013-09-20 | image processing apparatus, method and program |
US14/665,365 US20150193943A1 (en) | 2012-09-27 | 2015-03-23 | Image processing apparatus, method and program |
IN2613DEN2015 IN2015DN02613A (en) | 2012-09-27 | 2015-03-31 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-213619 | 2012-09-27 | ||
JP2012213619A JP5836908B2 (en) | 2012-09-27 | 2012-09-27 | Image processing apparatus and method, and program |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/665,365 Continuation US20150193943A1 (en) | 2012-09-27 | 2015-03-23 | Image processing apparatus, method and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014050044A1 true WO2014050044A1 (en) | 2014-04-03 |
Family
ID=50387485
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/005556 WO2014050044A1 (en) | 2012-09-27 | 2013-09-20 | Image processing device, method, and program |
Country Status (8)
Country | Link |
---|---|
US (1) | US20150193943A1 (en) |
EP (1) | EP2901933A4 (en) |
JP (1) | JP5836908B2 (en) |
CN (1) | CN104717925A (en) |
BR (1) | BR112015006523A2 (en) |
CA (1) | CA2886092A1 (en) |
IN (1) | IN2015DN02613A (en) |
WO (1) | WO2014050044A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105096331A (en) * | 2015-08-21 | 2015-11-25 | 南方医科大学 | Graph cut-based lung 4D-CT tumor automatic segmentation method |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5881625B2 (en) * | 2013-01-17 | 2016-03-09 | 富士フイルム株式会社 | Region dividing apparatus, program, and method |
JP6220310B2 (en) * | 2014-04-24 | 2017-10-25 | 株式会社日立製作所 | Medical image information system, medical image information processing method, and program |
EP3018626B1 (en) * | 2014-11-04 | 2018-10-17 | Sisvel Technology S.r.l. | Apparatus and method for image segmentation |
JP5858188B1 (en) | 2015-06-15 | 2016-02-10 | 富士ゼロックス株式会社 | Image processing apparatus, image processing method, image processing system, and program |
KR102202398B1 (en) | 2015-12-11 | 2021-01-13 | 삼성전자주식회사 | Image processing apparatus and image processing method thereof |
JP6562869B2 (en) * | 2016-04-01 | 2019-08-21 | 富士フイルム株式会社 | Data classification apparatus, method and program |
JP6611660B2 (en) * | 2016-04-13 | 2019-11-27 | 富士フイルム株式会社 | Image alignment apparatus and method, and program |
JP6611255B2 (en) * | 2016-06-09 | 2019-11-27 | 日本電信電話株式会社 | Image processing apparatus, image processing method, and image processing program |
JP6833444B2 (en) | 2016-10-17 | 2021-02-24 | キヤノン株式会社 | Radiation equipment, radiography system, radiography method, and program |
AU2019214330A1 (en) * | 2018-02-02 | 2020-08-20 | Moleculight Inc. | Wound imaging and analysis |
US10964012B2 (en) * | 2018-06-14 | 2021-03-30 | Sony Corporation | Automatic liver segmentation in CT |
JP7052103B2 (en) * | 2021-02-01 | 2022-04-11 | キヤノン株式会社 | Radiography equipment, radiography system, radiography method, and program |
JP7365066B2 (en) * | 2021-12-08 | 2023-10-19 | 株式会社palan | display system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003010172A (en) | 2001-07-04 | 2003-01-14 | Hitachi Medical Corp | Method and device for extracting and displaying specific area of internal organ |
JP2007307358A (en) * | 2006-04-17 | 2007-11-29 | Fujifilm Corp | Method, apparatus and program for image treatment |
JP2008185480A (en) * | 2007-01-30 | 2008-08-14 | Matsushita Electric Works Ltd | Human body detector |
JP2012223315A (en) * | 2011-04-19 | 2012-11-15 | Fujifilm Corp | Medical image processing apparatus, method, and program |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8913830B2 (en) * | 2005-01-18 | 2014-12-16 | Siemens Aktiengesellschaft | Multilevel image segmentation |
US7822274B2 (en) * | 2006-01-17 | 2010-10-26 | Siemens Medical Solutions Usa, Inc. | Banded graph cut segmentation algorithms with laplacian pyramids |
US8050498B2 (en) * | 2006-07-21 | 2011-11-01 | Adobe Systems Incorporated | Live coherent image selection to differentiate foreground and background pixels |
US7881540B2 (en) * | 2006-12-05 | 2011-02-01 | Fujifilm Corporation | Method and apparatus for detection using cluster-modified graph cuts |
US8131075B2 (en) * | 2007-03-29 | 2012-03-06 | Siemens Aktiengesellschaft | Fast 4D segmentation of large datasets using graph cuts |
JP4493679B2 (en) * | 2007-03-29 | 2010-06-30 | 富士フイルム株式会社 | Target region extraction method, apparatus, and program |
US8121407B1 (en) * | 2008-03-17 | 2012-02-21 | Adobe Systems Incorporated | Method and apparatus for localized labeling in digital images |
US8213726B2 (en) * | 2009-06-19 | 2012-07-03 | Microsoft Corporation | Image labeling using multi-scale processing |
JP2011015262A (en) * | 2009-07-03 | 2011-01-20 | Panasonic Corp | Image decoder |
CN101996393B (en) * | 2009-08-12 | 2012-08-01 | 复旦大学 | Super-resolution method based on reconstruction |
KR101669840B1 (en) * | 2010-10-21 | 2016-10-28 | 삼성전자주식회사 | Disparity estimation system and method for estimating consistent disparity from multi-viewpoint video |
2012
- 2012-09-27 JP JP2012213619A patent/JP5836908B2/en active Active

2013
- 2013-09-20 BR BR112015006523A patent/BR112015006523A2/en not_active IP Right Cessation
- 2013-09-20 CA CA2886092A patent/CA2886092A1/en not_active Abandoned
- 2013-09-20 EP EP13841776.1A patent/EP2901933A4/en not_active Withdrawn
- 2013-09-20 WO PCT/JP2013/005556 patent/WO2014050044A1/en active Application Filing
- 2013-09-20 CN CN201380050478.9A patent/CN104717925A/en active Pending

2015
- 2015-03-23 US US14/665,365 patent/US20150193943A1/en not_active Abandoned
- 2015-03-31 IN IN2613DEN2015 patent/IN2015DN02613A/en unknown
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003010172A (en) | 2001-07-04 | 2003-01-14 | Hitachi Medical Corp | Method and device for extracting and displaying specific area of internal organ |
JP2007307358A (en) * | 2006-04-17 | 2007-11-29 | Fujifilm Corp | Method, apparatus and program for image treatment |
JP2008185480A (en) * | 2007-01-30 | 2008-08-14 | Matsushita Electric Works Ltd | Human body detector |
JP2012223315A (en) * | 2011-04-19 | 2012-11-15 | Fujifilm Corp | Medical image processing apparatus, method, and program |
Non-Patent Citations (7)
Title |
---|
ALI, ASEM M.: "Automatic lung segmentation of volumetric low-dose CT scans using graph cuts", ADVANCES IN VISUAL COMPUTING, 2008, BERLIN HEIDELBERG, pages 258 - 267, XP019112080 * |
DANEK ET AL.: "Segmentation of touching cell nuclei using a two-stage graph cut model", IMAGE ANALYSIS, 2009, BERLIN HEIDELBERG, pages 410 - 419, XP019121153 * |
HOWE, N.R. ET AL: "BETTER FOREGROUND SEGMENTATION THROUGH GRAPH CUTS", 26 July 2004 (2004-07-26), XP055253712, Retrieved from the Internet <URL:http://arxiv.org/abs/cs/0401017> [retrieved on 20131224], DOI: 10.1016/J.ESWA.2010.09.137 * |
LAURENT MASSOPTIER: "Fully automatic liver segmentation through graph-cut technique", EMBS 2007. 29TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE, 2007, pages 5243 - 5246, XP031337403 * |
See also references of EP2901933A4 |
TAKASHI IJIRI: "Contour-based approach for refining volume image segmentation", IPSJ SIG NOTES GRAPHICS TO CAD(CG), vol. 5, 1 February 2011 (2011-02-01), pages 1 - 6, XP055253707 * |
Y.Y. BOYKOV; M. JOLLY: "Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images", PROCEEDINGS OF INTERNATIONAL CONFERENCE ON COMPUTER VISION, vol. I, 2001, pages 105 - 112, XP010553969
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105096331A (en) * | 2015-08-21 | 2015-11-25 | 南方医科大学 | Graph cut-based lung 4D-CT tumor automatic segmentation method |
Also Published As
Publication number | Publication date |
---|---|
EP2901933A1 (en) | 2015-08-05 |
BR112015006523A2 (en) | 2017-07-04 |
EP2901933A4 (en) | 2016-08-03 |
JP5836908B2 (en) | 2015-12-24 |
CA2886092A1 (en) | 2014-04-03 |
US20150193943A1 (en) | 2015-07-09 |
CN104717925A (en) | 2015-06-17 |
JP2014064835A (en) | 2014-04-17 |
IN2015DN02613A (en) | 2015-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5836908B2 (en) | Image processing apparatus and method, and program | |
Zhu et al. | How can we make GAN perform better in single medical image super-resolution? A lesion focused multi-scale approach | |
Armanious et al. | Unsupervised medical image translation using cycle-MedGAN | |
Guo et al. | Progressive image inpainting with full-resolution residual network | |
Isaac et al. | Super resolution techniques for medical image processing | |
CN109978037B (en) | Image processing method, model training method, device and storage medium | |
US8842936B2 (en) | Method, apparatus, and program for aligning images | |
CN104182954B (en) | Real-time multi-modal medical image fusion method | |
JP6195714B2 (en) | Medical image processing apparatus and method, and program | |
US20080107318A1 (en) | Object Centric Data Reformation With Application To Rib Visualization | |
JP2015129987A (en) | System and method of forming medical high-resolution image | |
JP5037705B2 (en) | Image processing apparatus and method, and program | |
KR20200137768A (en) | A Method and Apparatus for Segmentation of Orbital Bone in Head and Neck CT image by Using Deep Learning and Multi-Graylevel Network | |
JP2021027982A (en) | Image processing apparatus and image processing method | |
Wang et al. | Left atrial appendage segmentation based on ranking 2-D segmentation proposals | |
CN108038840B (en) | Image processing method and device, image processing equipment and storage medium | |
JP2017189337A (en) | Image positioning apparatus and method, and program | |
JP2024144633A (en) | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, AND PROGRAM | |
Susan et al. | Deep learning inpainting model on digital and medical images-a review. | |
JP4099357B2 (en) | Image processing method and apparatus | |
US9727965B2 (en) | Medical image processing apparatus and medical image processing method | |
Nitta et al. | Deep learning based lung region segmentation with data preprocessing by generative adversarial nets | |
WO2020137677A1 (en) | Image processing device, image processing method, and program | |
JP6817784B2 (en) | Super-resolution device and program | |
Athreya et al. | Ultrasound Image Enhancement using CycleGAN and Perceptual Loss |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13841776 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2886092 Country of ref document: CA |
NENP | Non-entry into the national phase |
Ref country code: DE |
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112015006523 Country of ref document: BR |
REEP | Request for entry into the european phase |
Ref document number: 2013841776 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 2013841776 Country of ref document: EP |
ENP | Entry into the national phase |
Ref document number: 112015006523 Country of ref document: BR Kind code of ref document: A2 Effective date: 20150324 |