KR101727670B1 - Device and method for medical image segmentation based on user interactive input - Google Patents


Info

Publication number
KR101727670B1
Authority
KR
South Korea
Prior art keywords
image
dimensional
divided
user
region
Prior art date
Application number
KR1020150088559A
Other languages
Korean (ko)
Other versions
KR20170000040A (en)
Inventor
박안진
엄주범
이병일
안재성
Original Assignee
한국광기술원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국광기술원 filed Critical 한국광기술원
Priority to KR1020150088559A priority Critical patent/KR101727670B1/en
Publication of KR20170000040A publication Critical patent/KR20170000040A/en
Application granted granted Critical
Publication of KR101727670B1 publication Critical patent/KR101727670B1/en


Classifications

    • G06K9/342
    • G06K9/6206
    • G06K9/6224
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/162Segmentation; Edge detection involving graph-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The medical image segmentation apparatus according to the present invention comprises: a display unit for displaying a three-dimensional medical image; a user input unit for generating a two-dimensional segmentation target image in which a region of interest and a background region to be separated are selected by the user from the three-dimensional medical image displayed on the display unit; an image expansion unit for expanding the two-dimensional segmentation target image into three dimensions; and an image segmentation unit for separating the region of interest and the background region in the three-dimensional segmentation target image generated by the image expansion unit to generate a three-dimensional output image.
According to the present invention, because the user directly inputs the region of interest and the background region to be separated, the apparatus can segment whatever region the user desires, rather than being limited to predefined regions such as bones, the liver, or the lungs. Furthermore, because the user does not need to select two-dimensional target images repeatedly for each slice of the three-dimensional image, user input is minimized and the final segmentation result can be produced from only a small number of inputs.

Description

TECHNICAL FIELD [0001] The present invention relates to a device and method for medical image segmentation based on user interactive input.

The present invention relates to a medical image segmentation apparatus and method for separating a specific portion selected by a user from a medical image, and more particularly, to a medical image segmentation apparatus and method capable of segmenting a region of interest in three dimensions, based on approximate positions of the region of interest and the background supplied by the user, and outputting the segmented region.

Recently, computed tomography (CT) and magnetic resonance imaging (MRI) have been widely used in medical institutions to acquire more precise medical images. Such an image acquisition apparatus captures two-dimensional images of the human body while moving in the Z-axis direction perpendicular to the image plane, and stacks the acquired images along the Z-axis to generate three-dimensional image data.
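As a minimal sketch of this stacking step (assuming NumPy and placeholder slice data, not the apparatus's actual code), the acquired two-dimensional slices form a three-dimensional volume as follows:

```python
import numpy as np

# Hypothetical illustration: 64 two-dimensional 512x512 slices acquired
# along the Z-axis are stacked into a single three-dimensional volume.
slices = [np.zeros((512, 512), dtype=np.uint8) for _ in range(64)]
volume = np.stack(slices, axis=0)  # axis 0 plays the role of the Z-axis
print(volume.shape)  # (64, 512, 512) -> (z, y, x)
```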

The technique of separating a human body part from the background in such three-dimensional image data is called a medical image segmentation technique. When performing diagnosis or research in fields such as computer-integrated surgery, anatomy research, and pathology, visualizing the region of interest segmented from the entire image in two or three dimensions can improve accuracy and efficiency.

Various techniques for medical image segmentation have been researched and developed; among them, thresholding, edge detection, and region growing are typically used. Thresholding generates a histogram of a given image and sets a threshold value, automatically or manually, to separate the background from the region of interest. Edge detection finds discontinuous pixels with large differences from their neighbors and treats the detected pixels as the boundary between the background and the region of interest. Region growing segments the region of interest by expanding a region outward from a given seed location based on the similarity of neighboring pixels.
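The simplest of these, thresholding, can be sketched as follows (an illustrative example with a hypothetical, manually chosen threshold; not code from the patent):

```python
import numpy as np

def threshold_segment(image, t):
    """Label each pixel 1 for the region of interest (value >= t)
    and 0 for the background, as in the thresholding technique
    described above."""
    return (image >= t).astype(np.uint8)

img = np.array([[10, 200], [30, 250]], dtype=np.uint8)
mask = threshold_segment(img, 128)  # bright pixels become the region of interest
```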

Although these techniques are included in most medical image analysis software as image segmentation functions, they consider only neighboring pixels (edge detection, region growing) or depend on a single threshold value (thresholding), so they tend to under-segment or over-segment the region of interest.

User-input-based segmentation techniques, in which the user supplies an approximate location of the region of interest and interacts with the system to refine the result, have also been widely researched and developed. Such a technique separates the background from the region of interest and feeds the result back to the user; if the result contains an error, the user provides additional information to the system. Because the final result is derived through feedback between the user and the system, this is called interactive image segmentation. To improve user convenience, development has focused on the feedback speed, that is, the segmentation processing speed.

The most representative interactive image segmentation method uses graph cuts (US Patent No. 6973212). An energy function is defined from the similarity between the prior information input by the user and each pixel of the image, and from the similarity between neighboring pixels; segmentation is performed by finding the optimal solution of this function with the graph cut algorithm. The segmented result is fed back to the user, who can input additional prior information if the result is unsatisfactory. Graph cut segmentation is then performed again based on the additional prior information, and this input/feedback cycle is repeated until the segmentation is optimal.
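The shape of the energy such a method minimizes can be sketched as follows. This is only an illustration of the data-term-plus-smoothness-term structure described above, with hypothetical intensity models; the patented method defines its terms differently and minimizes the energy with a min-cut algorithm rather than by evaluation:

```python
import numpy as np

def segmentation_energy(image, labels, fg_mean, bg_mean, lam=1.0):
    """Energy of a binary labeling: a data term measuring how far each
    pixel lies from seeded foreground/background intensity models, plus
    a smoothness term counting label changes between 4-neighbors."""
    data = np.where(labels == 1,
                    (image.astype(float) - fg_mean) ** 2,
                    (image.astype(float) - bg_mean) ** 2).sum()
    smooth = (labels[1:, :] != labels[:-1, :]).sum() \
           + (labels[:, 1:] != labels[:, :-1]).sum()
    return data + lam * smooth

img = np.array([[0, 0, 255], [0, 0, 255]], dtype=np.uint8)
good = np.array([[0, 0, 1], [0, 0, 1]])  # labels match the intensities
bad = np.array([[1, 1, 0], [1, 1, 0]])   # labels inverted
# the labeling consistent with the seeds has strictly lower energy
```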

Because the graph cut technique provides an optimal segmentation result given the prior information, it is widely used not only for medical images but also for segmenting many other kinds of images. However, its response time is slow when applied to three-dimensional medical images composed of many two-dimensional slices, which makes it difficult to use in practice. For example, segmenting a 512x512x512 three-dimensional medical image takes 30 seconds or longer per graph cut execution, and that is the response time for a single user input. Since several to several tens of inputs are required to obtain the final result, the user perceives the system as sluggish, and the overall process can take a long time.

In the related art, to reduce the number of feedback rounds, a technique that additionally incorporates shape information into the graph cut (U.S. Patent Publication No. 2007-0014473) and a technique using geodesic information (PCT WO2009-142858) have been proposed. However, these remain insufficient for three-dimensional medical image segmentation because of their slow response to user input.

Since a three-dimensional medical image consists of tens or hundreds of two-dimensional images, the user must ordinarily input information for every two-dimensional image containing the region of interest to perform interactive segmentation. There is therefore a need for an apparatus and method that can segment the three-dimensional image even when the user selects only a single two-dimensional portion of it.

US Patent No. 6973212
U.S. Published Patent Application No. 2007-0014473

The present invention provides a medical image segmentation apparatus and method that, even when the user makes a single two-dimensional selection in a three-dimensional medical image provided by a medical imaging apparatus such as an MRI or CT scanner, can segment the region of interest in three dimensions from the selected two-dimensional target image and output the result.

It is another object of the present invention to provide a medical image segmentation apparatus and method which can improve the segmentation processing speed with respect to user input and enable rapid feedback.

According to an aspect of the present invention, there is provided a medical image segmentation apparatus comprising: a display unit for displaying a three-dimensional medical image; a user input unit for generating a two-dimensional segmentation target image in which a region of interest and a background region to be separated are selected by the user from the three-dimensional medical image displayed on the display unit; an image expansion unit for expanding the two-dimensional segmentation target image into three dimensions; and an image segmentation unit for separating the region of interest and the background region in the three-dimensional segmentation target image generated by the image expansion unit to generate a three-dimensional output image.

Preferably, the display unit according to the present invention may additionally display a two-dimensional image of the horizontal, sagittal, or coronal plane at the coordinates selected by the user in the three-dimensional medical image.

Preferably, the user input unit according to the present invention feeds back the three-dimensional output image generated by the image segmentation unit, and may further receive a two-dimensional segmentation target image in which a region of interest and a background region are additionally selected by the user.

Preferably, the user input unit according to the present invention receives x, y, and z coordinate values from the user in the three-dimensional medical image displayed on the display unit, and generates a two-dimensional segmentation target image of the horizontal, sagittal, or coronal plane at the received coordinates.

Preferably, the user input unit according to the present invention may additionally receive a setting range for the region of interest and the background region selected by the user in the two-dimensional segmentation target image.

Preferably, the image expansion unit according to the present invention may expand the two-dimensional segmentation target image three-dimensionally based on the similarity of neighboring pixels.

Preferably, the image segmentation unit according to the present invention includes: a first database for storing image data of the region of interest of the three-dimensional segmentation target image; and a second database for storing image data of the background region of the three-dimensional segmentation target image.

Preferably, the image segmentation unit according to the present invention includes: a first module for sequentially changing a reference pixel value and matching the pixels of the data stored in the first and second databases whose pixel values equal the reference pixel value; and a second module for setting, as a division boundary surface, the points at which the coordinate values of the pixels matched by the first module from the two databases coincide.

According to another aspect of the present invention, there is provided a medical image segmentation method comprising: (a) displaying a three-dimensional medical image; (b) generating a two-dimensional segmentation target image in which a region of interest and a background region to be separated are selected by the user from the three-dimensional medical image displayed in step (a); (c) expanding the two-dimensional segmentation target image into three dimensions; and (d) separating the region of interest and the background region in the three-dimensional segmentation target image generated in step (c) to generate a three-dimensional output image.

According to the present invention, because the user directly inputs the region of interest and the background region to be separated, the apparatus can segment whatever region the user desires, rather than being limited to predefined regions such as bones, the liver, or the lungs.

Furthermore, because the user does not need to select two-dimensional target images repeatedly for each slice of the three-dimensional image, user input is minimized and the final segmentation result can be produced from only a small number of inputs.

In addition, because the segmentation result image is fed back to the user input unit so that the user can gradually correct the segmented region of interest, a precise image of the region of interest can be obtained.

In addition, since the first and second modules of the image segmentation unit determine the division boundary surface by matching single pixel values at a time, the segmentation processing speed is high.

The databases of the image segmentation unit and the segmentation process performed by the first and second modules can additionally generate and store meshes, allowing the result to be digitized. Accordingly, the digitized image of the three-dimensional region of interest can be applied to the production of human-body-based content widely used in movies, TV documentaries, and the like. It can also serve as teaching material for anatomy lectures in educational institutions such as schools, since the actual shape of the selected part of the human body can be output, and it can be used to aid explanations to patients.

FIG. 1 shows a medical image segmentation apparatus according to an embodiment of the present invention.
FIG. 2 shows a medical image displayed on the display unit according to an embodiment of the present invention.
FIG. 3 shows a two-dimensional segmentation target image input into the user input unit according to an embodiment of the present invention.
FIG. 4 conceptually shows how the image expansion unit according to an embodiment of the present invention expands a two-dimensional segmentation target image in three dimensions based on the similarity of neighboring pixels.
FIG. 5 shows images of neighboring pixels automatically generated by the image expansion unit according to an embodiment of the present invention.
FIG. 6 shows a three-dimensional output image in which a hue value is set for the region of interest by the volume image generation module according to an embodiment of the present invention.
FIG. 7 illustrates a medical image segmentation method according to an embodiment of the present invention.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings. However, the present invention is not limited to the exemplary embodiments. Like reference numerals in the drawings denote members performing substantially the same function.

The objects and effects of the present invention will become apparent from the following description, but are not limited by it. In the following description, well-known functions or constructions are not described in detail, since they would obscure the invention with unnecessary detail.

FIG. 1 shows a medical image segmentation apparatus 1 according to an embodiment of the present invention. Referring to FIG. 1, the medical image segmentation apparatus 1 may include a display unit 10, a user input unit 30, an image expansion unit 50, and an image segmentation unit 70.

The display unit 10 can display a three-dimensional medical image. The three-dimensional medical image means a three-dimensional image captured by a medical imaging apparatus such as an MRI, X-ray, or CT scanner.

FIG. 2 shows a medical image displayed on the display unit 10. Referring to FIG. 2, the display unit 10 may additionally display a two-dimensional image of the horizontal, sagittal, or coronal plane at the coordinates selected by the user in the three-dimensional medical image.

FIG. 2 shows an example of an image of a horizontal plane 3 selected by the user in the three-dimensional medical image. In addition to the horizontal plane 3, the display unit 10 can display two-dimensional images of the sagittal plane, the cross-section that divides the body into left and right halves, and the coronal plane, the cross-section that divides the body into front and back halves.

The display unit 10 displays the three-dimensional image and the two-dimensional cross-sectional images selected by the user together, so that the user can accurately select the area to be segmented. A two-dimensional cross-sectional image displayed on the display unit 10 may be a two-dimensional image in which no region of interest has yet been selected by the user, or it may be a two-dimensional segmentation target image generated by the user input unit 30 and the image expansion unit 50.

The user input unit 30 may generate a two-dimensional segmentation target image in which a region of interest and a background region to be separated are selected by the user from the three-dimensional medical image displayed on the display unit 10.

The user input unit 30 receives x, y, and z coordinate values from the user in the three-dimensional medical image displayed on the display unit 10, and can generate a two-dimensional segmentation target image of the horizontal, sagittal, or coronal plane at those coordinates. The user input unit 30 can additionally receive a setting range for the region of interest and the background region selected by the user in the two-dimensional segmentation target image.

FIG. 3 shows a two-dimensional segmentation target image input into the user input unit 30. Referring to FIG. 3, the user selects one cross-section of the horizontal, sagittal, or coronal plane of the three-dimensional medical image displayed on the display unit 10. The selected two-dimensional image is determined by the three-dimensional coordinates chosen by the user: the user can select the horizontal plane at the desired position by adjusting the y-axis displacement, the sagittal plane by adjusting the x-axis displacement, and the coronal plane by adjusting the z-axis displacement.

The two-dimensional image of the selected horizontal, sagittal, or coronal plane is an image containing the region of interest to be segmented by the user, and is hereinafter referred to as a two-dimensional segmentation target image. The user can then select the region of interest and the background region to be separated within the selected cross-section. A well-known input interface such as a scroll bar, mouse click, or wheel can be used for this selection.

The region of interest and the background region may be selected in the form of a point, a line, or a closed area. The user input unit 30 generates the two-dimensional segmentation target image selected by the user, and the display unit 10 can display it again on the screen. The two-dimensional segmentation target image and the image data of the region of interest and background region may be transmitted to the image expansion unit 50.

The user input unit 30 feeds back the three-dimensional output image generated by the image segmentation unit 70, described later, and can receive an additional two-dimensional segmentation target image in which the region of interest and the background region are further refined by the user.

In this case, when the feedback result is transmitted to the user input unit 30, the display unit 10 displays the three-dimensional output image produced by the image segmentation unit instead of the original three-dimensional medical image. In the displayed three-dimensional output image, the user can repeat the selection process described above.

This feedback process constitutes interactive image segmentation: the user checks the segmentation result and provides additional information to the system when the result contains an error. Because the result can be corrected and refined through feedback between the user and the system, user convenience improves and a more precise segmentation image can be obtained. For this interactive feedback to be effective, the feedback rate on the segmentation result must be high. The medical image segmentation apparatus 1 according to the present embodiment expands the two-dimensional segmentation target image generated by the user input unit 30 three-dimensionally on the system side. Therefore, the user does not have to repeatedly select similar two-dimensional images, and the speed of the feedback process can be dramatically improved.

The image expansion unit 50 can expand the two-dimensional segmentation target image into three dimensions, based on the similarity of neighboring pixels.

The image expansion unit 50 may include a database storing a prior-information processing list 501 and a prior-information list 503. The prior-information processing list 501 can be understood as the list of pixels of the two-dimensional segmentation target image generated by the user input unit 30 that still require processing. The prior-information list 503 can be understood as the list of pixels already processed during the expansion of the two-dimensional segmentation target image by the image expansion unit 50.

FIG. 4 conceptually shows how the image expansion unit 50 expands a two-dimensional segmentation target image in three dimensions based on the similarity of neighboring pixels. When the two-dimensional segmentation target image is input to the image expansion unit 50, all of its pixels are added to the prior-information processing list 501.

Then, the image expansion unit 50 generates a neighboring-pixel list for each pixel Vc in the prior-information processing list. The neighboring pixels are all pixels adjacent along the x, y, and z axes in three-dimensional spatial coordinates. The example of FIG. 4 considers a 3×3×3 neighborhood, in which case the number of neighboring pixels is 26. Pixels already included in the prior-information list 503 may be excluded from the neighboring-pixel list to avoid duplicate computation.

Then, the image expansion unit 50 calculates the similarity between a specific pixel Vc of the prior-information processing list 501 and its neighboring pixels, and adds each neighbor whose dissimilarity is below the threshold value to the prior-information list 503. The dissimilarity may be the absolute value of the difference between the two pixel values. The threshold value is chosen by the user so that the two-dimensional segmentation target image is expanded only to adjacent pixels with similar pixel values. While unprocessed pixels remain in the prior-information processing list 501, the above process is repeated for each such pixel Vc.
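The expansion described above can be sketched as a breadth-first growth over 26-neighbors. This is only an illustration under assumed names (`expand_target_image`, the seed list, the threshold are all hypothetical), not the patented implementation:

```python
from collections import deque

import numpy as np

def expand_target_image(volume, seed_voxels, threshold):
    """Expand the seed voxels (the pixels of the 2-D segmentation target
    image) through the 3-D volume: each of the 26 neighbors of a pending
    voxel is added when the absolute difference between the two pixel
    values is below the threshold."""
    todo = deque(seed_voxels)   # plays the role of the prior-information processing list
    done = set(seed_voxels)     # plays the role of the prior-information list
    dims = volume.shape
    while todo:
        z, y, x = todo.popleft()
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    n = (z + dz, y + dy, x + dx)
                    if n == (z, y, x) or n in done:
                        continue
                    if not all(0 <= n[i] < dims[i] for i in range(3)):
                        continue
                    if abs(int(volume[n]) - int(volume[z, y, x])) < threshold:
                        done.add(n)
                        todo.append(n)
    return done

# A 3-slice volume whose middle slice is the target image; the top slice
# has similar pixel values and is absorbed, the bottom slice differs
# strongly, so the expansion stops there.
vol = np.array([[[100, 100]], [[100, 100]], [[255, 255]]], dtype=np.uint8)
grown = expand_target_image(vol, [(1, 0, 0), (1, 0, 1)], threshold=10)
```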

Through this expansion of the two-dimensional segmentation target image, the image expansion unit 50 acquires three-dimensional pixel data around the two-dimensional segmentation target image. From this set of pixels, the image expansion unit 50 generates a three-dimensionally expanded segmentation target image.

FIG. 5 shows images of neighboring pixels automatically generated by the image expansion unit 50. The adjacent two-dimensional images 52, 53, 54, and 55 located above and below the two-dimensional segmentation target image 51 are acquired, with the acquisition criterion being pixel values similar to those of the two-dimensional segmentation target image. Referring to FIG. 5, the upper adjacent images 52 and 53 and the lower adjacent images 54 and 55, whose pixel values lie within the threshold of those of the two-dimensional segmentation target image 51, have been obtained.

The expanded two-dimensional segmentation target images, including the adjacent images, can each be stored in a separate database. Each two-dimensional image is then transmitted to the image segmentation unit 70, and the process of separating the region of interest and the background region is performed for each image.

As described above, the image expansion unit 50 can perform the three-dimensional expansion process considering all neighboring pixels along the x, y, and z axes of the two-dimensional segmentation target image. It is therefore possible to automatically acquire prior information for a plurality of neighboring two-dimensional images from a single two-dimensional segmentation target image. The resulting two-dimensional segmentation target images, including the adjacent images, are transmitted to the image segmentation unit 70.

The image segmentation unit 70 may generate a three-dimensional output image by separating the region of interest and the background region in the three-dimensional segmentation target image generated by the image expansion unit 50. Here, the image segmentation unit 70 receives from the image expansion unit 50 the set of two-dimensional segmentation target images, including the adjacent images, produced during the expansion. Because these two-dimensional images together form the three-dimensional image to be segmented, the adjacent two-dimensional segmentation target images obtained by the image expansion unit 50 are collectively referred to as the three-dimensional segmentation target image.

The image division unit 70 may include a first database 701, a second database 703, a third database 705, a first module 707, and a second module 709.

The first database 701 may store image data of the region of interest of the three-dimensional segmentation target image. The second database 703 may store image data of the background region of the three-dimensional segmentation target image. The third database 705 may store image data of the area excluding the region of interest and the background region in the three-dimensional segmentation target image.

The first module 707 may sequentially change a reference pixel value and match the pixels of the data stored in the first database 701 and the second database 703 whose pixel values equal the reference pixel value.

The second module 709 can set, as the division boundary surface, the points at which the coordinate values of the pixels matched by the first module 707 from the two databases coincide. With this algorithm, the division boundary surface of the three-dimensional segmentation target image can be determined quickly.

The process by which the first module 707 and the second module 709 set the boundary is conceptually shown in the following figure.

[Drawing]

Figure 112015060357427-pat00001

Since the three-dimensional segmentation target image consists of a set of two-dimensional images, the image segmentation unit 70 can perform segmentation in units of two-dimensional images. Referring to the figure above in relation to the process of separating the region of interest L1 and the background region L2, the process can be understood as filling water from the conceptually lowest valleys to form pools, and distinguishing the two pools at the point where they meet.

Applying this to image segmentation, the x-axis in the figure can be understood as the coordinate values of the three-dimensional segmentation target image (or of each two-dimensional segmentation target image); that is, the x-axis values give the positions of the pixels in the three-dimensional segmentation target image, and the y-axis gives the pixel value. The first module 707 scans the pixel value starting from 0 in the three-dimensional segmentation target image: it sequentially increases the pixel value from 0 to 255 and matches the pixels of the three-dimensional segmentation target image having the corresponding pixel value.

The pixel value that the first module 707 varies from 0 to 255 is referred to as the reference pixel value. Referring to the figure, the pixels matched as the reference pixel value is scanned upward can be pictured as water gradually rising. Here, L1 is the pixel information of the first database 701, in which the image data of the region of interest is stored; L2 is the pixel information of the second database 703, in which the image data of the background region is stored; and the remaining pixels can be understood as the pixel information of the third database 705. During this process, boundary surfaces arise where the coordinate values of the matched pixels coincide. The second module 709 sets, as the division boundary, the points at which the coordinate values of the matched pixels of the data stored in the first database 701 and the second database 703 coincide. The image segmentation unit 70 repeats this process until the pixel value reaches the system maximum. Once the division boundary is determined, the image segmentation unit 70 can generate a three-dimensional output image by separating the region of interest and the background region along the division boundary.

Although not shown in the drawings, the image segmentation unit 70 may further include a three-dimensional volume image generation module and a mesh module. The three-dimensional volume image generation module processes the three-dimensional output image so that the user can view the segmented result from a desired viewpoint, and can provide a user interface for enlarging, reducing, rotating, and moving the region of interest through an input device such as a mouse or keyboard. A volume rendering algorithm may be applied in the volume image generation module. In addition, a hue value can be set for the segmented region of interest within the three-dimensional medical image so that the position of the region of interest in the human body can be visually confirmed.

FIG. 6 shows a three-dimensional output image in which a hue value is set for the region of interest by the volume image generation module.

The mesh module converts the segmented three-dimensional output image into a mesh. The mesh can be converted into file formats usable by various 3D editing tools or CAD/CAM software. In particular, if it is converted to the (.stl) file format, the region of interest in the human body can be physically output with a 3D printer, allowing medical practitioners to use it for various purposes, including patient explanations and student lectures.
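The .stl output mentioned above has a simple ASCII form. As a minimal sketch (the function name and zero normals are assumptions, not the mesh module's actual behavior), a triangle list can be written out as follows:

```python
def write_ascii_stl(path, triangles, name="roi"):
    """Write triangles (each a tuple of three (x, y, z) vertices) as an
    ASCII .stl file. Normals are written as zero vectors, which most
    slicing tools recompute from the vertex order."""
    with open(path, "w") as f:
        f.write("solid %s\n" % name)
        for tri in triangles:
            f.write("  facet normal 0 0 0\n")
            f.write("    outer loop\n")
            for x, y, z in tri:
                f.write("      vertex %g %g %g\n" % (x, y, z))
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write("endsolid %s\n" % name)

# One triangle of a hypothetical region-of-interest mesh.
write_ascii_stl("roi.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```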

FIG. 7 illustrates a medical image segmentation method according to an embodiment of the present invention. Referring to FIG. 7, the medical image segmentation method includes the steps of (a) displaying an image (S10), (b) creating a division target image (S30), (c) expanding the image (S50), and (d) creating an output image (S70).

Step (a) (S10) displays a three-dimensional medical image. Since step (a) corresponds to the process performed in the display unit 10, a detailed description thereof is omitted.

Step (b) (S30) may generate a two-dimensional division target image in which the region of interest and the background region to be divided are selected by the user in the three-dimensional medical image displayed in step (a) (S10). Step (b) (S30) may include a section selection step S301, a region-of-interest selection step S303, and a background-region selection step S305.

The section selection step S301 is a step in which the user selects, from the three-dimensional medical image displayed on the display unit 10, the coronal, sagittal, or horizontal-plane sectional image to be divided.

In the region-of-interest selection step S303 and the background-region selection step S305, the user marks, with points or lines, the region of interest to be divided and the background region, respectively, in the two-dimensional image of the horizontal, coronal, or sagittal plane selected in step S301.

Step (c) (S50) may expand the two-dimensional division target image generated in step (b) into three dimensions. Since step (c) (S50) corresponds to the process performed in the image expanding unit 50, a detailed description thereof is omitted.

Step (d) (S70) may generate a three-dimensional output image by dividing the region of interest and the background region in the three-dimensional division target image generated in step (c) (S50). Step (d) (S70) may include a pixel value matching step S701, a division boundary setting step S703, a dividing step S705, and a three-dimensional image generating step S707.

The pixel value matching step S701 refers to the process performed in the first module 707. The division boundary setting step S703 is the process performed in the second module 709. The dividing step S705 separates the image of the region of interest from the image of the background region based on the boundary determined in the division boundary setting step S703. The three-dimensional image generating step S707 is the process performed in the volume image generation module: it generates a three-dimensional image of the region of interest by matching the plurality of divided two-dimensional images obtained as a result of the dividing step S705.
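
Putting the method steps together, the per-slice results of S705 can be assembled into the three-dimensional output of S707 by stacking. The sketch below is a hypothetical orchestration; `segment_slice` stands in for the S701–S705 logic and is not a function named in the patent.

```python
import numpy as np

def build_output_volume(volume, segment_slice):
    """Run a 2-D segmentation routine on every slice of a 3-D volume
    (steps S701-S705 per slice), then stack the per-slice ROI masks
    into a 3-D output image (step S707)."""
    masks = [segment_slice(volume[i]) for i in range(volume.shape[0])]
    return np.stack(masks, axis=0)
```

Any per-slice segmenter with the same signature, such as the flooding scan described for the first and second modules, could be plugged in here.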

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention. Therefore, the scope of the present invention should not be limited to the above-described embodiments, but should be determined by the appended claims and their equivalents.

1: medical image dividing device 10: display part
30: user input unit 50: image expanding unit
501: prior-information processing list 503: prior-information list
70: Image partitioning part 701: First database
703: second database 705: third database
707: first module 709: second module
S10: display step S30: division target image creation step
S301: Selection of section S303: Selection of region of interest
S305: Select background area S50: Image expansion step
S70: Output image generation step S701: Pixel value matching step
S703: Division boundary setting step S705: Dividing step
S707: Three-dimensional image creation step

Claims (9)

A display unit for displaying a three-dimensional medical image image;
A user input unit for generating a two-dimensional division target image in which the region of interest and the background region to be divided are selected by the user in the three-dimensional medical image displayed on the display unit;
An image expanding unit for expanding the two-dimensional division target image in three dimensions; And
And an image divider for dividing the region of interest and the background region in the three-dimensional divided image generated by the image expanding unit to generate a three-dimensional output image,
Wherein a three-dimensional image of the region of interest is generated from the two-dimensional division target image selected by the user,
Wherein the image dividing unit comprises:
A first database for storing image data of a region of interest of the three-dimensional divided image; And
And a second database for storing image data for a background region of the three-dimensional divided object image,
A first module for sequentially changing a reference pixel value and matching a pixel value of data stored in the first database and the second database with the reference pixel value; And
Further comprising a second module for setting a point at which the coordinate values of the data in which the pixel values matched in the first module coincide with the dividing interface.
The method according to claim 1,
The display unit includes:
Wherein a two-dimensional sectional image of the horizontal, sagittal, or coronal plane is displayed at coordinates selected by the user in the three-dimensional medical image.
The method according to claim 1,
Wherein the user input unit comprises:
Wherein the three-dimensional output image generated by the image dividing unit is fed back, and a two-dimensional division target image is additionally input by the user on the three-dimensional output image.
The method according to claim 1,
Wherein the user input unit comprises:
Wherein x, y, and z coordinate values are received from the user in the three-dimensional medical image displayed on the display unit, to generate a two-dimensional division target image of the horizontal, sagittal, or coronal plane.
5. The method of claim 4,
Wherein the user input unit comprises:
And wherein setting ranges for the region of interest and the background region selected by the user are additionally input in the two-dimensional division target image.
The method according to claim 1,
The image expansion unit may include:
And expands the two-dimensional division target image into three dimensions based on the degree of similarity of neighboring pixels.
delete
delete
(a) displaying a three-dimensional medical image;
(b) generating a two-dimensional divided object image in which a region of interest and a background region to be divided are selected from a user in the three-dimensional medical image image displayed in the step (a);
(c) expanding the two-dimensional division target image in three dimensions; And
(d) generating a three-dimensional output image by dividing the ROI and the ROI from the three-dimensional ROI generated in the step (c)
Wherein a three-dimensional image of the region of interest is generated from the two-dimensional division target image selected by the user,
The step (d)
(d-1) storing image data for a region of interest of the three-dimensional divided object image in a first database;
(d-2) storing image data for a background region of the three-dimensional division target image in a second database;
(d-3) sequentially changing a reference pixel value and matching a pixel value of data stored in the first database and the second database with the reference pixel value; And
(d-4) setting, as a division boundary, the points at which the coordinate values of the data whose pixel values are matched coincide with each other.
KR1020150088559A 2015-06-22 2015-06-22 Device and method for medical image segmentation based on user interactive input KR101727670B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150088559A KR101727670B1 (en) 2015-06-22 2015-06-22 Device and method for medical image segmentation based on user interactive input

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150088559A KR101727670B1 (en) 2015-06-22 2015-06-22 Device and method for medical image segmentation based on user interactive input

Publications (2)

Publication Number Publication Date
KR20170000040A KR20170000040A (en) 2017-01-02
KR101727670B1 true KR101727670B1 (en) 2017-04-18

Family

ID=57810430

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150088559A KR101727670B1 (en) 2015-06-22 2015-06-22 Device and method for medical image segmentation based on user interactive input

Country Status (1)

Country Link
KR (1) KR101727670B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101930644B1 (en) 2017-09-15 2018-12-18 한국과학기술원 Method and apparatus for fully automated segmenation of a joint using the patient-specific optimal thresholding and watershed algorithm
CN111723875B (en) * 2020-07-16 2021-06-22 哈尔滨工业大学 SAR three-dimensional rotating ship target refocusing method based on CV-RefocusNet
CN112766258B (en) * 2020-12-31 2024-07-02 深圳市联影高端医疗装备创新研究院 Image segmentation method, system, electronic device and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007140810A (en) * 2005-11-17 2007-06-07 Korea Inst Of Industrial Technology Three-dimensional shape retrieval device and method
JP4675509B2 (en) * 2001-07-04 2011-04-27 株式会社日立メディコ Apparatus and method for extracting and displaying specific region of organ

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070014473A (en) 2005-07-28 2007-02-01 삼성전자주식회사 Apparatus for manufacturing a semiconductor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4675509B2 (en) * 2001-07-04 2011-04-27 株式会社日立メディコ Apparatus and method for extracting and displaying specific region of organ
JP2007140810A (en) * 2005-11-17 2007-06-07 Korea Inst Of Industrial Technology Three-dimensional shape retrieval device and method

Also Published As

Publication number Publication date
KR20170000040A (en) 2017-01-02

Similar Documents

Publication Publication Date Title
US8532359B2 (en) Biodata model preparation method and apparatus, data structure of biodata model and data storage device of biodata model, and load dispersion method and apparatus of 3D data model
US7450749B2 (en) Image processing method for interacting with a 3-D surface represented in a 3-D image
CN106663309B (en) Method and storage medium for user-guided bone segmentation in medical imaging
US7773786B2 (en) Method and apparatus for three-dimensional interactive tools for semi-automatic segmentation and editing of image objects
Kalra Developing fe human models from medical images
CN107169919B (en) Method and system for accelerated reading of 3D medical volumes
JP2019526124A (en) Method, apparatus and system for reconstructing an image of a three-dimensional surface
US9697600B2 (en) Multi-modal segmentatin of image data
KR101760287B1 (en) Device and method for medical image segmentation
WO2007058993A2 (en) Surface-based characteristic path generation
CN110189352A (en) A kind of root of the tooth extracting method based on oral cavity CBCT image
CN102858266A (en) Reduction and removal of artifacts from a three-dimensional dental X-ray data set using surface scan information
KR101105494B1 (en) A reconstruction method of patient-customized 3-D human bone model
US11744554B2 (en) Systems and methods of determining dimensions of structures in medical images
CN101689298A (en) Imaging system and imaging method for imaging an object
EP2976737B1 (en) View classification-based model initialization
CN107194909A (en) Medical image-processing apparatus and medical imaging processing routine
US8086013B2 (en) Image processing apparatus and image processing method
CN102132322B (en) Apparatus for determining modification of size of object
KR20180009707A (en) Image processing apparatus, image processing method, and, computer readable medium
KR101727670B1 (en) Device and method for medical image segmentation based on user interactive input
CN113645896A (en) System for surgical planning, surgical navigation and imaging
US9530238B2 (en) Image processing apparatus, method and program utilizing an opacity curve for endoscopic images
Goswami et al. 3D modeling of X-ray images: a review
JP6840481B2 (en) Image processing device and image processing method

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant