KR101185728B1 - A segmentatin method of medical image and apparatus thereof - Google Patents


Info

Publication number
KR101185728B1
Authority
KR
South Korea
Prior art keywords
segmentation
region
medical image
determined
pointer
Prior art date
Application number
KR1020110095149A
Other languages
Korean (ko)
Inventor
김수경
김한영
Original Assignee
주식회사 인피니트헬스케어
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 인피니트헬스케어
Priority to KR1020110095149A
Priority to PCT/KR2012/007178 (WO2013042889A1)
Application granted
Publication of KR101185728B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computed tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20101Interactive definition of point of interest, landmark or seed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

PURPOSE: A method and apparatus for segmenting a medical image are provided to obtain the optimal 3D segmentation volume by determining the optimal segmentation seed according to the kind of lesion.

CONSTITUTION: Location information of a pointer is extracted according to the input of a user (S120). The kind of lesion shown in a slice medical image is determined (S150). A segmentation region including the location of the pointer is determined by performing a predetermined segmentation algorithm according to the kind of lesion (S160). The determined segmentation region is selected as a lesion diagnosis region of the slice medical image.

Reference numerals: (AA) Start; (BB) Finish; (S110) Controlling a pointer according to a user input; (S120) Extracting the location information of the pointer; (S130) Removing granular noise; (S140) Extracting the optimal location information of the pointer; (S150) Determining the kind of lesion based on an angle profile and medical image information relating to the optimal location information; (S160) Determining a segmentation region using a preset segmentation algorithm according to the kind of lesion; (S170) Displaying the determined segmentation region in advance on the slice medical image; (S180) Is the segmentation region selected by the user?; (S190) Determining the selected segmentation region as a seed for three-dimensional volume segmentation; (S200) Determining each segmentation region of a plurality of different slice medical images using the seed; (S210) Generating a three-dimensional segmentation volume using the seed and the segmentation regions

Description

Segmentation method in medical imaging and its apparatus {A SEGMENTATIN METHOD OF MEDICAL IMAGE AND APPARATUS THEREOF}

The present invention relates to segmentation in medical images and, more particularly, to a segmentation method and apparatus in which the lesion type in a slice medical image, for example a hyper tumor, a lung tumor, or a brain tumor, is determined; an optimal segmentation region for that lesion type is selected through user interaction using a segmentation algorithm matched to the lesion type; and the selected region serves as an optimal seed for generating a three-dimensional segmentation volume of the lesion, so that the user can select the seed and obtain an optimized three-dimensional segmentation volume.

The present invention is derived from research carried out by the Ministry of Knowledge Economy and the Korea Industrial Technology Evaluation and Management Center as part of the Knowledge Economy Technology Innovation Project (Industrial Source Technology Development Project) [Project Number: 10038419, Project Name: Intelligent Image Diagnosis and Treatment Support system].

In cancer care, it is important to diagnose and carefully monitor the disease as early as possible, and doctors are interested not only in the primary tumor but also in secondary tumors that may have metastasized elsewhere in the body.

Such a tumor-like lesion may be diagnosed and monitored through a three-dimensional segmentation volume, which may be formed from segmentation of each of the plurality of two-dimensional medical images.

In a 3D segmentation volume method according to the related art, when a doctor, that is, the user, selects a specific position to be monitored in a 2D medical image, 2D segmentation is performed at that position, and a three-dimensional segmentation volume is generated based on the 2D segmentation result.

With the segmentation method according to the related art, the user cannot see the segmentation result at the position selected in the 2D medical image; the result becomes known only through the 3D segmentation volume generated after the 3D segmentation process is completed. In other words, if the generated 3D segmentation volume is not satisfactory, the user must reselect the location of the lesion in the 2D medical image and check the result again through a new 3D segmentation volume. This places a load on the system or device that creates the segmentation volume and is inconvenient for the user.

In addition, the conventional segmentation method does not determine the lesion type from the slice medical image and therefore cannot perform segmentation optimized for the lesion type; as a result, it is difficult to obtain an optimized two-dimensional segmentation and, from it, an optimized three-dimensional segmentation volume.

Thus, there is a need for a method of obtaining an optimized two-dimensional segmentation seed for the lesion type selected by the user in a slice medical image.

United States Patent No. 7953265 (registered 2011.05.31)
United States Patent Application Publication No. 2009-97727 (published 2009.04.16)

The present invention is derived to solve the above problems of the prior art. It is an object of the present invention to provide a segmentation method and apparatus for medical images that obtains an optimal segmentation seed for each lesion type by determining the lesion type in a slice medical image and performing segmentation corresponding to the determined lesion type.

Specifically, it is an object of the present invention to provide a segmentation method for medical images that determines the lesion type at the pointer position, for example a hyper tumor, a lung tumor, a brain tumor, or a general tumor, using the brightness value at the pointer position and an angle profile around that position, and performs a predetermined segmentation algorithm according to the determined lesion type, thereby acquiring an optimal segmentation seed for generating a three-dimensional segmentation volume of the lesion.

It is a further object of the present invention to provide a segmentation method and apparatus that obtain an optimal three-dimensional segmentation volume by acquiring an optimal two-dimensional segmentation seed according to the lesion type, thereby reducing the load of obtaining the three-dimensional segmentation volume from the medical image.

It is a further object of the present invention to provide a segmentation method and apparatus in which the optimal two-dimensional segmentation seed determined according to the lesion type is displayed in advance on the slice medical image and confirmed by the user's selection, so that the user can select the optimal segmentation seed.

In order to achieve the above objects, a segmentation method in a medical image according to an embodiment of the present invention comprises the steps of: extracting position information of a pointer according to a user input from a slice medical image displayed on a screen; determining a lesion type based on at least one of information of the slice medical image related to the extracted position information of the pointer and an angle profile based on the position information of the pointer; determining a segmentation region including the position of the pointer using a segmentation algorithm preset according to the determined lesion type; and selecting the segmentation region as a lesion diagnosis region for the slice medical image.

The method may further include extracting a lung region and a bone region from the slice medical image, wherein the determining may include: determining the lesion type as a lung tumor if the proportion of the angle profile that meets the lung region is greater than a preset first reference value, and determining the lesion type as a brain tumor if the proportion of the angle profile that meets the extracted bone region is greater than a preset second reference value.

The determining may include: calculating a range of brightness values based on information of the slice medical image associated with the extracted position information of the pointer; determining a first segmentation region including the position of the pointer using a segmentation algorithm preset according to the calculated brightness-value range and the determined lesion type; applying a preset fitting model to the determined first segmentation region; and determining an optimal segmentation region from the first segmentation region by using the fitting model.

The determining of the first segmentation region may include: determining a second segmentation region including the position of the pointer using the calculated brightness-value range; when the determined lesion type is a lung tumor, dividing the angle profile of the second segmentation region into a first profile that meets the pre-extracted lung region and a second profile that does not meet the lung region; interpolating the second profile with respect to the first profile to form a fence; and determining the first segmentation region from the second segmentation region based on the first profile and the formed fence.

The determining of the first segmentation region may include: determining a second segmentation region including the position of the pointer using the calculated brightness-value range; selecting an anti-seed outside the second segmentation region when the determined lesion type is a brain tumor; forming an anti-seed region based on the brightness value of the selected anti-seed; and determining the first segmentation region from the second segmentation region by using the formed anti-seed region.
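The anti-seed idea recited above can be sketched in Python/NumPy. This is an illustrative sketch only: the function name `apply_anti_seed`, the tolerance `tol`, and the use of a simple brightness band around the anti-seed pixel are assumptions, since the claim does not specify how the anti-seed region is formed.

```python
import numpy as np

def apply_anti_seed(mask, image, anti_seed, tol=30.0):
    """Illustrative sketch: subtract an anti-seed region from an initial
    (second) segmentation mask. The anti-seed region is taken here as all
    pixels whose brightness lies within `tol` of the anti-seed pixel's
    brightness; `tol` is an assumed parameter, not from the patent."""
    anti_val = float(image[anti_seed])
    anti_region = np.abs(image.astype(float) - anti_val) <= tol
    # Keep only mask pixels that do not fall in the anti-seed region.
    return mask & ~anti_region
```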

The determining may include determining the lesion type as a hyper tumor if the brightness value at the position of the pointer is greater than a preset threshold value; when the lesion type is determined as a hyper tumor, a region including the position of the pointer whose brightness is greater than or equal to the threshold value may be determined as the segmentation region.

The method may further include displaying the determined segmentation region in advance on the slice medical image, and when the displayed segmentation region is selected by the user, the segmentation region may be selected as the lesion diagnosis region for the slice medical image.

The extracting may include detecting optimal position information within a predetermined peripheral region based on the brightness value at the extracted position of the pointer, and the lesion type may be determined based on at least one of information of the slice medical image related to the optimal position information and an angle profile based on the optimal position information of the pointer.

The method may further include: determining the selected lesion diagnosis region as a seed for segmentation of a 3D volume image; determining a segmentation region of each of a plurality of slice medical images associated with the slice medical image based on the determined seed; and generating a 3D segmentation volume by using the determined seed and the segmentation regions of the plurality of slice medical images.

A segmentation apparatus in a medical image according to an embodiment of the present invention includes: a position extraction unit for extracting position information of a pointer according to a user input from a slice medical image displayed on a screen; a discrimination unit for determining a lesion type based on at least one of slice medical image information related to the extracted position information of the pointer and an angle profile based on the position information of the pointer; a determination unit for determining a segmentation region including the position of the pointer using a segmentation algorithm preset according to the determined lesion type; and a selection unit for selecting the segmentation region as the lesion diagnosis region for the slice medical image.

According to the present invention, the lesion type is determined in the slice medical image, a segmentation region corresponding to the determined lesion type is displayed in advance, and the displayed segmentation region is confirmed by the user's selection. By using the selected segmentation region as a seed for the lesion and performing segmentation of the other slice medical images, a 3D segmentation volume is generated; thus an optimal segmentation seed according to the lesion type is obtained and an optimal 3D segmentation volume can be generated.

Furthermore, since the present invention obtains an optimal two-dimensional segmentation seed for generating a three-dimensional segmentation volume by using a segmentation algorithm according to the type of lesion, the system load for generating an optimal three-dimensional segmentation volume can be reduced.

Specifically, the present invention displays the segmentation result for the determined lesion type in advance on the screen so that the user can select an optimal segmentation seed, and since the segmentation seed is confirmed by the user's selection, the load of creating the three-dimensional segmentation volume is reduced.

In other words, the present invention lets the user decide whether to adopt the pre-segmentation result for the lesion in the currently displayed two-dimensional slice image, so the validity of the pre-segmentation result can be verified quickly. Furthermore, since a pre-segmentation result adopted by the user has already been validated, using the verified result as the seed region for segmentation of the 3D image means the seed carries high-quality information, and an excellent 3D segmentation result can be obtained with relatively few resources.

In addition, since the present invention determines an optimal segmentation seed by determining the type of lesion, an optimal three-dimensional segmentation result for various types of lesions can be obtained, thereby increasing reliability of the segmentation result.

FIG. 1 is a flowchart illustrating the operation of a segmentation method in a medical image according to an exemplary embodiment.
FIG. 2 is an operation flowchart of an embodiment of step S140 shown in FIG. 1.
FIG. 3 is a flowchart illustrating an embodiment of step S150 shown in FIG. 1.
FIG. 4 is a flowchart illustrating an embodiment of step S160 shown in FIG. 1.
FIG. 5 is a diagram illustrating an example of a medical image for describing the operation flowchart illustrated in FIG. 4.
FIG. 6 is an operation flowchart of another embodiment of step S160 shown in FIG. 1.
FIG. 7 is a diagram illustrating an example of a medical image for describing the operation flowchart illustrated in FIG. 6.
FIG. 8 is a flowchart illustrating still another embodiment of step S160 shown in FIG. 1.
FIG. 9 is a diagram illustrating an example of a medical image for describing the operation flowchart illustrated in FIG. 8.
FIG. 10 illustrates the configuration of a segmentation apparatus in a medical image according to an embodiment of the present invention.
FIG. 11 is a diagram illustrating an example configuration of the determination unit illustrated in FIG. 10.

Other objects and features of the present invention will become apparent from the following description of embodiments with reference to the accompanying drawings.

Preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear.

However, the present invention is not limited to or by these embodiments. Like reference numerals in the drawings denote like elements.

Hereinafter, a segmentation method and a device in a medical image according to an embodiment of the present invention will be described in detail with reference to FIGS. 1 to 11.

FIG. 1 is a flowchart illustrating the operation of a segmentation method in a medical image according to an exemplary embodiment.

Referring to FIG. 1, in the segmentation method, a pointer displayed on a slice medical image selected by the user is controlled according to a user input, for example a mouse movement (S110).

When the pointer position is changed or the pointer stops as a result of the pointer control according to the user input, the position information of the pointer is extracted (S120).

Here, the location information of the pointer may be coordinate information in the slice medical image.

When the location information of the pointer is extracted, the granular noise is removed from the slice medical image, and the optimal location information of the pointer is extracted based on the slice medical image information related to the extracted location information of the pointer (S130 and S140).

Of course, the step S130 of removing the granular noise may be performed before the slice medical image is displayed on the screen when the slice medical image is selected by the user.

The optimal position information of the pointer in step S140 may be a seed point for determining the segmentation area. The step S140 of extracting the optimal position information will be described in detail with reference to FIG. 2 as follows.

FIG. 2 shows an operation flowchart of an embodiment of step S140 shown in FIG. 1.

Referring to FIG. 2, in the extracting of the optimal position information (S140), an average of the brightness values of a circular or rectangular region of predetermined size around the position of the pointer, that is, the pointer position according to the user's motion or input, is calculated (S220).

Here, the brightness values of the circular or rectangular region mean slice medical image information corresponding to the circular or rectangular region in the slice medical image.

In operation S220, when the average of the brightness values of the predetermined region including the pointer position has been calculated, the calculated average is compared with the brightness value at the pointer position, and it is determined whether the brightness value at the pointer position lies within an error range of the average (S230, S240).

For example, it is determined whether the brightness value of the pointer position lies between "average value − a" and "average value + a". Here, the value a may be predetermined or dynamically determined depending on the situation.

As a result of the determination in step S240, when the brightness value at the pointer position is outside the error range of the average, the optimal position information is extracted, based on the average, from among the positions whose brightness values are included in the predetermined region.

Here, the optimal position information in the predetermined region may be the position having a brightness value corresponding to the average among the brightness values included in the region. When there are multiple positions whose brightness corresponds to the average, a position adjacent to the pointer may be extracted as the optimal position information; in particular, the position closest to the pointer may be chosen. However, the present invention is not limited thereto, and any position having the average brightness value may be arbitrarily extracted as the optimal position information.

On the other hand, if the brightness value at the pointer position lies within the error range of the average as a result of the determination in step S240, the current position information of the pointer is extracted as the optimal position information (S260).
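The seed-refinement procedure of steps S220 to S260 can be sketched as follows. This is an illustrative Python/NumPy sketch: the neighborhood `radius` and the error range `tol` (the value a above) are assumed parameters, and ties among average-valued pixels are broken by distance to the pointer, as described.

```python
import numpy as np

def refine_seed(image, pointer, radius=5, tol=50.0):
    """Refine a pointer position into an optimal seed point (S220-S260).

    If the pointer's brightness lies within `tol` of the neighborhood mean,
    the pointer itself is returned; otherwise the neighborhood pixel whose
    brightness is closest to the mean (nearest to the pointer on ties) is
    chosen. `radius` and `tol` are illustrative, not from the patent."""
    y, x = pointer
    y0, y1 = max(0, y - radius), min(image.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(image.shape[1], x + radius + 1)
    patch = image[y0:y1, x0:x1].astype(float)
    mean = patch.mean()

    if abs(float(image[y, x]) - mean) <= tol:   # within the error range
        return (y, x)                           # keep the pointer (S260)

    # Otherwise pick the patch pixel whose brightness best matches the mean,
    # breaking ties by squared distance to the original pointer position.
    ys, xs = np.mgrid[y0:y1, x0:x1]
    diff = np.abs(patch - mean)
    best = diff == diff.min()
    dist = (ys - y) ** 2 + (xs - x) ** 2
    dist = np.where(best, dist, np.iinfo(np.int64).max)
    idx = np.unravel_index(np.argmin(dist), dist.shape)
    return (int(ys[idx]), int(xs[idx]))
```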

Referring back to FIG. 1, when the optimal position information of the pointer is extracted in step S140, the type of lesion at which the pointer is positioned in the slice medical image is determined based on at least one of slice medical image information related to the extracted optimal position information and an angle profile based on the optimal position information (S150).

Here, the angle profile means a profile covering 0 to 360 degrees; it may consist of profiles at equal angular intervals, for example every 10 degrees from 0 to 360 degrees, or at unequal intervals.
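An angle profile of this kind can be sampled as in the following Python/NumPy sketch, which reads brightness values along rays cast from the seed position. The ray `length` and the 10-degree `step_deg` are assumed example values; the patent only states that the profile covers 0 to 360 degrees.

```python
import numpy as np

def angle_profile(image, center, length=40, step_deg=10):
    """Sample brightness along rays from `center` every `step_deg` degrees.

    Returns a dict mapping each angle (0, 10, ..., 350) to the 1-D list of
    brightness samples along that ray, clipped at the image border."""
    y0, x0 = center
    profiles = {}
    for angle in range(0, 360, step_deg):
        rad = np.deg2rad(angle)
        samples = []
        for r in range(1, length + 1):
            y = int(round(y0 + r * np.sin(rad)))
            x = int(round(x0 + r * np.cos(rad)))
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                samples.append(float(image[y, x]))
        profiles[angle] = samples
    return profiles
```

Each ray's samples can then be tested against the lung or bone masks to compute the percentages used in the lesion-type decision below.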

When the lesion type at the pointer position in the slice medical image is determined in step S150, a segmentation region including the pointer position information or the optimal position information is determined using the segmentation algorithm preset for the determined lesion type (S160).

The segmentation region determined in step S160 is an optimal segmentation region determined by the segmentation algorithm for the lesion.

For example, when the determined lesion type is a lung tumor, the lung tumor segmentation algorithm is performed, and when the determined lesion type is a brain tumor, the brain tumor segmentation algorithm is performed, thereby determining the optimal segmentation region for the corresponding lesion.

When the optimal segmentation region including the pointer position information or the optimal position information is determined, the determined segmentation region, that is, the optimal segmentation region, is displayed in advance on the slice medical image (S170).

When the optimal segmentation region displayed in advance on the screen is selected by the user, the segmentation region is selected as the lesion diagnosis region in the slice medical image, and the selected lesion diagnosis region is determined as a seed for performing 3D volume segmentation (S180, S190).

As methods for the user to confirm the segmentation region displayed on the screen in operation S180, various inputs such as a pointer click, a double click, or a shortcut-key input may be applied.

When the seed for the corresponding lesion is determined, a segmentation region of each of the plurality of other slice medical images related to the slice medical image is determined based on the determined seed (S200).

When the segmentation region of each of the plurality of other slice medical images is determined, a 3D segmentation volume is generated using the seed and the segmentation regions of the other slice medical images (S210).
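Steps S190 to S210 can be summarized in the following Python/NumPy sketch, which propagates a user-confirmed seed region through the neighboring slices and stacks the per-slice masks into a volume. `segment_fn` is a hypothetical per-slice segmentation routine, since the patent does not prescribe the 3D algorithm here.

```python
import numpy as np

def build_3d_segmentation(slices, seed_index, seed_region, segment_fn):
    """Sketch of steps S190-S210: propagate a confirmed 2-D seed region
    through neighboring slices and stack the per-slice masks into a volume.
    `segment_fn(slice_image, prior_mask)` is an assumed per-slice
    segmentation callable returning a boolean mask."""
    masks = [None] * len(slices)
    masks[seed_index] = seed_region
    # Walk away from the seed slice in both directions, feeding each slice
    # the mask of its already-segmented neighbor as a prior.
    for i in range(seed_index + 1, len(slices)):
        masks[i] = segment_fn(slices[i], masks[i - 1])
    for i in range(seed_index - 1, -1, -1):
        masks[i] = segment_fn(slices[i], masks[i + 1])
    return np.stack(masks)  # shape: (num_slices, H, W)
```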

On the other hand, if the segmentation region previously displayed on the screen is not selected by the user in step S180, that is, if the user is not satisfied with the segmentation region, step S110 is performed again.

As described above, the present invention determines the lesion type of a slice medical image and performs the segmentation algorithm corresponding to the determined lesion type, thereby determining an optimal segmentation region for the lesion; through the user's interactive selection of the determined segmentation region, the optimal segmentation seed desired by the user can be chosen, and thus an optimal three-dimensional segmentation volume can be obtained.

In addition, since the pre-segmentation process is performed according to the pointer position information and the lesion type, the present invention has the advantage of simplified user input and a simpler user interface.

In addition, since the user checks the segmentation region displayed in advance on the slice medical image and the segmentation seed is determined by user selection, 3D segmentation performance and user satisfaction can be improved.

FIG. 3 is a flowchart illustrating an embodiment of operation S150 of FIG. 1, showing the process of discriminating between a hyper tumor, a lung tumor, a brain tumor, and a general tumor.

Referring to FIG. 3, in determining the lesion type (S150), it is determined whether the brightness value of the optimal position information of the pointer extracted in step S140 is greater than a preset threshold value, for example 500 [HU] (S310).

Here, HU denotes the Hounsfield unit.

In operation S310, if the brightness value of the optimal position information is greater than the threshold value, the lesion type is determined as a hyper tumor. If the brightness value is not greater than the threshold value, the lesion is regarded as a hypo tumor, and the process of discriminating between a lung tumor, a brain tumor, and a general tumor is performed (S320).

Here, a hyper tumor means a lesion brighter than its surroundings and a hypo tumor means a lesion darker than its surroundings; since hyper and hypo tumors are well understood by those skilled in the art, further description is omitted.

Of course, it should be appreciated that the threshold for discriminating a hyper tumor may vary depending on the situation.

If the lesion is determined to be a hypo tumor in step S310, the lung region and the bone region are extracted from the slice medical image, and the angle profiles based on the extracted lung region, the bone region, and the optimal position information are analyzed (S330, S340).

Here, analyzing the angle profile means analyzing which parts of the overall angle profile meet the lung region and the bone region, that is, what percentage of the angle profile meets the lung region or the bone region.

Of course, both the lung region and the bone region may be extracted from the slice medical image, or only one of them may be extracted. When both regions are extracted, the angle profile is analyzed with respect to both; when only one region is extracted, only the angle profile for that region needs to be analyzed.

As a method of extracting the lung region in operation S330, a region having a brightness value smaller than a first set value, for example −200 [HU], may be extracted from the slice medical image as the lung region. For example, in the slice medical image of FIG. 5, the inner region of the body having brightness values smaller than −200 [HU] is extracted as the lung region 510.

Similarly, as a method of extracting the bone region in operation S330, a region having a brightness value greater than a second set value, for example 500 [HU], may be extracted from the slice medical image as the bone region. For example, in the slice medical image illustrated in FIG. 7, a region having brightness values greater than 500 [HU] is extracted as the bone region 710.
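The lung- and bone-region extraction of step S330 reduces to simple HU thresholding, as in the following Python/NumPy sketch. Note that a practical implementation would also restrict the lung mask to the body interior, which is omitted here for brevity.

```python
import numpy as np

def extract_lung_bone_masks(slice_hu, lung_thresh=-200.0, bone_thresh=500.0):
    """Threshold masks for step S330: lung tissue below -200 HU, bone above
    500 HU (the example set values from the description). Restricting the
    lung mask to the body interior is omitted in this sketch."""
    lung_mask = slice_hu < lung_thresh
    bone_mask = slice_hu > bone_thresh
    return lung_mask, bone_mask
```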

If the proportion of the angle profile that meets the lung region analyzed in step S340 is greater than a preset first reference value, for example 50 [%], the lesion at which the pointer is located is determined as a lung tumor (S350, S360).

On the other hand, if the portion of the angle profile that meets the lung region is smaller than the first reference value as a result of the determination in step S350, it is determined whether the portion of the angle profile that meets the bone region, as analyzed in step S340, is greater than a preset second reference value, for example, 80 [%] (S370).

As a result of the determination in step S370, when the portion of the angle profile meeting the bone region is greater than the second reference value, the lesion at which the pointer is located is determined to be a brain tumor (S380).

On the contrary, as a result of the determination in step S370, when the portion of the angle profile meeting the bone region is smaller than the second reference value, the lesion at which the pointer is located is determined to be a general tumor, that is, neither a lung tumor nor a brain tumor (S390).

Of course, the first reference value and the second reference value for determining lung and brain tumors may vary according to circumstances.
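The decision rule above can be sketched as follows. This is a hedged sketch: it takes the precomputed fractions of the angle profile that meet the lung and bone regions (ray casting itself is omitted) and applies the example reference values of 50 % and 80 % from the text.

```python
def classify_hypo_lesion(hit_fractions, first_ref=0.50, second_ref=0.80):
    """Classify a hypo lesion from angle-profile hit fractions.

    hit_fractions: dict with the fraction of the angle profile meeting
    'lung' and 'bone'; first_ref/second_ref are the reference values.
    """
    if hit_fractions.get("lung", 0.0) > first_ref:
        return "lung tumor"    # more than 50 % of the profile meets the lung
    if hit_fractions.get("bone", 0.0) > second_ref:
        return "brain tumor"   # more than 80 % of the profile meets the bone
    return "general tumor"     # neither reference value is exceeded

label = classify_hypo_lesion({"lung": 0.70, "bone": 0.10})
```

As the text notes, both reference values may vary according to circumstances, which is why they are parameters here.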

When the lesion type is determined by the process of FIG. 3, the segmentation region of the corresponding lesion is determined using a segmentation algorithm for the determined lesion type. The process of determining the segmentation region when the determined lesion type is a lung tumor, a brain tumor, or a general tumor will be described with reference to FIGS. 4 to 9.

FIG. 4 is an operation flowchart of an embodiment of step S160 shown in FIG. 1, for the case in which the lesion determined in step S150 is a lung tumor.

Referring to FIG. 4, in the determining of the segmentation region of the lung tumor (S160), a brightness value range for determining the segmentation region is calculated based on the medical image information, e.g., the brightness value, of the optimal position information extracted in step S140 (S410).

Here, the brightness value range may be calculated by applying a constant standard deviation around the brightness value of the optimal position information, or may be set by designating a predetermined range of values.
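The first option above can be sketched as taking the seed brightness plus or minus a constant multiple of the local standard deviation. The window size and the multiplier `k` are assumptions of this sketch, not values from the text.

```python
import numpy as np

def brightness_range(slice_hu, seed_rc, half_window=5, k=2.0):
    """Brightness range around the seed: center +/- k * local std (step S410)."""
    r, c = seed_rc
    patch = slice_hu[max(0, r - half_window): r + half_window + 1,
                     max(0, c - half_window): c + half_window + 1]
    center = float(slice_hu[r, c])       # brightness of the optimal position
    spread = k * float(patch.std())      # constant multiple of the local std
    return center - spread, center + spread

# on a uniform patch the range collapses to the seed brightness itself
flat = np.full((11, 11), 100.0)
lo, hi = brightness_range(flat, (5, 5))
```

The first segmentation region (S420) would then be the connected set of pixels whose brightness falls inside `(lo, hi)` and that contains the pointer position.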

When the brightness value range is calculated, the first segmentation area corresponding to the brightness value range is determined using the calculated brightness value range (S420).

In this case, the first segmentation region may include the location information of the pointer according to a user input, or the extracted optimal location information.

When the first segmentation area is determined, the first segmentation area is divided, using the angle profile, into a first profile that meets the lung region and a second profile that does not meet the lung region (S430).

Here, since the second profile, which does not meet the lung region, corresponds to a part of the first segmentation area that lies outside the lung region, a process of limiting the area covered by the second profile is performed.

That is, a fence is formed by interpolating the area corresponding to the second profile toward the area corresponding to the first profile (S440).

For example, as illustrated in FIG. 5, the fence 520 is formed at the boundary portion where the segmentation area departs from the lung region 510.

When the fence is formed, the second segmentation area is determined based on the formed fence and the first segmentation area corresponding to the first profile (S450).

When the second segmentation area is determined, an optimal segmentation area 530 is determined from the second segmentation area by applying a preset fitting model to the determined second segmentation area, as shown in FIG. 5 (S460 and S470).

Here, the fitting model may include a deformable model, a snake model, and the like, and such fitting models may be modified, within the scope obvious to those skilled in the art, for application to steps S460 and S470.
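Steps S430 to S450 above can be sketched on binary masks. This is a simplification: the part of the first segmentation region that escapes the lung mask (the second profile) is fenced off by intersecting with the lung mask, whereas the actual method interpolates a fence along the lung boundary.

```python
import numpy as np

def fence_to_lung(first_seg, lung_mask):
    """Restrict the first segmentation region to the lung (steps S430-S450)."""
    outside = first_seg & ~lung_mask     # second profile: escapes the lung
    second_seg = first_seg & lung_mask   # first profile, kept inside the fence
    return second_seg, outside

# toy example: the lesion mask leaks one pixel outside the lung
first = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]], dtype=bool)
lung = np.array([[1, 0, 0],
                 [1, 1, 1],
                 [1, 1, 1]], dtype=bool)
second, leaked = fence_to_lung(first, lung)
```

A fitting model (deformable or snake) would then be applied to `second` to obtain the optimal segmentation region (S460 and S470).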

FIG. 6 is an operation flowchart of another embodiment of step S160 illustrated in FIG. 1, for the case where the lesion determined in step S150 is a brain tumor.

Referring to FIG. 6, in the determining of the segmentation region of the brain tumor (S160), the brightness value range for determining the segmentation region is calculated based on the brightness value of the optimal position information extracted in step S140 (S610).

When the brightness value range is calculated, the first segmentation area corresponding to the brightness value range is determined using the calculated brightness value range (S620).

In this case, the first segmentation region may include the location information of the pointer according to a user input, or the extracted optimal location information.

When the first segmentation area is determined, an anti-seed for correcting the first segmentation area is selected (S630).

Here, the anti-seed refers to normal tissue in the brain that is not part of the lesion, and at least one anti-seed is selected from the area between the first segmentation area and the bone region.

For example, in the medical image illustrated in FIG. 7, a plurality of positions corresponding to normal brain tissue near the bone region 710 may be selected; these positions may be adjacent to the bone region 710 and located outside the first segmentation region.

When the anti seed is selected, a range of brightness values for forming the anti seed region is calculated based on the brightness value of the selected anti seed (S640).

Here, the range of the brightness value for forming the anti seed region may be set in advance, or may be set using the brightness value of the peripheral region around the selected anti seed.

Based on the brightness value calculated in step S640, an anti-seed region 720 is formed as shown in FIG. 7 (S650).

Here, the anti-seed region is formed because the determined first segmentation region is uncertain; correcting it with the anti-seed region allows a more accurate segmentation region to be determined.

When the anti seed region is formed, a second segmentation region is determined from the first segmentation region based on the formed anti seed region and the first segmentation region (S660).

In this case, the determined second segmentation region is a segmentation region, including the pointer position, formed using the anti-seed region as a fence, and may be equal to or smaller than the first segmentation region.

When the second segmentation area is determined, an optimal segmentation area 730 is determined from the second segmentation area by applying a preset fitting model to the determined second segmentation area, as shown in FIG. 7 (S670 and S680).

Here, the fitting model may include a deformable model, a snake model, and the like.
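Steps S630 to S660 above can be sketched as follows: anti-seeds picked in normal tissue are grown into an anti-seed region by brightness similarity, and that region is used as a fence to cut the first segmentation region down to the second. The tolerance value and the similarity rule are assumptions of this sketch.

```python
import numpy as np

def anti_seed_fence(slice_hu, first_seg, anti_seeds, tol=20.0):
    """Cut the first segmentation region with an anti-seed region (S630-S660)."""
    anti_region = np.zeros_like(first_seg, dtype=bool)
    for r, c in anti_seeds:
        v = slice_hu[r, c]
        # grow the anti-seed into pixels of similar brightness
        anti_region |= np.abs(slice_hu - v) <= tol
    # the anti-seed region acts as a fence: exclude it from the lesion mask
    return first_seg & ~anti_region

# toy example: lesion-bright pixels (100) next to normal tissue (40)
slice_hu = np.array([[100., 100., 40.],
                     [100., 100., 40.],
                     [ 40.,  40., 40.]])
first_seg = np.ones((3, 3), dtype=bool)   # over-segmented first region
second_seg = anti_seed_fence(slice_hu, first_seg, [(0, 2)])
```

The preset fitting model (S670 and S680) would then refine `second_seg` into the optimal segmentation region.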

FIG. 8 is an operation flowchart of another embodiment of step S160 illustrated in FIG. 1, for the case where the lesion determined in step S150 is a general tumor.

Referring to FIG. 8, in the determining of the segmentation region of the general tumor (S160), the brightness value range for determining the segmentation region is calculated based on the brightness value of the optimal position information extracted in step S140 (S810).

When the brightness value range is calculated, the first segmentation region corresponding to the brightness value range is determined using the calculated brightness value range (S820).

When the first segmentation region is determined, an optimal segmentation region 910 is determined from the first segmentation region by applying a preset fitting model to the determined first segmentation region, as shown in FIG. 9 (S830 and S840).

Here, the fitting model may include a deformable model, a snake model, and the like.

Through the processes of FIGS. 4 to 9, the optimal segmentation region for the lung tumor, the brain tumor, and the general tumor can be determined; in the case of a hyper tumor, the region corresponding to a preset brightness value range within the peripheral region including the optimal position information is determined as the optimal segmentation region.

As described above, in the segmentation method according to lesion type of the present invention, the type of the lesion at which the pointer is located in the slice medical image is determined, and a segmentation algorithm suited to that lesion type is performed, so that an optimal segmentation region for the lesion may be determined. Since the optimal segmentation region determined in this way is used, a highly reliable three-dimensional segmentation volume can be generated for the lesion.

FIG. 10 illustrates a configuration of a segmentation apparatus in a medical image according to an embodiment of the present invention.

Referring to FIG. 10, the segmentation apparatus includes a position extractor 1010, a mask extractor 1020, a discrimination unit 1030, a determination unit 1040, a display unit 1050, and a selector 1060.

The location extractor 1010 extracts location information of the pointer according to a user input from the slice medical image displayed on the screen.

Here, the location extractor 1010 may continuously extract the location information of the pointer in real time, or may extract the location information only when the pointer is fixed, without extracting it while the pointer moves according to a user input.

Furthermore, the location extractor 1010 compares the brightness value at the extracted pointer location with the average brightness value of a preset peripheral area including the pointer location. If the brightness value at the pointer location is outside a preset error range of the average value, the optimal position information is detected within the preset peripheral area; if it is within the error range, the location information of the pointer itself is detected as the optimal position information.
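The optimal-position check just described can be sketched as follows. The window size and error bound are assumptions of this sketch, not values from the text, and the fallback rule (pick the neighbourhood pixel closest to the mean) is one plausible choice for "detecting the optimal position information within the peripheral area".

```python
import numpy as np

def optimal_position(slice_hu, pointer_rc, half_window=3, max_err=15.0):
    """Return the optimal position for a pointer (location extractor 1010)."""
    r, c = pointer_rc
    r0, c0 = max(0, r - half_window), max(0, c - half_window)
    patch = slice_hu[r0: r + half_window + 1, c0: c + half_window + 1]
    mean = float(patch.mean())
    if abs(float(slice_hu[r, c]) - mean) <= max_err:
        return pointer_rc                 # pointer brightness is representative
    # otherwise: pick the neighbourhood pixel whose brightness is closest
    # to the neighbourhood mean (an assumed concretization of the text)
    idx = np.unravel_index(np.argmin(np.abs(patch - mean)), patch.shape)
    return (r0 + int(idx[0]), c0 + int(idx[1]))

# on a uniform slice the pointer position is already optimal
flat = np.full((7, 7), 50.0)
best = optimal_position(flat, (3, 3))
```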

The mask extractor 1020 extracts a lung region and a bone region from the slice medical image.

In this case, the mask extractor 1020 may extract, as the lung region, the inner region of the trunk of the slice medical image having a brightness value smaller than the first set value (for example, −200 [HU]), and may extract, as the bone region, an area whose brightness value is larger than the second set value (for example, 500 [HU]).

The discrimination unit 1030 determines the lesion type based on at least one of the slice medical image information related to the position information (or the optimal position information) of the pointer extracted by the position extractor 1010 and the angle profile based on the position information of the pointer.

In this case, the discrimination unit 1030 may determine the lesion type to be a lung tumor when the portion of the entire angle profile, based on the location information of the pointer, that meets the lung region extracted by the mask extractor 1020 is greater than the first reference value, and may determine the lesion type to be a brain tumor when the portion of the entire angle profile that meets the bone region extracted by the mask extractor 1020 is greater than the second reference value.

In addition, the discrimination unit 1030 determines the lesion type to be a hyper tumor when the brightness value of the pointer position information (or the optimal position information) is larger than a preset threshold value, and otherwise determines it to be a general tumor.

The determination unit 1040 determines a segmentation area including the location of the pointer by using a segmentation algorithm preset according to the lesion type determined by the discrimination unit 1030.

That is, the determination unit 1040 determines the optimal segmentation region, which may vary according to the lesion, by performing the segmentation algorithm for the corresponding lesion.

Of course, when the lesion type is a hyper tumor, the determination unit 1040 determines, as the optimal segmentation region, the peripheral region including the location of the pointer and having a predetermined range of brightness values; when the lesion type is a general tumor, it determines the optimal segmentation region by applying a fitting model to the first segmentation region formed from the calculated brightness value range, as in FIG. 8.

The detailed configuration of the determination unit 1040 for the cases where the lesion type determined by the discrimination unit 1030 is a lung tumor or a brain tumor will be described with reference to FIG. 11.

The display unit 1050 displays, in advance, the segmentation region determined by the determination unit 1040, that is, the optimal segmentation region for the corresponding lesion, on the slice medical image displayed on the screen.

The selection unit 1060 lets the user check the segmentation area displayed by the display unit 1050; when the user is satisfied with the segmentation area of the lesion displayed in advance on the screen and selects it, the selection unit 1060 selects the selected segmentation area as the lesion diagnosis area for the slice medical image displayed on the screen.

In this case, the segmentation area selected by the selection unit 1060 may be used as a seed for segmentation of the 3D volume image.

Also, although not shown in FIG. 10, the apparatus may further include a configuration for determining the segmentation regions of each of a plurality of slice medical images associated with the slice medical image by using the segmentation seed selected by the selection unit, and for generating a three-dimensional segmentation volume for the lesion by using the determined seed and the segmentation regions of the plurality of slice medical images.
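The three-dimensional extension just mentioned can be sketched as follows: the segmentation region selected on one slice seeds the neighbouring slices, each slice's result seeding the next, and the stacked per-slice masks form the 3-D segmentation volume. The per-slice `segment` callback stands in for the full 2-D algorithm and is an assumption of this sketch.

```python
import numpy as np

def build_volume(slices, seed_mask, start_idx, segment):
    """slices: list of 2-D arrays; segment(slice_img, seed) -> boolean mask."""
    masks = [None] * len(slices)
    masks[start_idx] = segment(slices[start_idx], seed_mask)
    for i in range(start_idx + 1, len(slices)):   # propagate forward
        masks[i] = segment(slices[i], masks[i - 1])
    for i in range(start_idx - 1, -1, -1):        # propagate backward
        masks[i] = segment(slices[i], masks[i + 1])
    return np.stack(masks)                        # the 3-D segmentation volume

# toy example: a pass-through per-slice "algorithm" that returns the seed
identity_segment = lambda slice_img, seed: seed
slices = [np.zeros((2, 2)) for _ in range(3)]
seed = np.ones((2, 2), dtype=bool)
vol = build_volume(slices, seed, 1, identity_segment)
```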

FIG. 11 is a diagram illustrating an example configuration of a determination unit illustrated in FIG. 10.

Referring to FIG. 11, the determination unit 1040 includes a calculation unit 1110, a first region determination unit 1120, a selection unit 1130, an anti-seed selection unit 1140, a formation unit 1150, a second region determination unit 1160, an application unit 1170, and a third region determination unit 1180.

If the lesion type determined by the discrimination unit 1030 is a lung tumor, the determination unit 1040 determines the optimal segmentation region by using the calculation unit 1110, the first region determiner 1120, the selection unit 1130, the formation unit 1150, the second region determiner 1160, the application unit 1170, and the third region determiner 1180. If the lesion type determined by the discrimination unit 1030 is a brain tumor, the optimal segmentation region is determined by using the calculation unit 1110, the first region determiner 1120, the anti-seed selector 1140, the formation unit 1150, the second region determiner 1160, the application unit 1170, and the third region determiner 1180.

The cases where the lesion type is a lung tumor and where it is a brain tumor are described below.

1) When the lesion type is a lung tumor

The calculation unit 1110 calculates a range of brightness values based on the slice medical image information associated with the extracted pointer position information (or optimal position information).

The first area determiner 1120 determines the first segmentation area including the location of the pointer by using the range of brightness values calculated by the calculator 1110.

The selector 1130 divides the first segmentation area into a first profile that meets the extracted lung area over the entire angle profile and a second profile that does not meet the lung area.

The forming unit 1150 forms a fence by interpolating the part of the first segmentation region corresponding to the second profile toward the part corresponding to the first profile.

The second area determiner 1160 determines the second segmentation area from the first segmentation area based on the first segmentation area corresponding to the first profile and the formed fence.

The application unit 1170 applies a preset fitting model, for example, a deformable model or a snake model, to the second segmentation region determined by the second region determination unit 1160, and the third region determination unit 1180 determines an optimal segmentation region from the second segmentation region by using the fitting model.

2) When the lesion type is a brain tumor

The calculation unit 1110 calculates a range of brightness values based on the slice medical image information associated with the extracted pointer position information (or optimal position information).

The first area determiner 1120 determines the first segmentation area including the location of the pointer by using the range of brightness values calculated by the calculator 1110.

The anti-seed selector 1140 selects an anti-seed from the normal brain tissue between the first segmentation region and the extracted bone region.

In this case, the anti-seed selector 1140 may select, as the anti-seed, at least one piece of brain tissue adjacent to the bone region.

The forming unit 1150 determines a range of brightness values for forming the anti seed region based on the brightness value of the selected anti seed, and forms an anti seed region using the determined brightness value range.

Herein, the brightness value range for forming the anti seed region may be preset or determined using the brightness value of the peripheral region of the selected anti seed.

The second region determiner 1160 determines the second segmentation region from the first segmentation region by using the formed anti seed region.

At this time, the anti seed region serves as a fence for determining the second segmentation region.

The applier 1170 applies a preset fitting model to the second segmentation area determined by the second area determiner 1160, and the third area determiner 1180 determines the optimal segmentation region from the second segmentation region by using the fitting model.

The segmentation method in a medical image according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed by various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the present invention, or those known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine language code such as that produced by a compiler but also high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.

Although the present invention has been described above with reference to specific embodiments, specific components, and limited drawings, these are provided only to aid a more general understanding of the present invention, and the present invention is not limited to the above embodiments; those skilled in the art can make various modifications and variations from these descriptions.

Accordingly, the spirit of the present invention should not be construed as being limited to the described embodiments, and not only the following claims but also all equivalents and equivalent modifications of the claims belong to the scope of the present invention.

Claims (14)

In the segmentation method in a medical image using a segmentation device,
Extracting location information of a pointer according to a user input from a slice medical image displayed on a screen by a location extracting unit of the segmentation device;
Determining, by a discrimination unit of the segmentation apparatus, a type of lesion represented in the slice medical image based on at least one of information of the slice medical image associated with the extracted position information of the pointer and an angle profile based on the position information of the pointer;
Determining a segmentation area including a position of the pointer by using a segmentation algorithm preset according to the determined lesion type in the determination unit of the segmentation device; And
Selecting, by a selection unit of the segmentation apparatus, the segmentation region as the lesion diagnosis region for the slice medical image;
Segmentation method in a medical image comprising a.
The method of claim 1,
Extracting a lung region and a bone region from the slice medical image
Further comprising:
The determining step
Wherein, if the profile of the angular profile that meets the extracted lung region is greater than a preset first reference value, the lesion type is determined as a lung tumor, and if the profile of the angular profile that meets the extracted bone region is greater than a preset second reference value, the lesion type is determined as a brain tumor.
The method of claim 1,
The step of determining
Calculating a range of brightness values based on information of the slice medical image associated with the extracted position information of the pointer;
Determining a first segmentation area including a position of the pointer using a segmentation algorithm preset according to the calculated range of the brightness value and the determined lesion type;
Applying a preset fitting model to the determined first segmentation region; And
Determining an optimal segmentation region from the first segmentation region using the fitting model
Segmentation method in a medical image comprising a.
The method of claim 3,
The determining of the first segmentation area may include
Determining a second segmentation area including a position of the pointer using the calculated range of brightness values;
Selecting the second segmentation area into a first profile that meets a previously extracted lung area and a second profile that does not meet the lung area when the determined lesion type is a lung tumor;
Interpolating the second profile to the first profile to form a fence; And
Determining the first segmentation region from the second segmentation region based on the first profile and the formed fence.
Segmentation method in a medical image comprising a.
The method of claim 3,
The determining of the first segmentation area may include
Determining a second segmentation area including a position of the pointer using the calculated range of brightness values;
Selecting an anti-seed outside the second segmentation region when the determined lesion type is a brain tumor;
Forming an anti seed region based on the brightness value of the selected anti seed; And
Determining the first segmentation region from the second segmentation region by using the formed anti seed region.
Segmentation method in a medical image comprising a.
The method of claim 1,
The determining step
When the brightness value of the location information of the pointer is greater than a predetermined threshold value, the lesion type is determined as a hyper tumor,
The step of determining
Wherein, when the determined lesion type is a hyper tumor, a peripheral region including the position of the pointer and corresponding to a preset brightness value range is determined as the segmentation region.
The method of claim 1,
Displaying the determined segmentation area on the slice medical image in advance;
Further comprising:
The selecting step
And selecting the segmentation area as the lesion diagnosis area for the slice medical image when the segmentation area displayed in advance is selected by a user.
The method of claim 1,
The extracting step
Detecting optimal position information within a predetermined peripheral area including the position information of the pointer, based on the brightness value of the extracted position information of the pointer,
The determining step
The segmentation method in a medical image, characterized in that the lesion type is determined based on at least one of the information of the slice medical image associated with the detected optimal position information and an angle profile based on the optimal position information of the pointer.
The method of claim 1,
Determining the selected lesion diagnosis region as a seed for segmentation of a 3D volume image;
Determining a segmentation region of each of the plurality of slice medical images associated with the slice medical image based on the determined seed; And
Generating a 3D segmentation volume by using a segmentation region of each of the determined seeds and the plurality of slice medical images;
Segmentation method in a medical image, characterized in that it further comprises.
A computer-readable recording medium on which a program for executing the method of any one of claims 1 to 9 is recorded.

A location extraction unit for extracting location information of a pointer according to a user input from a slice medical image displayed on a screen;
A discrimination unit for determining a lesion type based on at least one of sliced medical image information related to the extracted position information of the pointer and an angle profile based on the position information of the pointer;
A determination unit to determine a segmentation area including a position of the pointer using a segmentation algorithm preset according to the determined lesion type; And
A selection unit for selecting the segmentation region as the lesion diagnosis region for the slice medical image
Segmentation device in a medical image comprising a.
The apparatus of claim 11,
Mask extractor for extracting lung and bone regions from the slice medical image
Further comprising:
The determining unit
Wherein, if the profile of the angular profile that meets the extracted lung region is greater than a preset first reference value, the lesion type is determined as a lung tumor, and if the profile of the angular profile that meets the extracted bone region is greater than a preset second reference value, the lesion type is determined as a brain tumor.
The apparatus of claim 11,
The determining unit
Calculating a range of brightness values based on information of the slice medical image associated with the extracted position information of the pointer;
A first area determiner configured to determine a first segmentation area including a location of the pointer using the calculated range of brightness values;
A selection unit for selecting the first segmentation region into a first profile that meets a previously extracted lung region and a second profile that does not meet the lung region when the determined lesion type is a lung tumor;
A forming part interpolating the second profile to the first profile to form a fence;
A second region determiner which determines a second segmentation region from the first segmentation region based on the first profile and the formed fence;
An application unit applying a preset fitting model to the determined second segmentation area; And
A third region determiner configured to determine an optimal segmentation region from the second segmentation region using the fitting model
Segmentation device in a medical image, characterized in that it comprises a.
The apparatus of claim 11,
The determining unit
Calculating a range of brightness values based on information of the slice medical image associated with the extracted position information of the pointer;
A first area determiner configured to determine a first segmentation area including a location of the pointer using the calculated range of brightness values;
An anti-seed selection unit for selecting an anti-seed outside the first segmentation region when the determined lesion type is a brain tumor;
A forming unit forming an anti seed region based on the selected brightness value of the anti seed;
A second region determiner configured to determine a second segmentation region from the first segmentation region by using the formed anti seed region;
An application unit applying a preset fitting model to the determined second segmentation area; And
A third region determiner configured to determine an optimal segmentation region from the second segmentation region using the fitting model
Segmentation device in a medical image, characterized in that it comprises a.
KR1020110095149A 2011-09-21 2011-09-21 A segmentatin method of medical image and apparatus thereof KR101185728B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020110095149A KR101185728B1 (en) 2011-09-21 2011-09-21 A segmentatin method of medical image and apparatus thereof
PCT/KR2012/007178 WO2013042889A1 (en) 2011-09-21 2012-09-06 Method and device for performing segmentation in medical images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110095149A KR101185728B1 (en) 2011-09-21 2011-09-21 A segmentatin method of medical image and apparatus thereof

Publications (1)

Publication Number Publication Date
KR101185728B1 true KR101185728B1 (en) 2012-09-25

Family

ID=47114090

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110095149A KR101185728B1 (en) 2011-09-21 2011-09-21 A segmentatin method of medical image and apparatus thereof

Country Status (2)

Country Link
KR (1) KR101185728B1 (en)
WO (1) WO2013042889A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102099350B1 (en) * 2019-06-07 2020-04-09 주식회사 뷰노 Method for aiding quantification of lesions in medical imagery and apparatus using the same
CN113712594A (en) * 2020-05-25 2021-11-30 株式会社日立制作所 Medical image processing apparatus and medical imaging apparatus
WO2022139068A1 (en) * 2020-12-22 2022-06-30 주식회사 딥노이드 Deep learning-based lung disease diagnosis assistance system and deep learning-based lung disease diagnosis assistance method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784711A (en) * 2020-07-08 2020-10-16 麦克奥迪(厦门)医疗诊断系统有限公司 Lung pathology image classification and segmentation method based on deep learning
WO2024046142A1 (en) * 2022-08-30 2024-03-07 Subtle Medical, Inc. Systems and methods for image segmentation of pet/ct using cascaded and ensembled convolutional neural networks

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10192256A (en) 1996-11-01 1998-07-28 General Electric Co <Ge> Image segmentation system and method to segment plural slice images into anatomical structure
US20080269598A1 (en) 2005-02-11 2008-10-30 Koninklijke Philips Electronics N.V. Identifying Abnormal Tissue in Images of Computed Tomography
JP2008272480A (en) 2007-05-07 2008-11-13 General Electric Co <Ge> Method and device for improving and/or validating 3d-segmentation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101014563B1 (en) * 2009-08-07 2011-02-16 주식회사 메디슨 Ultrasound system and method for performing segmentation of vessel

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102099350B1 (en) * 2019-06-07 2020-04-09 주식회사 뷰노 Method for aiding quantification of lesions in medical imagery and apparatus using the same
CN113712594A (en) * 2020-05-25 2021-11-30 株式会社日立制作所 Medical image processing apparatus and medical imaging apparatus
CN113712594B (en) * 2020-05-25 2023-12-26 富士胶片医疗健康株式会社 Medical image processing apparatus and medical imaging apparatus
WO2022139068A1 (en) * 2020-12-22 2022-06-30 주식회사 딥노이드 Deep learning-based lung disease diagnosis assistance system and deep learning-based lung disease diagnosis assistance method

Also Published As

Publication number Publication date
WO2013042889A1 (en) 2013-03-28

Similar Documents

Publication Publication Date Title
US10147223B2 (en) Apparatus and method for computer-aided diagnosis
US8611623B2 (en) Network construction apparatus, method and program
KR102154733B1 (en) Apparatus and method for estimating whether malignant tumor is in object by using medical image
JP6059261B2 (en) Intelligent landmark selection to improve registration accuracy in multimodal image integration
JP6530371B2 (en) Interactive follow-up visualization
CA2776203C (en) Determining contours of a vessel using an active contouring model
US9536316B2 (en) Apparatus and method for lesion segmentation and detection in medical images
US9959622B2 (en) Method and apparatus for supporting diagnosis of region of interest by providing comparison image
US8483462B2 (en) Object centric data reformation with application to rib visualization
US10991102B2 (en) Image processing apparatus and image processing method
KR101185728B1 (en) A segmentatin method of medical image and apparatus thereof
EP1947606A1 (en) Medical image processing apparatus and medical image processing method
JP2019530490A (en) Computer-aided detection using multiple images from different views of the region of interest to improve detection accuracy
US8165376B2 (en) System and method for automatic detection of rib metastasis in computed tomography volume
JP2015066311A (en) Image processor, image processing method, program for controlling image processor, and recording medium
JP5388614B2 (en) Medical image processing apparatus, image diagnostic apparatus, and medical image processing program
KR101185727B1 (en) A segmentatin method of medical image and apparatus thereof
KR20210098381A (en) Device and method for visualizing image of lesion
WO2012026090A1 (en) Image display device, method and program
US10390799B2 (en) Apparatus and method for interpolating lesion detection
EP4252657A1 (en) Medical image abnormality detection system and abnormality detection method
KR101654824B1 (en) Method and apparatus for controlling key point matching of medical images
US20080137928A1 (en) Method and System for Registering CT Data Sets
JP2007512890A (en) How to determine the structure of a moving object
JP6746639B2 (en) Image processing apparatus and method, and diagnostic support system

Legal Events

Date Code Title Description
A201 Request for examination
A302 Request for accelerated examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20150915

Year of fee payment: 4

FPAY Annual fee payment

Payment date: 20160906

Year of fee payment: 5

FPAY Annual fee payment

Payment date: 20180830

Year of fee payment: 7

FPAY Annual fee payment

Payment date: 20190910

Year of fee payment: 8