WO2013051045A1 - 画像処理装置および画像処理方法 - Google Patents
画像処理装置および画像処理方法 Download PDFInfo
- Publication number
- WO2013051045A1 (PCT/JP2011/005566)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- area
- correction
- region
- sectional views
- execution unit
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/008—Cut plane or projection plane definition
Definitions
- This relates to an image processing technique for manually correcting a 3D area set in 3D volume data.
- A captured three-dimensional medical image is referred to as "medical image volume data".
- Visualization processing is performed on it using a plurality of medical image processing algorithms to assist diagnosis.
- The medical image processing algorithms used here differ depending on the case, such as extracting and displaying only the data of the necessary tissue part from the input medical image volume data, or displaying the image with its edges highlighted.
- There are thus several types of processing methods.
- Patent Document 1 proposes a method of correcting the contour line of a three-dimensional region as a free curve, and specifically discloses a method of reflecting the movement distance and movement time of a pointing device in the correction of the curve.
- Patent Document 2 proposes a method in which a guide region is set separately from the region to be extracted, and the extraction region is modified so as to fall within the range of the guide region.
- In the method of Patent Document 1, the corrected curve may differ greatly from the original image information because it is corrected as a free curve.
- The method of Patent Document 2 has the problems that the amount of data becomes large and that a guide region must be set in addition to the region to be extracted.
- An image processing apparatus that executes image processing on volume data of a three-dimensional image includes an extraction processing execution unit that extracts from the volume data a three-dimensional initial region satisfying a predetermined condition,
- a region correction processing execution unit that applies correction processing to the initial region to extract a three-dimensional correction region,
- and a visualization processing execution unit that creates a plurality of cross-sectional views of the three-dimensional image from the volume data and outputs at least a part of them.
- When displaying the initial region, the visualization processing execution unit selects, from the plurality of cross-sectional views it has created, one or more cross-sectional views that contain voxels included in the initial region, and outputs the selected views so that the voxels included in the initial region are distinguishable from other regions.
- When displaying the correction region, it likewise selects one or more cross-sectional views that contain voxels included in the correction region,
- and outputs them so that the voxels included in the correction region are distinguishable from other regions.
- With this, a three-dimensional region of interest automatically extracted from three-dimensional volume data can be corrected easily and in a short time.
- FIG. 1 is a system configuration diagram illustrating a configuration example of an image processing apparatus.
- FIG. 2 is a flowchart illustrating an example of the region extraction and correction processing executed by the image processing algorithm execution unit 21. FIG. 3 is an image diagram showing an example of the screen display presented on the display device 12 by the visualization processing of a three-dimensional region executed by the visualization processing execution unit 31.
- FIG. 4 is an image diagram showing an example in which the initial region extracted by the automatic extraction processing execution unit 34 is corrected by the region correction processing execution unit 32 changing the parameters of the extraction algorithm. FIG. 5 is a parameter table stored in the region correction processing execution unit 32, showing an example in which parameter values themselves are set.
- FIG. 6 is a parameter table stored in the region correction processing execution unit 32, showing an example in which functions are set. FIG. 7 shows an example of a correction processing item table.
- FIG. 8 is an image diagram showing, including the course of the correction, an example of a region in which the initial region extracted by the automatic extraction processing execution unit 34 is corrected by the region correction processing execution unit 32. FIG. 9 is an image diagram showing an example of a plurality of correction regions corrected by the region correction processing execution unit 32.
- FIG. 1 is a diagram illustrating a configuration example of an image processing apparatus.
- The system includes an external storage device 10 that stores three-dimensional images (hereinafter also referred to as volume data) and related data, an image processing device 11 that performs image processing on volume data captured and reconstructed by an X-ray CT apparatus, an MRI apparatus, or the like,
- a display device 12 that displays the processed image,
- and an input device 13 through which instructions for starting each process and operations on the screen and the extraction result are input.
- the image processing apparatus 11 includes an internal memory 20 that stores volume data and related information, an image processing algorithm execution unit 21, and an input operation acquisition unit 22.
- The image processing algorithm execution unit 21 is realized by program processing by a CPU (Central Processing Unit) provided in the image processing apparatus 11 or by a dedicated circuit.
- The internal memory 20 and a storage unit (not shown) that stores programs for realizing the functions of the image processing apparatus 11 are composed of storage media such as a RAM (Random Access Memory), a ROM (Read Only Memory), an HDD (Hard Disk Drive), and a flash memory.
- The image processing algorithm execution unit 21 includes a visualization processing execution unit 31 that performs visualization processing on volume data and displays the result on the display device, an extraction region storage unit 33 that stores information on extracted regions,
- a region correction processing execution unit 32 that performs correction processing on an automatically extracted region, and an automatic extraction processing execution unit 34 that performs automatic extraction processing on volume data.
- the three-dimensional automatic extraction region correction process is started by an input from the input device 13 or an instruction from the system (step 40).
- the volume data transferred from the external storage device 10 to the internal memory 20 is input to the image processing algorithm execution unit 21, and the automatic extraction processing execution unit 34 executes automatic extraction processing on the input volume data (step 41).
- The automatic extraction processing performed by the automatic extraction processing execution unit 34 refers to, for example, processes such as the region growing method and the level set method.
- The region growing method here refers to a method of extracting a desired three-dimensional region from volume data by growing a region from one or more seed points, specified automatically or manually, while judging whether adjacent voxels satisfy the growing condition.
- The level set method detects the boundary surface of an object by evolving an implicitly defined surface over time, and thereby extracts the desired three-dimensional region from the volume data.
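The region growing step described above can be sketched as follows. This is an illustrative fragment, not the patent's implementation: the intensity-difference growing condition, the tolerance parameter, and the function name are my own choices.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol=50):
    """Grow a 3D region from a seed voxel: a 6-connected neighbour is
    accepted if its intensity is within `tol` of the seed intensity
    (an illustrative growing condition)."""
    region = np.zeros(volume.shape, dtype=bool)
    seed_val = int(volume[seed])
    queue = deque([seed])
    region[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            inside = all(0 <= n[i] < volume.shape[i] for i in range(3))
            if inside and not region[n]:
                if abs(int(volume[n]) - seed_val) <= tol:
                    region[n] = True
                    queue.append(n)
    return region
```

Changing the growing condition (e.g. widening `tol`) is one way such an algorithm's parameters can later be adjusted during correction.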
- the extraction area storage unit 33 first stores the three-dimensional area (also referred to as an extraction area) automatically extracted in step 41 as an initial area (step 42).
- the visualization process execution unit 31 executes the visualization process on the initial area stored in the extraction area storage unit 33 (step 43).
- Examples of the visualization processing include MPR (Multi Planar Reformat) and CMPR (Curved Multi Planar Reformat).
- In step 43, in order to accurately grasp the three-dimensional shape on a two-dimensional plane, a method of displaying all the voxels set as the initial region on two-dimensional planes is adopted.
- Specifically, there is a method in which the visualization processing execution unit 31 stores the slices that have voxels included in the extraction region and outputs the data of the stored slices to the display device 12 for display. At this time, it is preferable to color the voxels set as the extraction region in the displayed slices.
- Alternatively, the visualization processing execution unit 31 may store, among a plurality of cross sections parallel to an arbitrarily set plane created by CMPR, the cross sections that contain voxels set as the extraction region, and display the stored cross sections on the display device 12.
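The slice-selection behaviour described here (display only the cross sections that contain extraction-region voxels, with those voxels marked distinguishably) can be sketched as follows, assuming a boolean z-y-x region mask and axis-aligned slices; the marking scheme is a placeholder for the colouring mentioned above.

```python
import numpy as np

def slices_with_region(region):
    """Return the indices of slices (along axis 0) that contain at
    least one voxel of the extraction region (boolean z-y-x mask)."""
    return [z for z in range(region.shape[0]) if region[z].any()]

def overlay(slice_img, slice_mask, mark=255):
    """Mark region voxels in a 2D slice so they are distinguishable
    from other regions (here: set them to a fixed display value)."""
    out = slice_img.copy()
    out[slice_mask] = mark
    return out
```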
- Next, the extracted region is confirmed (step 44). If the initial region captures the region of interest without excess or deficiency, the extraction processing ends at this point (step 47).
- Otherwise, the displayed region (for example, the initial region) is corrected (step 45).
- The correction processing used here includes, for example, changing the parameters of the automatic extraction processing described above, contour correction by morphological processing, and other contour correction methods.
- That is, the user does not freely correct the contour; instead, the initial region or the luminance information of the input volume data is used to correct the region to some extent, yielding a new extraction region.
- This new extraction area is stored in the extraction area storage unit 33 as a correction area.
- the visualization process execution unit 31 executes the visualization process and displays the correction area extracted in step 45 in the same manner as in step 43 (step 46).
- Steps 44 to 46 are repeated, and when it is determined in Step 44 that the region of interest can be extracted without excess or deficiency, the extraction process is completed (Step 47).
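The confirm-and-correct loop of steps 41 to 47 can be summarized as the following sketch. The `extract`, `correct`, and `confirm` callables are hypothetical stand-ins for the automatic extraction processing execution unit 34, the region correction processing execution unit 32, and the user's confirmation input; they are not interfaces defined by the patent.

```python
def extract_with_correction(volume, extract, correct, confirm, max_rounds=10):
    """Step 41: automatic extraction; step 42: store the initial region;
    steps 44-46: repeat confirmation and correction until the region of
    interest is captured without excess or deficiency (step 47)."""
    region = extract(volume)               # step 41
    stored = [region]                      # step 42: extraction region storage
    for _ in range(max_rounds):
        if confirm(region):                # step 44: no excess or deficiency?
            return region                  # step 47: extraction complete
        region = correct(volume, region)   # step 45: correction pass
        stored.append(region)              # correction region is also stored
    return region
```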
- When a three-dimensional region is displayed as a set of a plurality of two-dimensional cross-sectional images, a plurality of three-dimensional regions may be stored in the extraction region storage unit 33, and it may be necessary to switch the display to a three-dimensional region different from the one currently displayed. For example, in the example of FIG. 2, the initial region is visualized first, but the visualization target is then switched to the correction region. In this case, even if the cross-section angle and the slice interval created from the original volume data are the same, the number of two-dimensional cross sections to be displayed may differ, because the size of the newly displayed three-dimensional region differs from that of the currently displayed one.
- FIG. 3 is an image diagram illustrating an example of a screen display displayed on the display device 12 by the visualization process of the three-dimensional region executed by the visualization process execution unit 31.
- a hatched portion in FIG. 3 is an extraction region.
- Fig. 3 shows the display of the initial area.
- the initial region spans three slices among a plurality of slices created from the input volume data. Therefore, data of three slices are displayed.
- FIG. 3 shows a display of the correction area 1 when it is determined that correction is necessary in the confirmation performed in step 44.
- Here, the case where the size of the correction region 1 in the slice direction is larger than that of the initial region is illustrated. Since the correction region 1 is larger than the initial region and spans five slices, two more than the initial region, the number of displayed slices is automatically changed so that the whole of the correction region 1 is displayed.
- FIG. 3 also shows the display of the next correction region 2 after it is confirmed that the correction region 1 still requires correction. Since the correction region 2 is smaller than the correction region 1 and spans four slices, the number of displayed slices is reduced for display.
- In addition, a three-dimensional visualization image can be displayed at the same time for the purpose of grasping the three-dimensional shape to some extent on the two-dimensional screen.
- Methods for creating a three-dimensional visualization image include Volume Rendering (VR), which sets an opacity from each voxel value and accumulates the shaded voxel values along each line of sight to produce a stereoscopic display, and Maximum Intensity Projection (MIP), which projects the maximum voxel value among the voxels on each line of sight.
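For an axis-aligned line of sight, the MIP described here reduces to a per-ray maximum over the volume, which can be sketched in one line; this is a minimal illustration, not the patent's renderer, and it ignores oblique viewing directions.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection: for each line of sight along
    `axis`, project the maximum voxel value onto a 2D image."""
    return volume.max(axis=axis)
```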
- The initial region is a region extracted into a shape generally considered plausible for the pixels of the input volume data or a part thereof, but a different extracted shape may be desirable depending on the image characteristics of the input volume data and on how the system is used.
- a contrast agent is often used to distinguish a healthy parenchymal part of an organ from a lesion part represented by a tumor on an image.
- A contrast agent is a specific drug injected into the blood through a blood vessel. The time it takes for the contrast agent mixed in the blood to reach the tumor differs from the time it takes to flow from the tumor into the organ parenchyma, and it is known that the brightness of the tumor and that of the organ parenchyma differ depending on the time from injection to imaging.
- In tumor region extraction processing for an image in which the tumor has lower luminance than the organ parenchyma, it is common to extract a low-luminance portion as the tumor; however, depending on the nature of the tumor, brighter areas may also appear inside the tumor. Possible causes include an uneven spread of the contrast agent in the tumor, a thrombus formed in the tumor, and a necrotic region in the tumor.
- When extraction processing based on extracting a low-luminance portion is performed as the basic algorithm, only the region with the lowest luminance is often extracted as the initial region. In this case, however, instead of extracting only the lowest-luminance region, it is desirable to also extract as the tumor the region whose pixel values are intermediate between the lowest-luminance region and the high-luminance region (organ parenchyma).
- In such a case, the user inputs through the input device 13 that there is an excess or deficiency, the region correction processing execution unit 32 extracts the next most plausible region and stores it in the extraction region storage unit 33,
- and the visualization processing execution unit 31 presents it (step 46 in FIG. 2), so that the desired extraction region can be obtained.
- FIG. 4 is an image diagram showing an example in which the initial region extracted by the automatic extraction processing execution unit 34 is corrected by changing the parameters of the extraction algorithm by the region correction processing execution unit 32.
- an image having the luminance in FIG. 4 is an input image.
- Each of the regions 1, 2, and 3 has a single luminance within the region, and the automatic extraction algorithm is a binarization process based on a threshold,
- in which the area whose luminance is smaller than a preset value becomes the extraction region.
- Region 1 is composed of voxels having luminance 1, region 2 of voxels having luminance 2, and region 3 of voxels having luminance 3,
- where luminance 1 is larger than luminance 2, and luminance 2 is larger than luminance 3.
- Since the preset initial threshold is larger than luminance 3 and smaller than luminance 2,
- the initial region A is extracted as shown in the lower left of FIG. 4.
- If region 2 is the whole-organ region, region 3 is the lesion region, and the purpose of the processing is the extraction of the lesion,
- then the initial region A extracts the target region without excess or deficiency,
- and the processing ends here.
- If, on the other hand, region 1 is part of an organ and regions 2 and 3 are the lesion,
- the initial region A captures only part of the lesion, and region correction processing is required. Therefore, to extract the next most plausible result after the initial region A, the parameters of the extraction algorithm are changed.
- Specifically, by changing the initial threshold to a correction threshold A that is larger than luminance 2 and smaller than luminance 1, the correction region B shown in the lower right of FIG. 4 is obtained, and the target region can be extracted without excess or deficiency.
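The two-threshold example can be reproduced numerically. The concrete luminance values below are my own placeholders chosen only to satisfy luminance 1 > luminance 2 > luminance 3; they are not values from FIG. 4.

```python
import numpy as np

def extract_below(volume, threshold):
    """Binarization step from the example: voxels whose luminance is
    below the threshold form the extraction region."""
    return volume < threshold

# Placeholder intensities: region 1 = 300, region 2 = 200, region 3 = 100.
img = np.array([[300, 300, 300],
                [300, 200, 200],
                [200, 200, 100]])

initial = extract_below(img, 150)    # initial threshold: captures region 3 only
corrected = extract_below(img, 250)  # correction threshold A: regions 2 and 3
```

Raising the threshold from between luminances 3 and 2 to between luminances 2 and 1 grows the extraction from region 3 alone (initial region A) to regions 2 and 3 together (correction region B).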
- Such a change in brightness may occur in three or more stages; this can be handled by inputting that there is an excess or deficiency when checking each correction region.
- The creation of the correction region and its storage in the region storage unit may all be performed before the initial region is displayed, may be processed in parallel with the display processing, or may be performed after an excess or deficiency is input. Extracting the next most plausible region after the initial region means, for example, changing the parameters of the extraction algorithm used when extracting the initial region so that the region boundary is set further outward; since the correction region is created from the initial region, this takes less time than performing multiple independent region extraction processes. Of course, this is applicable even when the extraction target region has higher luminance than its surrounding region.
- As a method for setting the correction threshold, the region correction processing execution unit 32 may hold in advance a parameter table as shown in FIG. 5 and set the threshold by referring to it. In the table, the parameter values themselves may be set, or, as shown in FIG. 6, arbitrary functions f1, f2, f3, f'1, f'2, f'3 set in advance may be used.
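A parameter table of the kind referred to here might, as a sketch, map the correction round to a threshold value. The table contents below are placeholders of my own, not values from FIG. 5 or FIG. 6, and the fallback rule for exhausted tables is also an assumption.

```python
# Hypothetical parameter table: correction round -> binarization threshold.
param_table = {0: 150, 1: 250, 2: 350}

def threshold_for_round(n, table=param_table):
    """Look up the threshold for the n-th correction pass; fall back
    to the last entry once the table is exhausted (assumed policy)."""
    return table.get(n, table[max(table)])
```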
- FIG. 7 shows an example of a correction processing item table.
- an initial region is extracted using an automatic extraction method.
- The automatic extraction method here may use an algorithm such as the region growing method or the level set method described above, or may be a method such as binarization using a threshold, depending on the image characteristics.
- FIG. 8 is an image diagram showing an example of a region in which the initial region extracted by the automatic extraction processing execution unit 34 is corrected by the region correction processing execution unit 32, including the process of correction.
- First, dilation processing is performed until the blank area inside the initial region B shown on the left of FIG. 8 is filled.
- Dilation is a process that expands the region outline by one pixel. The state in which the blank area inside the initial region B has been filled by this processing is shown in FIG. 8. After this, erosion processing is applied the same number of times as the dilation was performed, giving the result shown on the right of FIG. 8.
- Erosion is the reverse of dilation: it contracts the region outline by one pixel. In this way, a correction region (a) is created by filling the blank area inside the initial region B, and it is displayed here as the next most plausible region after the initial region B.
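The dilation-then-erosion sequence described here is the standard morphological closing operation, so a sketch can lean on SciPy's implementation; the function name and the fixed number of rounds are my own choices, and a boolean region mask is assumed.

```python
import numpy as np
from scipy import ndimage

def fill_holes_by_closing(region, rounds=3):
    """Morphological closing as in FIG. 8: dilate `rounds` times so
    internal blank areas are filled, then erode the same number of
    times to restore the outer contour."""
    structure = ndimage.generate_binary_structure(region.ndim, 1)
    return ndimage.binary_closing(region, structure=structure,
                                  iterations=rounds)
```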
- FIG. 9 is an image diagram showing an example of a plurality of correction regions corrected by the region correction processing execution unit 32. The correction region B in the middle of FIG. 9 is obtained by replacing the concave portions of the outline of the correction region A shown on the left of FIG. 9 with straight lines.
- The correction region C on the right of FIG. 9 is an ellipse whose major and minor axes are those of the correction region B. In this way, the correction region changes in the order correction region A → correction region B → correction region C. Although FIGS. 4, 8, and 9 and the accompanying explanation deal with the case where the extraction region is two-dimensional, a correction region can be generated by the same algorithms even when the extraction region is three-dimensional.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- General Health & Medical Sciences (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Processing (AREA)
Abstract
Description
When displaying the correction region, one or more cross-sectional views containing voxels included in the correction region are selected from the plurality of cross-sectional views created by the visualization processing execution unit, and the selected views are output so that the voxels included in the correction region are distinguishable from other regions.
The creation of the correction region and its storage in the region storage unit may all be performed before the initial region is displayed, may be processed in parallel with the display processing, or may be performed after an excess or deficiency is input. Extracting the next most plausible region after the initial region means, for example, changing the parameters of the extraction algorithm used when extracting the initial region so that the region boundary is set further outward; since the correction region is created from the initial region, this takes less time than performing multiple independent region extraction processes. Of course, this is applicable even when the extraction target region has higher luminance than its surrounding region.
11 image processing apparatus
12 display device
13 input device
20 internal memory
21 image processing algorithm execution unit
22 input operation acquisition unit
31 visualization processing execution unit
32 region correction processing execution unit
33 extraction region storage unit
34 automatic extraction processing execution unit
Claims (12)
- 1. An image processing apparatus that executes image processing on volume data of a three-dimensional image, comprising: an extraction processing execution unit that extracts from the volume data a three-dimensional initial region satisfying a condition given in advance; a region correction processing execution unit that applies correction processing to the initial region to extract a three-dimensional correction region; and a visualization processing execution unit that creates a plurality of cross-sectional views of the three-dimensional image from the volume data and outputs at least a part of the plurality of cross-sectional views, wherein the visualization processing execution unit, when displaying the initial region, selects one or more cross-sectional views containing voxels included in the initial region from the plurality of cross-sectional views it has created and outputs the selected views so that the voxels included in the initial region are distinguishable from other regions, and, when displaying the correction region, selects one or more cross-sectional views containing voxels included in the correction region from the plurality of cross-sectional views it has created and outputs the selected views so that the voxels included in the correction region are distinguishable from other regions.
- 2. The image processing apparatus according to claim 1, wherein, when displaying the correction region after displaying the initial region, the visualization processing execution unit changes the number of cross-sectional views to be output if the number of cross-sectional views containing voxels included in the initial region differs from the number of cross-sectional views containing voxels included in the correction region.
- 3. The image processing apparatus according to claim 1, wherein, when displaying the correction region after displaying the initial region, the visualization processing execution unit changes the number of cross-sectional views to be output if the three-dimensional sizes of the initial region and the correction region differ.
- 4. The image processing apparatus according to any one of claims 1 to 3, wherein the region correction processing execution unit extracts the correction region by changing a parameter given as the condition.
- 5. The image processing apparatus according to any one of claims 1 to 3, wherein the region correction processing execution unit extracts the correction region so that the region changes according to a predetermined rule.
- 6. The image processing apparatus according to any one of claims 1 to 5, wherein the region correction processing execution unit extracts the correction region by correcting the contour of the initial region.
- 7. A method by which an image processing apparatus executes image processing on volume data of a three-dimensional image, comprising: a step in which an extraction processing execution unit of the image processing apparatus extracts from the volume data a three-dimensional initial region satisfying a condition given in advance; a step in which a region correction processing execution unit of the image processing apparatus applies correction processing to the initial region to extract a three-dimensional correction region; a step in which a visualization processing execution unit of the image processing apparatus creates a plurality of cross-sectional views of the three-dimensional image from the volume data; a step in which the visualization processing execution unit selects one or more cross-sectional views containing voxels included in the initial region from the plurality of cross-sectional views it has created and outputs the selected views so that the voxels included in the initial region are distinguishable from other regions; and a step in which the visualization processing execution unit selects one or more cross-sectional views containing voxels included in the correction region from the plurality of cross-sectional views it has created and outputs the selected views so that the voxels included in the correction region are distinguishable from other regions.
- 8. The method according to claim 7, wherein, when displaying the correction region after displaying the initial region, the visualization processing execution unit changes the number of cross-sectional views to be output if the number of cross-sectional views containing voxels included in the initial region differs from the number of cross-sectional views containing voxels included in the correction region.
- 9. The method according to claim 7, wherein, when displaying the correction region after displaying the initial region, the visualization processing execution unit changes the number of cross-sectional views to be output if the three-dimensional sizes of the initial region and the correction region differ.
- 10. The method according to any one of claims 7 to 9, wherein the region correction processing execution unit extracts the correction region by changing a parameter given as the condition.
- 11. The method according to any one of claims 7 to 9, wherein the region correction processing execution unit extracts the correction region so that the region changes according to a predetermined rule.
- 12. The method according to any one of claims 7 to 11, wherein the region correction processing execution unit extracts the correction region by correcting the contour of the initial region.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/349,453 US9275453B2 (en) | 2011-10-03 | 2011-10-03 | Image processing device and image processing method |
PCT/JP2011/005566 WO2013051045A1 (ja) | 2011-10-03 | 2011-10-03 | 画像処理装置および画像処理方法 |
JP2013537263A JP5857061B2 (ja) | 2011-10-03 | 2011-10-03 | 画像処理装置および画像処理方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/005566 WO2013051045A1 (ja) | 2011-10-03 | 2011-10-03 | 画像処理装置および画像処理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013051045A1 true WO2013051045A1 (ja) | 2013-04-11 |
Family
ID=48043243
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/005566 WO2013051045A1 (ja) | 2011-10-03 | 2011-10-03 | 画像処理装置および画像処理方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9275453B2 (ja) |
JP (1) | JP5857061B2 (ja) |
WO (1) | WO2013051045A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318057A (zh) * | 2014-09-25 | 2015-01-28 | 新乡医学院第一附属医院 | 医学影像三维可视化系统 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5946221B2 (ja) * | 2013-06-11 | 2016-07-05 | 富士フイルム株式会社 | 輪郭修正装置、方法およびプログラム |
US10531825B2 (en) * | 2016-10-14 | 2020-01-14 | Stoecker & Associates, LLC | Thresholding methods for lesion segmentation in dermoscopy images |
JP7054787B2 (ja) | 2016-12-22 | 2022-04-15 | パナソニックIpマネジメント株式会社 | 制御方法、情報端末、及びプログラム |
EP3627442A4 (en) * | 2017-05-31 | 2020-05-20 | Shanghai United Imaging Healthcare Co., Ltd. | IMAGE PROCESSING METHOD AND SYSTEM |
JP7048760B2 (ja) * | 2018-10-31 | 2022-04-05 | 富士フイルム株式会社 | 領域修正装置、方法およびプログラム |
JP7083427B2 (ja) * | 2019-06-04 | 2022-06-10 | 富士フイルム株式会社 | 修正指示領域表示装置、方法およびプログラム |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04183436A (ja) * | 1990-11-20 | 1992-06-30 | Toshiba Corp | 画像処理装置 |
JPH08166995A (ja) * | 1994-12-13 | 1996-06-25 | Toshiba Corp | 医用診断支援システム |
JPH0935043A (ja) * | 1995-07-17 | 1997-02-07 | Toshiba Medical Eng Co Ltd | 診断支援装置 |
JP2008173167A (ja) * | 2007-01-16 | 2008-07-31 | Ziosoft Inc | 領域修正方法 |
JP2009075846A (ja) * | 2007-09-20 | 2009-04-09 | Fujifilm Corp | 輪郭抽出装置及びプログラム |
JP2009072432A (ja) * | 2007-09-21 | 2009-04-09 | Fujifilm Corp | 画像表示装置および画像表示プログラム |
JP2009279206A (ja) * | 2008-05-22 | 2009-12-03 | Ziosoft Inc | 医療画像処理方法および医療画像処理プログラム |
JP2011104027A (ja) * | 2009-11-16 | 2011-06-02 | Hitachi Medical Corp | 二値画像生成方法及び二値画像生成プログラム |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5497776A (en) * | 1993-08-05 | 1996-03-12 | Olympus Optical Co., Ltd. | Ultrasonic image diagnosing apparatus for displaying three-dimensional image |
JP4614548B2 (ja) * | 2001-01-31 | 2011-01-19 | パナソニック株式会社 | 超音波診断装置 |
EP1923839B1 (en) * | 2006-11-14 | 2016-07-27 | Hitachi Aloka Medical, Ltd. | Ultrasound diagnostic apparatus and volume data processing method |
JP2008259682A (ja) * | 2007-04-12 | 2008-10-30 | Fujifilm Corp | 部位認識結果修正装置、方法、およびプログラム |
US8538113B2 (en) * | 2008-09-01 | 2013-09-17 | Hitachi Medical Corporation | Image processing device and method for processing image to detect lesion candidate region |
-
2011
- 2011-10-03 US US14/349,453 patent/US9275453B2/en not_active Expired - Fee Related
- 2011-10-03 WO PCT/JP2011/005566 patent/WO2013051045A1/ja active Application Filing
- 2011-10-03 JP JP2013537263A patent/JP5857061B2/ja not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
JPWO2013051045A1 (ja) | 2015-03-30 |
JP5857061B2 (ja) | 2016-02-10 |
US9275453B2 (en) | 2016-03-01 |
US20140286551A1 (en) | 2014-09-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5857061B2 (ja) | 画像処理装置および画像処理方法 | |
US10878573B2 (en) | System and method for segmentation of lung | |
US9600890B2 (en) | Image segmentation apparatus, medical image device and image segmentation method | |
US8611989B2 (en) | Multi-planar reconstruction lumen imaging method and apparatus | |
US20060181551A1 (en) | Method, computer program product, and apparatus for designating region of interest | |
JP6133026B2 (ja) | 三次元画像をナビゲートし、セグメント化し、抽出するための方法及びシステム | |
JP6273266B2 (ja) | セグメント化の強調表示 | |
US10524823B2 (en) | Surgery assistance apparatus, method and program | |
JP2006075602A (ja) | 血管構造の3d画像データセットからなるプラーク沈着の可視化方法 | |
US20070053553A1 (en) | Protocol-based volume visualization | |
US9117291B2 (en) | Image processing apparatus, image processing method, and non-transitory storage medium | |
US9629599B2 (en) | Imaging device, assignment system and method for assignment of localization data | |
JP4394127B2 (ja) | 領域修正方法 | |
CN104240271A (zh) | 医用图像处理装置 | |
US9019272B2 (en) | Curved planar reformation | |
US20140294269A1 (en) | Medical image data processing apparatus and method | |
US20140029816A1 (en) | Per vessel, vessel tree modelling with shared contours | |
JP6533687B2 (ja) | 医用画像処理装置、医用画像処理方法、及び医用画像処理プログラム | |
US11395601B2 (en) | Medical image processing apparatus | |
JP7003635B2 (ja) | コンピュータプログラム、画像処理装置及び画像処理方法 | |
JP2012085833A (ja) | 3次元医用画像データの画像処理システム、その画像処理方法及びプログラム | |
EP3109824A1 (en) | System and method for handling image data | |
US20170287159A1 (en) | Medical image processing apparatus, medical image processing method, and medical image processing system | |
US20220313360A1 (en) | Incision simulation device, incision simulation method, and program | |
Lievin et al. | Interactive 3D Segmentation and Inspection of Volumetric Medical Datasets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11873614 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013537263 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14349453 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11873614 Country of ref document: EP Kind code of ref document: A1 |