WO2012046451A1 - Medical image processing apparatus and medical image processing program - Google Patents
Medical image processing apparatus and medical image processing program
- Publication number
- WO2012046451A1 (PCT/JP2011/005628)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- medical image
- feature point
- unit
- specifying
- Prior art date
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5217—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/40—Arrangements for generating radiation specially adapted for radiation diagnosis
- A61B6/4064—Arrangements for generating radiation specially adapted for radiation diagnosis specially adapted for producing a particular type of beam
- A61B6/4085—Cone-beams
Definitions
- Embodiments described herein relate generally to a medical image processing apparatus and a medical image processing program.
- In recent years, medical images are acquired by medical imaging apparatuses such as X-ray CT apparatuses and MRI apparatuses, and diagnosis is performed using these medical images.
- pulmonary emphysema is diagnosed using medical images.
- One method for specifying (extracting) a lung field region from a medical image is the region growing method.
- The region growing method determines one pixel in the region to be extracted and, with that pixel as a starting point (seed point), successively extracts pixels regarded as belonging to the region.
- an observer determines one pixel in the lung field region by visual inspection with reference to a medical image. For example, an observer visually confirms a coronal image or an axial image showing the chest and determines one pixel in the lung field region.
- This determination takes time, which slows the diagnosis and, as a result, lengthens the diagnosis time.
- In addition, the reproducibility of the determination may be low: the judgment may differ between observers, and even the same observer may judge differently over time, so there is a possibility of both inter-observer and intra-observer variation.
- This embodiment is intended to provide a medical image processing apparatus and a medical image processing program that can automatically identify a target organ from a medical image.
- the medical image processing apparatus includes a first extracting unit, an adding unit, a first specifying unit, and a second specifying unit.
- the first extraction unit receives volume data representing a region including the target organ and extracts an air region from the volume data.
- the adding means generates projection image data representing the distribution of the added value of the pixel value by adding the pixel value of each pixel in the air region in the volume data along a predetermined projection direction.
- the first specifying unit specifies the first feature point from the projection image data.
- the second specifying means specifies a point on a line passing through the first feature point in the air region as the second feature point.
- the medical image processing apparatus will be described with reference to FIG. 1.
- the medical image processing apparatus 1 according to this embodiment is connected to a medical image photographing apparatus 90, for example.
- An imaging apparatus such as an X-ray CT apparatus or an MRI apparatus is used as the medical image photographing apparatus 90.
- the medical image photographing apparatus 90 generates medical image data representing a subject by photographing the subject.
- the medical image photographing apparatus 90 generates volume data representing a three-dimensional photographing region by photographing a three-dimensional photographing region.
- an X-ray CT apparatus as the medical image capturing apparatus 90 generates CT image data in a plurality of cross sections having different positions by capturing a three-dimensional capturing region.
- the X-ray CT apparatus generates volume data using a plurality of CT image data.
- the medical image photographing device 90 outputs the volume data to the medical image processing device 1.
- the medical image photographing apparatus 90 photographs a target organ such as the lung, large intestine, or stomach, and thereby acquires volume data representing a region including the target organ.
- As shown in FIG. 1, the medical image processing apparatus 1 includes an image storage unit 2, a first extraction unit 3, an addition unit 4, a first specifying unit 5, a linear region calculation unit 6, a second specifying unit 7, a second extraction unit 8, an image generation unit 9, a display control unit 10, and a display unit 11.
- the image storage unit 2 stores volume data generated by the medical image photographing apparatus 90.
- Alternatively, the medical image processing apparatus 1 may generate the volume data itself, instead of the medical image photographing apparatus 90 generating it.
- the medical image processing apparatus 1 receives a plurality of medical image data (for example, CT image data) generated by the medical image photographing apparatus 90, and generates volume data based on the plurality of medical image data.
- the image storage unit 2 stores the volume data generated by the medical image processing apparatus 1.
- the first extraction unit 3 reads the volume data from the image storage unit 2 and extracts a low pixel value region (air region) from the volume data by threshold processing. For example, the first extraction unit 3 extracts the low pixel value region by binarization: it assigns the density value “1” to regions where the pixel value (for example, a CT value or a luminance value) is equal to or lower than a preset threshold, and the density value “0” to regions where the pixel value is greater than the threshold, thereby extracting a region with a low pixel value (the air region). Note that the threshold processing is limited to the range within the body surface.
- FIG. 2 shows the extraction result.
- a low pixel value region 100 shown in FIG. 2 is a three-dimensional region extracted from the volume data by the first extraction unit 3.
- the low pixel value region 100 corresponds to, for example, an air region.
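The binarization step above can be sketched in a few lines; this is a minimal illustration assuming the volume is a NumPy array of CT values in Hounsfield units and assuming a threshold of -400 HU (the source does not specify a value), and it omits the restriction to the body-surface interior.

```python
import numpy as np

def extract_air_region(volume, threshold=-400):
    """Binarize the volume: density value 1 where the pixel value is at or
    below the threshold (low pixel value / air region), 0 elsewhere."""
    return (volume <= threshold).astype(np.uint8)
```

The returned mask plays the role of the low pixel value region 100 in the subsequent projection step.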
- the addition unit 4 generates projection image data representing the distribution of the addition values of the pixel values by adding the pixel values of the respective pixels in the low pixel value region of the volume data along a predetermined projection direction.
- a coronal plane, an axial plane, and a sagittal plane that are orthogonal to each other are defined.
- An axis orthogonal to the coronal plane is defined as the Y axis
- an axis orthogonal to the axial plane is defined as the Z axis
- an axis orthogonal to the sagittal plane is defined as the X axis.
- the Z axis corresponds to the body axis of the subject.
- For example, the adding unit 4 takes the direction orthogonal to the coronal plane (the Y-axis direction) as the projection direction and adds the pixel value of each pixel on each coronal plane along that direction, thereby generating projection image data representing the distribution of the added pixel values.
- This projection image data is image data obtained by adding and projecting a low pixel value region in each coronal plane onto a plane parallel to the coronal plane.
- FIG. 3 shows an example of the projected image.
- The processing of the adding unit 4 described above can also be described as follows: for each fixed pair of coordinates (x, z), the sum of the pixel values of all pixels along the y coordinate is taken as the pixel value of the corresponding pixel on the xz plane (coronal plane) of the projection image data.
- Mask (x, y, z) is a pixel value of each pixel in the low pixel value region.
- a projected image 200 shown in FIG. 3 is an image corresponding to Map (x, z) in Expression (1).
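With the binary mask stored as a NumPy array indexed (x, y, z) (an assumed layout, not stated in the source), the additive projection Map(x, z) reduces to a single axis-sum:

```python
import numpy as np

def additive_projection(mask):
    """Map(x, z) = sum over y_i of Mask(x, y_i, z): add the mask value of
    each pixel along the projection (Y-axis) direction."""
    return mask.sum(axis=1)
```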
- the first specifying unit 5 specifies a first feature point from the projection image data. Specifically, the first specifying unit 5 sets the pixel having the maximum addition value (pixel value) among the pixels of the projection image data as the first feature point.
- FIG. 4 shows the projection image and the first feature point.
- The first specifying unit 5 specifies the pixel (x_m, z_m) having the maximum added value in the projection image 200 as the first feature point 210. The pixel with the maximum added value in the projection image 200 is highly likely to correspond to a pixel in the lung field region; therefore, in this embodiment, the pixel having the maximum added value is specified in the projection image data.
- Alternatively, the first specifying unit 5 may divide the projection image 200 at the center in the X-axis direction and specify the pixel whose added value is maximum within one of the resulting regions. That is, the first specifying unit 5 may treat the right lung and the left lung, separated at the center in the X-axis direction, as separate processing targets. In this case, the first specifying unit 5 specifies the pixel having the maximum added value in the right lung as the first feature point of the right lung, and the pixel having the maximum added value in the left lung as the first feature point of the left lung. The first specifying unit 5 may specify the first feature point for only the right lung, for only the left lung, or for both lungs.
- the operator may specify the right lung portion or the left lung portion by an operation unit (not shown).
- Although the pixel having the maximum added value (pixel value) is set as the first feature point here, the present invention is not limited to this; a pixel whose added value falls within a predetermined range below the maximum value may also be set as the first feature point.
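Selecting the pixel with the maximum added value can be sketched as an argmax over the projection image; a minimal illustration assuming the projection is a 2-D NumPy array indexed (x, z):

```python
import numpy as np

def first_feature_point(proj):
    """Return (x_m, z_m), the pixel of the projection image whose added
    value is maximum (the first feature point)."""
    xm, zm = np.unravel_index(np.argmax(proj), proj.shape)
    return int(xm), int(zm)
```

The left/right-lung variant described above would simply apply the same argmax to each half of the projection image separately.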
- The linear region calculation unit 6 obtains a linear region that passes through the first feature point and extends along the projection direction described above. That is, the linear region calculation unit 6 obtains the linear region by back-projecting the coordinates (x_m, z_m) of the first feature point into the original three-dimensional space along the projection direction (in this embodiment, the Y-axis direction).
- The position of the linear region is represented by the coordinate group (x_m, y_i, z_m).
- FIG. 5 shows a low pixel value region and a linear region.
- FIG. 5 shows a low pixel value region 300 on the axial plane.
- the linear region 310 passes through the coordinates (x_m, z_m) of the first feature point and is represented by the coordinate group (x_m, y_i, z_m).
- Since the coordinates (x_m, z_m) of the pixel having the maximum value in the projection image 200 are back-projected into the three-dimensional space, one pixel in the lung field region exists in the resulting linear region (the coordinate group (x_m, y_i, z_m)).
- the second specifying unit 7 obtains a point on the linear region as the second feature point in the low pixel value region. Specifically, the second specifying unit 7 obtains an intersection (x, y, z) between the linear region and the outline of the low pixel value region, and defines the intersection as a second feature point. For example, as illustrated in FIG. 6, the second specifying unit 7 obtains an intersection 320 between the outline of the low pixel value region 300 and the linear region 310. Since this intersection 320 is likely to be in the lung field region in the low pixel value region 300, the pixel at this intersection 320 is defined as a pixel (second feature point) in the lung field region.
- the second specifying unit 7 outputs coordinate information indicating the coordinates (x, y, z) of the second feature point to the display control unit 10.
- the outline of the low pixel value region 300 and the linear region 310 intersect at two intersections (intersection 320 and intersection 321).
- the second specifying unit 7 sets any one of the intersections as the second feature point.
- the operator may designate one intersection from a plurality of intersections using an operation unit (not shown).
- the display control unit 10 to be described later may display a mark representing the second feature point on the medical image on the display unit 11 and the operator may specify a desired intersection using the operation unit.
- the second specifying unit 7 may use an arbitrary point on the linear region as the second feature point in the low pixel value region.
- the point on the line passing through the first feature point and extending in the projection direction is specified as the second feature point, but the line for specifying the second feature point is not limited to the one extending in the projection direction. That is, a point on a line passing through the first feature point may be specified as the second feature point.
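The intersection of the back-projected line with the contour of the low pixel value region can be sketched as the first mask voxel encountered along the line (x_m, y_i, z_m); a minimal illustration under the same assumed (x, y, z) array layout as above:

```python
import numpy as np

def second_feature_point(mask, xm, zm):
    """Walk the back-projected line (x_m, y_i, z_m) through the binary air
    mask and return the first in-region voxel, i.e. the point where the
    line crosses the contour of the low pixel value region."""
    ys = np.flatnonzero(mask[xm, :, zm])
    if ys.size == 0:
        return None  # the line misses the air region entirely
    return (xm, int(ys[0]), zm)
```

Taking the first crossing corresponds to picking one of the two intersections (320 and 321) described above; the other could be obtained with `ys[-1]`.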
- The second extraction unit 8 extracts the lung field region from the volume data using, for example, the region growing method.
- Specifically, the second extraction unit 8 receives the coordinates (x, y, z) of the second feature point from the second specifying unit 7 and reads the volume data from the image storage unit 2. The second extraction unit 8 then uses the second feature point as the start point (seed point) of the region growing method to extract, from the volume data, the pixels that can be regarded as the lung field region.
- the second extraction unit 8 outputs lung field region image data representing the lung field region image to the display control unit 10.
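The region growing step itself is not detailed in the source; the following is a minimal 6-connected sketch, again assuming a NumPy CT volume and an assumed air threshold of -400 HU for the "can be regarded as lung field" test.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, threshold=-400):
    """Grow a region from the seed voxel, collecting 6-connected voxels
    whose pixel value is at or below the threshold."""
    region = np.zeros(volume.shape, dtype=np.uint8)
    if volume[seed] > threshold:
        return region  # seed itself fails the criterion
    queue = deque([seed])
    region[seed] = 1
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            nx, ny, nz = x + dx, y + dy, z + dz
            if (0 <= nx < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nz < volume.shape[2]
                    and not region[nx, ny, nz]
                    and volume[nx, ny, nz] <= threshold):
                region[nx, ny, nz] = 1
                queue.append((nx, ny, nz))
    return region
```

A production implementation would typically use an optimized flood fill rather than this pure-Python queue, but the growth criterion is the same.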
- the image generation unit 9 reads volume data from the image storage unit 2 and performs volume rendering on the volume data to generate 3D image data.
- the image generation unit 9 may generate image data (MPR image data) in an arbitrary cross section by performing MPR processing (Multi Planar Reconstruction) on the volume data.
- the image generation unit 9 outputs medical image data such as 3D image data and MPR image data to the display control unit 10.
- The display control unit 10 receives the medical image data from the image generation unit 9 and causes the display unit 11 to display a medical image based on the medical image data. The display control unit 10 may also receive the coordinate information of the second feature point from the second specifying unit 7 and cause the display unit 11 to display a mark representing the second feature point on the medical image. FIGS. 7 and 8 show examples of medical images. For example, as illustrated in FIG. 7, the display control unit 10 causes the display unit 11 to display a mark 410 representing the second feature point on the axial image 400. Similarly, as shown in FIG. 8, the display control unit 10 may cause the display unit 11 to display a mark 510 representing the second feature point on the coronal image 500. When there are a plurality of second feature points, the display control unit 10 may cause the display unit 11 to display a mark representing each of them on the medical image.
- the display control unit 10 may receive the lung field region image data representing the lung field region image from the second extraction unit 8 and cause the display unit 11 to display the image representing the lung field region.
- the display unit 11 includes a monitor such as a CRT or a liquid crystal display.
- the display unit 11 displays a medical image, a lung field region, and the like.
- The function of each of the above units may be executed by a program.
- Each unit may be configured by a processing device (not shown) such as a CPU, GPU, or ASIC and a storage device (not shown) such as a ROM, RAM, or HDD.
- The storage device stores a first extraction program for executing the function of the first extraction unit 3, an addition program for executing the function of the addition unit 4, a first specifying program for executing the function of the first specifying unit 5, a linear region calculation program for executing the function of the linear region calculation unit 6, a second specifying program for executing the function of the second specifying unit 7, a second extraction program for executing the function of the second extraction unit 8, an image generation program for executing the function of the image generation unit 9, and a display control program for executing the function of the display control unit 10.
- A processing device such as a CPU executes the function of each unit by executing the corresponding program stored in the storage device.
- The first extraction program, the addition program, the first specifying program, the linear region calculation program, and the second specifying program constitute an example of a "medical image processing program".
- step S01 the first extraction unit 3 reads volume data from the image storage unit 2.
- step S02 the first extraction unit 3 extracts a low pixel value region (air region) from the volume data by threshold processing.
- step S03 the adding unit 4 adds the pixel values of each pixel in the low pixel value region of the volume data along a predetermined projection direction, thereby obtaining projection image data representing the distribution of the added value of the pixel values.
- the adding unit 4 uses the direction orthogonal to the coronal plane (Y-axis direction) as the projection direction, and adds the pixel values of each pixel on each coronal plane along the projection direction (Y-axis direction), thereby Projection image data representing the distribution of the sum of values is generated.
- step S04 the first specifying unit 5 sets the pixel having the maximum added value (pixel value) among the pixels of the projection image data as the first feature point (x_m, z_m).
- step S05 the linear region calculation unit 6 back-projects the coordinates (x_m, z_m) of the first feature point into the original three-dimensional space along the projection direction (Y-axis direction) to obtain a linear region (x_m, y_i, z_m). That is, the linear region calculation unit 6 obtains a linear region that passes through the first feature point and extends along the projection direction.
- step S06 the second specifying unit 7 obtains an intersection (x, y, z) between the low pixel value region and the linear region, and defines the intersection as a second feature point. Since there is a high possibility that this intersection point is in the lung field region in the low pixel value region, the pixel at this intersection point is defined as a pixel (second feature point) in the lung field region.
- step S07 the image generation unit 9 reads volume data from the image storage unit 2 and generates medical image data based on the volume data.
- the display control unit 10 causes the display unit 11 to display a mark representing the second feature point on the medical image.
- the display control unit 10 causes the display unit 11 to display a mark representing the second feature point on the axial image or the coronal image.
- step S08 the second extraction unit 8 reads the volume data from the image storage unit 2 and, using the second feature point (x, y, z) as the start point (seed point) of the region growing method, extracts from the volume data the pixels that can be regarded as the lung field region.
- step S09 the display control unit 10 causes the display unit 11 to display an image representing the extracted lung field region.
- The order of the process of step S07 and the process of step S08 may be reversed, or they may be performed simultaneously.
- The processes of steps S08 and S09 may be performed without performing the process of step S07, and the process of step S07 may be performed without performing the processes of steps S08 and S09.
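Steps S02 through S06 can be condensed into one small pipeline; the sketch below follows the same assumptions as the earlier fragments (NumPy CT volume indexed (x, y, z), an assumed threshold of -400 HU) and returns the seed point for step S08.

```python
import numpy as np

def locate_seed(volume, threshold=-400):
    """Steps S02-S06: air mask -> Y-axis additive projection -> maximum
    pixel (first feature point) -> back-projected line -> first in-region
    voxel (second feature point, used as the region-growing seed)."""
    mask = volume <= threshold                                # S02: air region
    proj = mask.sum(axis=1)                                   # S03: Map(x, z)
    xm, zm = np.unravel_index(np.argmax(proj), proj.shape)    # S04: first feature point
    ys = np.flatnonzero(mask[xm, :, zm])                      # S05: line (x_m, y_i, z_m)
    return (int(xm), int(ys[0]), int(zm))                     # S06: contour intersection
```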
- the medical image processing apparatus 1 it is possible to automatically determine one pixel (second feature point) included in the lung field region. Therefore, it is possible to automatically extract the lung field region. As a result, it is possible to save the trouble of visually determining one pixel included in the lung field region, and it is possible to shorten the diagnosis time.
- Furthermore, the reproducibility of specifying one pixel (second feature point) is higher than that of visual determination.
- Since the reproducibility of extracting the lung field region is correspondingly high, the same anatomical region can be extracted each time, which can be useful for prognostic diagnosis.
- Moreover, because one pixel (second feature point) included in the lung field region can be determined automatically, inter-observer and intra-observer errors can be reduced. Diagnosis can thus be supported without depending on the observer's experience.
- Although the case of extracting a lung field region has been described in this embodiment, regions such as the large intestine or stomach can also be extracted by executing the above-described processing on those organs.
- the medical image photographing apparatus 90 may have the function of the medical image processing apparatus 1.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Physics & Mathematics (AREA)
- Public Health (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Surgery (AREA)
- Veterinary Medicine (AREA)
- Radiology & Medical Imaging (AREA)
- Heart & Thoracic Surgery (AREA)
- High Energy & Nuclear Physics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Animal Behavior & Ethology (AREA)
- Optics & Photonics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Pulmonology (AREA)
- General Physics & Mathematics (AREA)
- Physiology (AREA)
- Computer Graphics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
Each pixel value Map(x, z) of the projection image data can be defined by the following Expression (1).
Expression (1): Map(x, z) = Σ Mask(x, y_i, z)
Here, Mask(x, y, z) is the pixel value of each pixel in the low pixel value region.
As an example, the range of the addition (the range of y_i) is i = 1 to 512; that is, the projection image data is generated by additively projecting 512 coronal planes.
The projection image 200 shown in FIG. 3 is the image corresponding to Map(x, z) in Expression (1).
Note that although a point on a line passing through the first feature point and extending along the projection direction is specified as the second feature point, the line used to specify the second feature point is not limited to one extending along the projection direction; a point on any line passing through the first feature point may be specified as the second feature point.
2 Image storage unit
3 First extraction unit
4 Addition unit
5 Linear region calculation unit: see 6 below; 5 First specifying unit
6 Linear region calculation unit
7 Second specifying unit
8 Second extraction unit
9 Image generation unit
10 Display control unit
11 Display unit
Claims (6)
- A medical image processing apparatus comprising: first extraction means for receiving volume data representing a region including a target organ and extracting an air region from the volume data; addition means for generating projection image data representing a distribution of added pixel values by adding the pixel value of each pixel in the air region of the volume data along a predetermined projection direction; first specifying means for specifying a first feature point from the projection image data; and second specifying means for specifying, as a second feature point, a point on a line passing through the first feature point in the air region.
- The medical image processing apparatus according to claim 1, wherein the second specifying means obtains an intersection between the line and an outline of the air region as the second feature point.
- The medical image processing apparatus according to claim 1 or claim 2, wherein the first specifying means obtains, as the first feature point, the pixel having the maximum added value among the pixels of the projection image data.
- The medical image processing apparatus according to any one of claims 1 to 3, further comprising second extraction means for specifying the target organ from the volume data by a region growing method using the second feature point as a start point.
- The medical image processing apparatus according to any one of claims 1 to 4, further comprising display control means for causing a display means to display a mark representing the second feature point superimposed on a medical image based on the volume data.
- A medical image processing program causing a computer to execute: a first extraction function of extracting an air region from volume data representing a region including a target organ; an addition function of generating projection image data representing a distribution of added pixel values by adding the pixel value of each pixel in the air region of the volume data along a predetermined projection direction; a first specifying function of specifying a first feature point from the projection image data; and a second specifying function of specifying, as a second feature point, a point on a line passing through the first feature point in the air region.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/497,413 US8655072B2 (en) | 2010-10-06 | 2011-10-06 | Medical image processing apparatus and medical image processing program |
CN201180002827.0A CN102573640B (zh) | 2010-10-06 | 2011-10-06 | 医用图像处理装置及医用图像处理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-226187 | 2010-10-06 | ||
JP2010226187A JP5835881B2 (ja) | 2010-10-06 | 2010-10-06 | 医用画像処理装置、及び医用画像処理プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012046451A1 true WO2012046451A1 (ja) | 2012-04-12 |
Family
ID=45927456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/005628 WO2012046451A1 (ja) | 2010-10-06 | 2011-10-06 | 医用画像処理装置、及び医用画像処理プログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US8655072B2 (ja) |
JP (1) | JP5835881B2 (ja) |
CN (1) | CN102573640B (ja) |
WO (1) | WO2012046451A1 (ja) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9730655B2 (en) * | 2013-01-21 | 2017-08-15 | Tracy J. Stark | Method for improved detection of nodules in medical images |
CN103315764A (zh) | 2013-07-17 | 2013-09-25 | 沈阳东软医疗系统有限公司 | 一种ct定位图像获取方法和ct设备 |
JP7311250B2 (ja) * | 2018-08-31 | 2023-07-19 | 株式会社小松製作所 | 作業機械の運搬物特定装置、作業機械、作業機械の運搬物特定方法、補完モデルの生産方法、および学習用データセット |
KR102271614B1 (ko) * | 2019-11-27 | 2021-07-02 | 연세대학교 산학협력단 | 3차원 해부학적 기준점 검출 방법 및 장치 |
CN111563876B (zh) * | 2020-03-24 | 2023-08-25 | 北京深睿博联科技有限责任公司 | 一种医学影像的获取方法、显示方法 |
KR20230089658A (ko) * | 2021-12-14 | 2023-06-21 | 사회복지법인 삼성생명공익재단 | 심층강화학습을 이용한 두부 계측 방사선 영상의 계측점 검출 방법 및 분석장치 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005124895A (ja) * | 2003-10-24 | 2005-05-19 | Hitachi Medical Corp | 画像診断支援装置 |
JP2005161032A (ja) * | 2003-11-10 | 2005-06-23 | Toshiba Corp | 画像処理装置 |
JP2008067851A (ja) * | 2006-09-13 | 2008-03-27 | Toshiba Corp | コンピュータ支援診断装置、x線ct装置及び画像処理装置 |
JP2008220416A (ja) * | 2007-03-08 | 2008-09-25 | Toshiba Corp | 医用画像処理装置及び医用画像診断装置 |
JP2010110544A (ja) * | 2008-11-10 | 2010-05-20 | Fujifilm Corp | 画像処理装置および方法並びにプログラム |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08336503A (ja) | 1995-06-15 | 1996-12-24 | Ge Yokogawa Medical Syst Ltd | 医用画像診断装置 |
US7596258B2 (en) * | 2003-11-10 | 2009-09-29 | Kabushiki Kaisha Toshiba | Image processor |
WO2006011545A1 (ja) * | 2004-07-30 | 2006-02-02 | Hitachi Medical Corporation | 医用画像診断支援方法、装置及び画像処理プログラム |
JP2007167152A (ja) | 2005-12-20 | 2007-07-05 | Hitachi Medical Corp | 磁気共鳴イメージング装置 |
JP2008253293A (ja) | 2007-03-30 | 2008-10-23 | Fujifilm Corp | Ct画像からの肺野領域抽出方法 |
JP5301197B2 (ja) * | 2008-04-11 | 2013-09-25 | 富士フイルム株式会社 | 断面画像表示装置および方法ならびにプログラム |
JP2010167067A (ja) * | 2009-01-22 | 2010-08-05 | Konica Minolta Medical & Graphic Inc | 医用画像処理装置及びプログラム |
WO2010132722A2 (en) * | 2009-05-13 | 2010-11-18 | The Regents Of The University Of California | Computer tomography sorting based on internal anatomy of patients |
DE102010013360B4 (de) * | 2010-03-30 | 2017-01-19 | Siemens Healthcare Gmbh | Verfahren zur Rekonstruktion von Bilddaten eines zyklisch sich bewegenden Untersuchungsobjektes |
-
2010
- 2010-10-06 JP JP2010226187A patent/JP5835881B2/ja active Active
-
2011
- 2011-10-06 CN CN201180002827.0A patent/CN102573640B/zh active Active
- 2011-10-06 US US13/497,413 patent/US8655072B2/en active Active
- 2011-10-06 WO PCT/JP2011/005628 patent/WO2012046451A1/ja active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005124895A (ja) * | 2003-10-24 | 2005-05-19 | Hitachi Medical Corp | 画像診断支援装置 |
JP2005161032A (ja) * | 2003-11-10 | 2005-06-23 | Toshiba Corp | 画像処理装置 |
JP2008067851A (ja) * | 2006-09-13 | 2008-03-27 | Toshiba Corp | コンピュータ支援診断装置、x線ct装置及び画像処理装置 |
JP2008220416A (ja) * | 2007-03-08 | 2008-09-25 | Toshiba Corp | 医用画像処理装置及び医用画像診断装置 |
JP2010110544A (ja) * | 2008-11-10 | 2010-05-20 | Fujifilm Corp | 画像処理装置および方法並びにプログラム |
Also Published As
Publication number | Publication date |
---|---|
JP2012075806A (ja) | 2012-04-19 |
CN102573640B (zh) | 2015-11-25 |
CN102573640A (zh) | 2012-07-11 |
US8655072B2 (en) | 2014-02-18 |
JP5835881B2 (ja) | 2015-12-24 |
US20120263359A1 (en) | 2012-10-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5835881B2 (ja) | 医用画像処理装置、及び医用画像処理プログラム | |
JP6039903B2 (ja) | 画像処理装置、及びその作動方法 | |
US9563978B2 (en) | Image generation apparatus, method, and medium with image generation program recorded thereon | |
US20110245660A1 (en) | Projection image generation apparatus and method, and computer readable recording medium on which is recorded program for the same | |
JP6040193B2 (ja) | 3次元方向設定装置および方法並びにプログラム | |
JP6824845B2 (ja) | 画像処理システム、装置、方法およびプログラム | |
JP2006519631A (ja) | 仮想内視鏡法の実行システムおよび実行方法 | |
JP2017102927A (ja) | 3d画像から2d画像へのマッピング | |
JP5826082B2 (ja) | 医用画像診断支援装置および方法並びにプログラム | |
JP6738631B2 (ja) | 医用画像処理装置、医用画像処理方法、及び医用画像処理プログラム | |
JP5624336B2 (ja) | 医用画像処理装置、及び医用画像処理プログラム | |
JP5595207B2 (ja) | 医用画像表示装置 | |
CN109389577B (zh) | X射线图像处理方法和系统及计算机存储介质 | |
US9585569B2 (en) | Virtual endoscopic projection image generating device, method and program | |
JP6487999B2 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
JP6263248B2 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
JP6820805B2 (ja) | 画像位置合わせ装置、その作動方法およびプログラム | |
JP5991731B2 (ja) | 情報処理装置及び情報処理方法 | |
US20170316619A1 (en) | Three-dimensional data processing system, method, and program, three-dimensional model, and three-dimensional model shaping device | |
US10636196B2 (en) | Image processing apparatus, method of controlling image processing apparatus and non-transitory computer-readable storage medium | |
JP2016101225A (ja) | 医用画像処理装置 | |
CN108113690B (zh) | 解剖腔的改善的可视化 | |
CN106887017B (zh) | 图像处理装置、图像处理系统及图像处理方法 | |
JP6598459B2 (ja) | 医用画像処理装置及び医用画像処理方法 | |
JP6847011B2 (ja) | 3次元画像処理装置および方法並びにプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180002827.0 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13497413 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11830385 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11830385 Country of ref document: EP Kind code of ref document: A1 |