WO2014041789A1 - Medical image display apparatus, method, and program - Google Patents
Medical image display apparatus, method, and program
- Publication number
- WO2014041789A1 (PCT/JP2013/005326)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tubular tissue
- cross
- image
- unit
- dimensional image
- Prior art date
Links
- method: title, claims, description (32)
- large intestine: claims, description (77)
- blood vessel: claims, description (8)
- bronchi: claims, description (4)
- small intestine: claims, description (4)
- colon: abstract, description (4)
- tissue: description (52)
- diagnosis: description (12)
- computed tomography: description (9)
- separation method: description (7)
- storage: description (5)
- extract: description (3)
- rendering: description (3)
- diagram: description (2)
- magnetic resonance imaging: description (2)
- manufacturing process: description (2)
- coronary vessel: description (1)
- effects: description (1)
- evaluation: description (1)
- extraction: description (1)
- in vivo: description (1)
- pressing: description (1)
- semiconductor: description (1)
- soft tissue: description (1)
- solid: description (1)
- visual effect: description (1)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/46—Arrangements for interfacing with the operator or the patient
- A61B6/461—Displaying means of special interest
- A61B6/463—Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5223—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data generating planar views from image data, e.g. extracting a coronal view from a 3D image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/68—Analysis of geometric attributes of symmetry
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/31—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the rectum, e.g. proctoscopes, sigmoidoscopes, colonoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/46—Arrangements for interfacing with the operator or the patient
- A61B6/461—Displaying means of special interest
- A61B6/466—Displaying means of special interest adapted to display 3D data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/50—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5205—Devices using data or image processing specially adapted for radiation diagnosis involving processing of raw data to produce diagnostic data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30028—Colon; Small intestine
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30172—Centreline of tubular or elongated structure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/028—Multiple view windows (top-side-front-sagittal-orthogonal)
Definitions
- the present invention relates to a medical image display apparatus, method, and program for extracting a tubular tissue region from a three-dimensional image of a subject and displaying the three-dimensional image of the tubular tissue region.
- tubular tissues such as the large intestine, small intestine, bronchi, and blood vessels of a patient have been extracted from a three-dimensional image captured by a modality such as a CT (Computed Tomography) apparatus, and the extracted three-dimensional image of the tubular tissue has been used for image diagnosis.
- CT: Computed Tomography
- in CT colonography, based on a three-dimensional image of the large intestine region, the path of an endoscope passing through the inside of the large intestine region is determined, and a technique has been proposed in which virtual endoscopic images resembling images photographed by an actual endoscope are generated while the viewpoint is moved along the determined path, and a route to a target point is navigated by displaying the virtual endoscopic images.
- when some portion of the large intestine is not filled with air, the CT value of that portion differs from the CT value of the large intestine region, that is, the CT value of air.
- in that case the large intestine region is not extracted as one region but is separated into a plurality of regions, as shown in FIG. 8.
- the portions between points P4 and P5 and between points P2 and P3 are separated portions.
- the endoscope path is also separated, so a virtual endoscopic image of the separated portion cannot be generated and displayed. It is therefore conceivable, for example, to connect the two separated large intestine regions with a straight line or curve according to a predetermined rule to form an endoscope path.
- however, because the large intestine is a soft tissue, it deforms greatly depending on the patient's posture at the time of imaging, and it is unlikely that such a straight line or curve matches the actual path of the large intestine region.
- Patent Document 1 discloses that, when a blood vessel core line is extracted, the user corrects the automatically extracted core line on a two-dimensional image.
- however, when correcting the core line on a two-dimensional image, it is difficult to grasp the actual three-dimensional structure of the blood vessel, so the core line cannot be corrected accurately.
- Patent Document 1 also proposes a method in which, when a blood vessel core line is extracted and the blood vessel has a blockage, the user newly adds a passing point of the core line and the core line is re-extracted so as to include the passing point.
- an object of the present invention is to provide a medical image display apparatus, method, and program capable of easily and accurately editing the route of a separated portion even when a tubular tissue region such as a large intestine region is extracted as a plurality of separated regions.
- the medical image display device of the present invention includes: a three-dimensional image acquisition unit that acquires a three-dimensional image of a subject; a tubular tissue region acquisition unit that acquires a tubular tissue region in the subject from the three-dimensional image acquired by the three-dimensional image acquisition unit; an end point specifying unit that, when the acquired tubular tissue region is separated, specifies the end points of the two tubular tissue regions adjoining the separated portion; a cross-sectional image generation unit that generates a cross-sectional image including the two specified end points; a display control unit that displays the cross-sectional image generated by the cross-sectional image generation unit and a three-dimensional image of the tubular tissue region; and a route receiving unit that receives an input of a route connecting the two tubular tissue regions.
- the cross-sectional image generation unit may be configured to generate a cross-sectional image that maximizes the inner product of the normal vector of the projection plane of the three-dimensional image of the tubular tissue region and the normal vector of the cross-sectional image.
- the end point specifying unit can specify the points input by the user using the input device as the two end points.
- the end point specifying unit can automatically detect and specify the two end points.
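When the two end points are detected automatically, one simple criterion is that the ends of a thinned (one-voxel-wide) centerline are the skeleton voxels with exactly one neighbor in their 26-neighborhood. The patent does not specify a detection algorithm; the following is a minimal Python sketch under that assumption, with numpy/scipy and the function name `skeleton_end_points` as illustrative choices:

```python
import numpy as np
from scipy.ndimage import convolve

def skeleton_end_points(skeleton):
    """Return voxel coordinates of the end points of a thinned centerline.

    An end point of a one-voxel-wide skeleton has exactly one neighbor
    in its 26-neighborhood (an ordinary path voxel has two).
    """
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0  # do not count the voxel itself
    neighbor_count = convolve(skeleton.astype(int), kernel, mode="constant")
    ends = (skeleton > 0) & (neighbor_count == 1)
    return np.argwhere(ends)

# a straight 5-voxel centerline segment: its two ends are end points
skel = np.zeros((7, 7, 7), dtype=bool)
skel[3, 3, 1:6] = True
print(skeleton_end_points(skel))  # two end points: (3, 3, 1) and (3, 3, 5)
```

On a separated large intestine region, such end points near the separation would be candidates for the two points the user otherwise selects manually.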
- the display control unit can display the cross-sectional image and the three-dimensional image of the tubular tissue region in an overlapping manner.
- the display control unit can display the cross-sectional image and the three-dimensional image of the tubular tissue region side by side.
- the cross-sectional image generation unit can generate a CPR (Curved Planar Reformation) image as the cross-sectional image, using the route received by the route receiving unit as a core line.
- CPR: Curved Planar Reformation
- the route receiving unit can receive the input of a route connecting the two tubular tissue regions a plurality of times, and the cross-sectional image generation unit can regenerate the CPR image each time a route is input.
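The CPR generation step can be illustrated with a short sketch. The patent does not prescribe an implementation; numpy/scipy, the function name `straightened_cpr`, and the use of a fixed cross direction are assumptions here (a full CPR would sweep a curved surface along the core line), but the core idea is the same: sample the volume along the core line, one row per core-line point.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straightened_cpr(volume, core_line, cross_dir, half_width=5):
    """Build a straightened CPR-like image from `volume`.

    `core_line` is an N x 3 list of voxel coordinates along the route;
    each output row samples the volume along the unit vector `cross_dir`
    centered on one core-line point (simplified: the direction is fixed).
    """
    cross_dir = np.asarray(cross_dir, dtype=float)
    cross_dir /= np.linalg.norm(cross_dir)
    offsets = np.arange(-half_width, half_width + 1)
    rows = []
    for p in np.asarray(core_line, dtype=float):
        pts = p[None, :] + offsets[:, None] * cross_dir[None, :]
        rows.append(map_coordinates(volume, pts.T, order=1))  # trilinear sampling
    return np.stack(rows)
```

Because the image is rebuilt from the current core line, regenerating it on every route input (as described above) is just a matter of calling this again with the newly input route.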
- tubular tissue can be the large intestine, small intestine, bronchi, or blood vessel.
- the medical image display method of the present invention acquires a three-dimensional image of a subject, acquires a tubular tissue region representing the shape of the tubular tissue in the subject from the acquired three-dimensional image, and, when the acquired tubular tissue region is separated, specifies the end points of the two tubular tissue regions adjoining the separated portion, generates a cross-sectional image including the two specified end points, displays the generated cross-sectional image and a three-dimensional image of the tubular tissue region, and receives an input of a route connecting the two tubular tissue regions.
- the medical image display program of the present invention causes a computer to function as: a three-dimensional image acquisition unit that acquires a three-dimensional image of a subject; a tubular tissue region acquisition unit that acquires a tubular tissue region representing the shape of a tubular tissue in the subject from the three-dimensional image acquired by the three-dimensional image acquisition unit; an end point specifying unit that, when the acquired tubular tissue region is separated, specifies the end points of the two tubular tissue regions adjoining the separated portion; a cross-sectional image generation unit that generates a cross-sectional image including the two end points specified by the end point specifying unit; a display control unit that displays the cross-sectional image generated by the cross-sectional image generation unit and a three-dimensional image of the tubular tissue region; and a route receiving unit that receives an input of a route connecting the two tubular tissue regions.
- according to the present invention, a tubular tissue region representing the shape of a tubular tissue in a subject is acquired from a three-dimensional image of the subject, and, when the acquired tubular tissue region is separated, the end points of the two tubular tissue regions adjoining the separated portion are specified, a cross-sectional image including the two specified end points is generated, the generated cross-sectional image and a three-dimensional image of the tubular tissue region are displayed, and an input of a route connecting the two tubular tissue regions is received. The user can therefore grasp the three-dimensional structure of the tubular tissue region while viewing the three-dimensional image and grasp the structure of the separated portion from the cross-sectional image while inputting the route, so the route of the separated portion can be edited easily and accurately.
- furthermore, when a CPR image is generated as the cross-sectional image using the route input by the user as a core line and is regenerated each time a route is input, the user can repeatedly input routes while viewing the regenerated CPR images and select and display the route most suitable for the tubular tissue.
- a block diagram showing the schematic configuration of an endoscopic image diagnosis support system using an embodiment of the medical image display apparatus of the present invention.
- a diagram for explaining the generation of the route.
- FIG. 1 is a block diagram showing a schematic configuration of an endoscopic image diagnosis support system using the first embodiment.
- This endoscopic image diagnosis support system includes an endoscopic image diagnosis support device 1, a three-dimensional image storage server 2, a display 3, and an input device 4, as shown in FIG.
- the endoscopic image diagnosis support apparatus 1 is a computer in which the medical image display program of this embodiment is installed.
- the endoscopic image diagnosis support apparatus 1 includes a central processing unit (CPU) and a semiconductor memory, and a storage device such as a hard disk or an SSD (Solid State Drive) in which the medical image display program of this embodiment is installed.
- CPU central processing unit
- SSD Solid State Drive
- as shown in FIG. 1, a three-dimensional image acquisition unit 10, a tubular tissue region acquisition unit 11, an endoscope route acquisition unit 12, an end point specifying unit 13, a cross-sectional image generation unit 14, a route receiving unit 15, a virtual endoscopic image generation unit 16, and a display control unit 17 are configured.
- the above-described units operate by the medical image display program of the present embodiment installed in the hard disk being executed by the central processing unit.
- the three-dimensional image acquisition unit 10 acquires a three-dimensional image 5 of a subject imaged in advance before an operation using an endoscope apparatus or before an examination.
- as the three-dimensional image 5, for example, volume data reconstructed from slice data output from a CT apparatus or an MRI (Magnetic Resonance Imaging) apparatus, or volume data output from an MS (Multi Slice) CT apparatus or a cone beam CT apparatus, can be used.
- the three-dimensional image 5 is stored in advance in the three-dimensional image storage server 2 together with identification information of the subject, and the three-dimensional image acquisition unit 10 reads from the three-dimensional image storage server 2 the three-dimensional image 5 corresponding to the identification information of the subject input via the input device 4.
- the tubular tissue region acquisition unit 11 receives the three-dimensional image 5 acquired by the three-dimensional image acquisition unit 10 and acquires the tubular tissue region in the subject from the input three-dimensional image 5.
- examples of the tubular tissue include the large intestine, the small intestine, the bronchus, and blood vessels such as the coronary artery, but the tubular tissue is not limited to these and may be another tubular tissue.
- in the present embodiment, a large intestine region representing the shape of the large intestine is extracted and acquired.
- as a method for extracting the large intestine region, first, a plurality of axial slice images having sections perpendicular to the body axis are generated based on the three-dimensional image 5. For each axial slice image, processing for separating the region outside the body from the region inside the body is performed by a known method based on the body surface: for example, binarization processing is performed on the axial slice image, a contour is extracted by contour extraction processing, and the inside of the extracted contour is taken as the body (human body) region.
- next, binarization processing using a threshold value (for example, −600 or less) is performed on the axial slice images of the in-body region, and large intestine region candidates in each axial slice image are extracted.
- the large intestine region is then acquired by extracting only the portions where the extracted in-body large intestine region candidates are connected between the axial slice images.
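The threshold-and-connect steps above can be sketched compactly. This is an illustrative Python sketch, not the patent's implementation: numpy/scipy, the function name `colon_region_candidates`, and the precomputed `body_mask` (the in-body region from the contour-extraction step) are assumptions.

```python
import numpy as np
from scipy import ndimage

AIR_THRESHOLD = -600  # CT value (HU) used as the binarization threshold above

def colon_region_candidates(ct_volume, body_mask):
    """Binarize air-valued voxels inside the body and keep only components
    that are connected across axial slices (axis 2 is the body axis here).
    """
    air = (ct_volume <= AIR_THRESHOLD) & body_mask
    labels, _ = ndimage.label(air)  # 3D connectivity links candidates between slices
    objects = ndimage.find_objects(labels)
    # keep components spanning more than one axial slice
    keep = [i + 1 for i, slc in enumerate(objects)
            if slc is not None and slc[2].stop - slc[2].start > 1]
    return np.isin(labels, keep)
```

A single-slice air pocket (e.g. bowel gas elsewhere) is rejected because its labeled component does not extend across slices, which mirrors the "connected between the axial slice images" condition in the text.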
- the method for acquiring the large intestine region is not limited to the above method; other known methods such as the Region Growing method and the Level Set method may be used.
- the endoscope route acquisition unit 12 extracts the tree structure of the large intestine by thinning the three-dimensional image of the large intestine region acquired as described above and estimating the center line of the large intestine, and acquires this as the endoscope route. A known method can be used for the thinning processing.
- the endoscope route acquisition unit 12 outputs information on the endoscope route acquired as described above to the display control unit 17, and the display control unit 17 displays the endoscope route on the display 3.
- when the large intestine region and the endoscope route acquired as described above are separated, the end point specifying unit 13 specifies the end points of the two large intestine regions adjoining the separated portion. Specifically, in the present embodiment, when the large intestine region displayed on the display 3 is separated, the user selects, using the input device 4, two points in the large intestine regions near the separated portion, for example the end points of the endoscope route displayed on the display 3. The end point specifying unit 13 acquires the position information of the points selected with the input device 4, thereby specifying the two end points of the two separated large intestine regions, and outputs the position information of the two end points to the cross-sectional image generation unit 14.
- the cross-sectional image generation unit 14 generates a cross-sectional image including two end points output from the end point specifying unit 13 based on the three-dimensional image acquired by the three-dimensional image acquisition unit 10.
- the cross-sectional image generation unit 14 outputs the generated cross-sectional image to the display control unit 17, and the cross-sectional image is displayed on the display 3 by the display control unit 17. A specific method for generating a cross-sectional image will be described in detail later.
- the route receiving unit 15 receives an input of a route connecting the two separated large intestine regions displayed on the display 3. Specifically, in the present embodiment, the user inputs, on the cross-sectional image displayed on the display 3, a route connecting the two separated large intestine regions using the input device 4, and the route receiving unit 15 acquires information on the input route. The route receiving unit 15 outputs the input route information to the display control unit 17, and the display control unit 17 displays the route on the display 3.
- the virtual endoscopic image generation unit 16 receives the three-dimensional image of the large intestine region acquired by the tubular tissue region acquisition unit 11, the endoscope route acquired by the endoscope route acquisition unit 12, and the route input by the user via the route receiving unit 15.
- the virtual endoscopic image generation unit 16 generates a virtual endoscopic image with a predetermined point on the line that combines the endoscope path and the path input by the user as a viewpoint.
- the virtual endoscopic image generation unit 16 acquires, as the virtual endoscopic image, a projection image based on central projection obtained by projecting the three-dimensional image along a plurality of lines of sight extending radially from the viewpoint described above onto a predetermined projection plane.
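The radial sampling at the heart of this central projection can be illustrated with a crude sketch. The patent relies on known volume rendering; the code below only shows the ray geometry (rays cast radially from the viewpoint, first sample above a surface threshold), with numpy/scipy and the function name `first_hit_depths` as assumptions rather than the actual rendering method.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def first_hit_depths(volume, viewpoint, directions, threshold,
                     max_depth=50.0, step=0.5):
    """Cast rays radially from `viewpoint` and return, for each ray, the
    depth of the first sample whose value reaches `threshold` (a stand-in
    for the wall the virtual endoscope would render)."""
    viewpoint = np.asarray(viewpoint, dtype=float)
    depths = np.full(len(directions), np.inf)
    ts = np.arange(step, max_depth, step)
    for i, d in enumerate(directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        pts = viewpoint[None, :] + ts[:, None] * d[None, :]
        vals = map_coordinates(volume, pts.T, order=1, cval=0.0)
        hit = np.nonzero(vals >= threshold)[0]
        if hit.size:
            depths[i] = ts[hit[0]]
    return depths
```

A real implementation would accumulate opacity and color along each ray (volume rendering) instead of stopping at the first hit, and would map the rays to pixels of the projection plane within the preset angle of view.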
- a known volume rendering method or the like can be used.
- the angle of view (the range of the line of sight) and the center of the visual field (the center in the projection direction) of the virtual endoscopic image are set in advance by user input or the like.
- the virtual endoscopic image generation unit 16 outputs the virtual endoscopic image generated as described above to the display control unit 17, and the display control unit 17 displays the virtual endoscopic image on the display 3.
- the display control unit 17 receives the three-dimensional image of the large intestine region acquired by the tubular tissue region acquisition unit 11, performs volume rendering or surface rendering on the three-dimensional image, and displays the three-dimensional image of the entire large intestine on the display 3 as a voxel model or a surface model. Further, the display control unit 17 receives the endoscope route acquired by the endoscope route acquisition unit 12 and the route acquired by the route receiving unit 15, and displays these routes superimposed on the three-dimensional image of the entire large intestine.
- the display control unit 17 causes the display 3 to display the cross-sectional image generated by the cross-sectional image generation unit 14 and the three-dimensional image of the entire large intestine. Further, the display control unit 17 displays the virtual endoscopic image generated by the virtual endoscopic image generation unit 16 on the display 3.
- the input device 4 accepts input of the various information described above from the user, and is constituted by, for example, a keyboard and a pointing device such as a mouse.
- identification information of a subject is input via the input device 4, and the three-dimensional image acquisition unit 10 of the endoscopic image diagnosis support apparatus 1 reads out and acquires the three-dimensional image 5 corresponding to the input identification information from the three-dimensional image storage server 2 (S10).
- the three-dimensional image acquired by the three-dimensional image acquisition unit 10 is input to the tubular tissue region acquisition unit 11, and the tubular tissue region acquisition unit 11 extracts and acquires the large intestine region based on the input three-dimensional image (S12).
- the 3D image of the large intestine region acquired by the tubular tissue region acquisition unit 11 is output to the display control unit 17, and the display control unit 17 displays the 3D image of the entire large intestine region on the display 3 (S14).
- FIG. 3 shows an example of the large intestine region displayed on the display 3.
- the three-dimensional image of the large intestine region acquired by the tubular tissue region acquisition unit 11 is input to the endoscope route acquisition unit 12, and the endoscope route acquisition unit 12 acquires the endoscope route as described above based on the input three-dimensional image of the large intestine region.
- the endoscope path acquired by the endoscope path acquiring unit 12 is output to the display control unit 17, and the display control unit 17 displays the input endoscope path on the display 3.
- the display control unit 17 displays the endoscope path so as to overlap the three-dimensional image of the large intestine region.
- FIG. 3 shows an example of the endoscope path displayed on the display.
- when the large intestine region is separated into a plurality of regions as shown in FIG. 3, an accurate endoscope route cannot be obtained for the separated portions.
- the end point P4 and the end point P5 on the endoscope path are separated, and the end point P2 and the end point P3 on the endoscope path are separated.
- an input by the user of the endoscope path in the separation portion shown in FIG. 3 is accepted. Specifically, the following processing is performed.
- the user viewing the three-dimensional image of the large intestine region displayed on the display 3 confirms the separated portions, and inputs, using the input device 4, the end points of the two large intestine regions near each separated portion.
- specifically, the end points P4 and P5 and the end points P2 and P3 on the endoscope route shown in FIG. 3 are selected. The position information of the end points P4 and P5 and of the end points P2 and P3 selected using the input device 4 is acquired by the end point specifying unit 13, and the end point specifying unit 13 specifies the input position information as the position information of the end points of the two separated large intestine regions (S20).
- in step S22, a cross-sectional image including the input end points is generated.
- specifically, when the cross-sectional image generation unit 14 acquires the position information of the end points P4 and P5 illustrated in FIG. 3, it generates a cross-sectional image that includes the end points P4 and P5 and whose normal vector maximizes the inner product with the normal vector of the projection plane of the three-dimensional image of the large intestine region.
- FIG. 4 shows a case where a straight line connecting the end point P4 and the end point P5 exists on the above-described projection plane.
- in this case, among the cross sections L including the end points P4 and P5, an image of the cross section L whose angle θ with respect to the projection plane is minimum is generated.
- in the example of FIG. 4, an image of the cross section L with θ = 0°, that is, the cross section L whose normal vector is parallel to the normal vector of the projection plane, is generated.
- FIG. 5 is a view of the image of the large intestine region shown in FIG. 4 as viewed from above.
- in this way, the cross-sectional image generation unit 14 generates a cross-sectional image that includes the two end points and whose normal vector maximizes the inner product with the normal vector of the projection plane of the three-dimensional image of the large intestine region.
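This maximization has a closed form: among all planes containing the line through the two end points, the unit normal closest to the projection-plane normal is the component of that normal orthogonal to the line direction. The patent does not give a formula; the sketch below (numpy, hypothetical function name `best_section_normal`) illustrates the geometry under that derivation.

```python
import numpy as np

def best_section_normal(p_a, p_b, n_proj):
    """Among planes containing both end points p_a and p_b, return the unit
    normal maximizing the inner product with the projection-plane normal
    n_proj. Assumes n_proj is not parallel to the line through p_a, p_b."""
    d = np.asarray(p_b, dtype=float) - np.asarray(p_a, dtype=float)
    d /= np.linalg.norm(d)
    # project n_proj onto the subspace of normals admissible for such planes
    # (every admissible normal is perpendicular to the line direction d)
    n = np.asarray(n_proj, dtype=float) - np.dot(n_proj, d) * d
    return n / np.linalg.norm(n)
```

When the line P4-P5 already lies on the projection plane (the case of FIG. 4), the returned normal equals the projection-plane normal itself, i.e. θ = 0°.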
- FIG. 6 shows an example in which the cross-sectional image including the end points P4 and P5 is displayed so as to be superimposed on the three-dimensional image of the large intestine region as described above.
- then, on the cross-sectional image, the user inputs a route connecting the two separated large intestine regions using the input device 4, and the route receiving unit 15 acquires information on the input route.
- the route input by the user is not limited to a single attempt; the input route can be erased and a route can be input again. That is, the user can repeatedly input and edit the route until the desired route is obtained.
- when the final route is determined, the route receiving unit 15 outputs information on the final route to the virtual endoscopic image generation unit 16.
- the virtual endoscopic image generation unit 16 acquires the endoscope route from the endoscope route acquisition unit 12 and the route output from the route receiving unit 15, connects these two routes to obtain the final endoscope route, and generates a virtual endoscopic image based on that endoscope route (S30).
- the viewpoint may be moved along the final endoscope path to sequentially generate the virtual endoscopic image at each viewpoint.
- Alternatively, the endoscope path may be displayed on the display 3, and a virtual endoscopic image may be generated for a viewpoint designated by the user on the displayed path using the input device 4.
- The virtual endoscopic image generated by the virtual endoscopic image generation unit 16 is output to the display control unit 17, and the display control unit 17 displays the input virtual endoscopic image on the display 3 (S32).
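Moving the viewpoint along the final endoscope path and generating a virtual endoscopic image at each position can be sketched as resampling the path at equal arc-length steps and deriving a view direction at each sample. The rendering itself (volume rendering from each viewpoint) is omitted here, and all names are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def viewpoints_along_path(path, step):
    """Resample a polyline endoscope path at roughly equal arc-length
    intervals and return (viewpoint, view_direction) pairs for
    sequential virtual endoscopic image generation."""
    path = np.asarray(path, dtype=float)
    seg_len = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_len)])  # cumulative arc length
    samples = np.arange(0.0, s[-1], step)
    # Linearly interpolate each coordinate against arc length.
    pts = np.stack([np.interp(samples, s, path[:, k]) for k in range(3)], axis=1)
    dirs = np.gradient(pts, axis=0)                  # tangent estimate
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return list(zip(pts, dirs))
```

For a straight 10 mm path sampled every 2 mm this yields five viewpoints, each looking along the path direction; in practice each (viewpoint, direction) pair would be handed to the virtual endoscopic renderer.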
- According to the endoscopic image diagnosis support system of the above-described embodiment, when a large intestine region is extracted from a three-dimensional image of a subject and the acquired large intestine region is separated, the end points of the two large intestine regions leading to the separated portion are identified, a cross-sectional image including the two identified end points is generated, and the generated cross-sectional image is displayed together with a three-dimensional image of the large intestine region. The user can thus understand the three-dimensional structure of the large intestine region from the three-dimensional image while grasping the structure of the separated portion from the cross-sectional image, and can input a route connecting the two large intestine regions accordingly. The route across the separated portion can therefore be edited easily and accurately.
- The description above covers the case where the end point P4 and the end point P5 are selected by the user and a cross-sectional image including them is generated.
- When the end point P2 and the end point P3 are selected by the user, a cross-sectional image including the end point P2 and the end point P3 is generated and displayed superimposed on the three-dimensional image of the large intestine region.
- The method for generating this cross-sectional image is the same as described above: a cross-sectional image is generated that includes the end point P2 and the end point P3 and whose normal vector has the maximum inner product with the normal vector of the projection plane of the three-dimensional image of the large intestine region. Then, on the cross-sectional image, a route connecting the two separated large intestine regions is input by the user, and one new endoscope route is determined.
- In the above embodiment, the end points P4 and P5 and the end points P2 and P3 are selected by the user; however, these end points may instead be detected automatically.
- For example, the end point specifying unit 13 may automatically detect the end points of the endoscope path acquired by the endoscope path acquisition unit 12.
- In this case, before the display, a route connecting the two separated large intestine regions is first input by the user. For example, in the example of the above embodiment, a route connecting the end point P4 and the end point P5 is input as appropriate by the user.
- A CPR image is then generated in the cross-sectional image generation unit 14 using the route input by the user as a core line, and this CPR image is displayed on the display 3 by the display control unit 17.
- the CPR image generation method is already known as described in, for example, Japanese Patent Application Laid-Open No. 2012-24517, and therefore detailed description thereof is omitted here.
- If the user who observes the CPR image displayed on the display 3 finds that the separated portion of the large intestine region is not clearly shown in the currently displayed CPR image and that it is unsuitable for inputting the correct endoscope path, the user inputs a route connecting the end point P4 and the end point P5 again, a CPR image having that route as its core line is generated again, and the display is switched from the previous CPR image to the regenerated one. In this way, the input of a route by the user and the display of a CPR image with that route as its core line are repeated, and the final route is determined.
- In the above embodiment, the three-dimensional image of the large intestine region and the cross-sectional image are displayed in an overlapping manner.
- However, the present invention is not limited to this, and the three-dimensional image of the large intestine region and the cross-sectional image may be displayed side by side.
- When there are a plurality of separated portions, cross-sectional images of the plurality of separated portions may be obtained, for example, a cross-sectional image including the end points P4 and P5 and a cross-sectional image including the end points P2 and P3.
- In the above embodiment, when generating a cross-sectional image, a cross-sectional image is generated that includes the two end points and whose normal vector has the maximum inner product with the normal vector of the projection plane of the three-dimensional image of the large intestine region.
- However, the method of determining the cross-sectional image is not limited to this.
- For example, a plurality of cross-sectional images including the two end points may be generated and displayed on the display 3.
- The user may then select one appropriate image from the plurality of cross-sectional images, and the selected cross-sectional image may be displayed on the three-dimensional image of the large intestine region.
- Alternatively, a plurality of cross-sectional images including the two end points may be generated and, as shown in FIG. 7, displayed while being switched in turn as they rotate about the straight line connecting the two end points.
- The user may then select one appropriate image from the plurality of cross-sectional images displayed in this way. Note that the cross-sectional images may be switched by the user using a mouse or the like.
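The rotated candidate planes of FIG. 7 all share the straight line connecting the two end points, so they can be enumerated by rotating one base normal about the line direction (a degenerate case of Rodrigues' rotation, since the base normal is perpendicular to the axis). A hedged sketch, with function and parameter names chosen for illustration rather than taken from the patent:

```python
import numpy as np

def candidate_plane_normals(p_a, p_b, n_candidates=12):
    """Generate unit normals of candidate cross-section planes, all
    containing the straight line through p_a and p_b, by rotating a
    base normal about the line direction. Rotating theta over [0, pi)
    suffices, since a normal and its negation describe the same plane."""
    d = np.asarray(p_b, dtype=float) - np.asarray(p_a, dtype=float)
    d /= np.linalg.norm(d)
    # Pick any helper vector not parallel to the line direction.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, d)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    n0 = np.cross(d, helper)
    n0 /= np.linalg.norm(n0)
    normals = []
    for theta in np.linspace(0.0, np.pi, n_candidates, endpoint=False):
        # Rodrigues' formula with d . n0 == 0, so the axial term vanishes.
        n = n0 * np.cos(theta) + np.cross(d, n0) * np.sin(theta)
        normals.append(n)
    return normals
```

Each candidate plane would be rendered in turn over the three-dimensional image, and the user would pick one, matching the switched display described above.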
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- General Health & Medical Sciences (AREA)
- Radiology & Medical Imaging (AREA)
- High Energy & Nuclear Physics (AREA)
- Biomedical Technology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- Animal Behavior & Ethology (AREA)
- Surgery (AREA)
- Molecular Biology (AREA)
- Heart & Thoracic Surgery (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
Description
The endoscopic image diagnosis support apparatus 1 includes a central processing unit (CPU), semiconductor memory, and a storage device such as a hard disk or SSD (Solid State Drive) on which the medical image display program of the present embodiment is installed. This hardware constitutes the three-dimensional image acquisition unit 10, the tubular tissue region acquisition unit 11, the endoscope path acquisition unit 12, the end point specifying unit 13, the cross-sectional image generation unit 14, the route reception unit 15, the virtual endoscopic image generation unit 16, and the display control unit 17 shown in FIG. 1. Each of these units operates when the medical image display program of the present embodiment, installed on the hard disk, is executed by the central processing unit.
The tubular tissue region acquisition unit 11 receives the three-dimensional image 5 acquired by the three-dimensional image acquisition unit 10 and acquires a tubular tissue region within the subject from that image. Examples of the tubular tissue include the large intestine, the small intestine, the bronchi, and blood vessels such as the coronary arteries, but the tubular tissue is not limited to these and may be other tubular tissue. In the present embodiment, the shape of the large intestine is extracted and acquired.
Claims (11)
- A medical image display apparatus comprising:
a three-dimensional image acquisition unit that acquires a three-dimensional image of a subject;
a tubular tissue region acquisition unit that acquires a tubular tissue region within the subject from the three-dimensional image acquired by the three-dimensional image acquisition unit;
an end point specifying unit that, when the tubular tissue region acquired by the tubular tissue region acquisition unit is separated, specifies each of the end points of the two tubular tissue regions leading to the separated portion;
a cross-sectional image generation unit that generates a cross-sectional image including the two end points specified by the end point specifying unit;
a display control unit that displays the cross-sectional image generated by the cross-sectional image generation unit and a three-dimensional image of the tubular tissue region; and
a route reception unit that receives input of a route connecting the two tubular tissue regions. - The medical image display apparatus according to claim 1, wherein the cross-sectional image generation unit generates the cross-sectional image for which the inner product of the normal vector of the projection plane of the three-dimensional image of the tubular tissue region and the normal vector of the cross-sectional image is maximum.
- The medical image display apparatus according to claim 1 or 2, wherein the end point specifying unit specifies points input by a user using an input device as the two end points.
- The medical image display apparatus according to claim 1 or 2, wherein the end point specifying unit automatically detects and specifies the two end points.
- The medical image display apparatus according to any one of claims 1 to 4, wherein the display control unit displays the cross-sectional image and the three-dimensional image of the tubular tissue region in a superimposed manner.
- The medical image display apparatus according to any one of claims 1 to 4, wherein the display control unit displays the cross-sectional image and the three-dimensional image of the tubular tissue region side by side.
- The medical image display apparatus according to any one of claims 1 to 6, wherein the cross-sectional image generation unit generates a CPR (Curved Planar Reformation) image as the cross-sectional image, using the route received by the route reception unit as a core line.
- The medical image display apparatus according to claim 9, wherein the route reception unit receives a route of a line connecting the two tubular tissue regions a plurality of times, and
the cross-sectional image generation unit generates the CPR image each time the route is input. - The medical image display apparatus according to any one of claims 1 to 8, wherein the tubular tissue is the large intestine, the small intestine, a bronchus, or a blood vessel.
- A medical image display method comprising:
acquiring a three-dimensional image of a subject;
acquiring, from the acquired three-dimensional image, a tubular tissue region representing the shape of tubular tissue within the subject;
specifying, when the acquired tubular tissue region is separated, each of the end points of the two tubular tissue regions leading to the separated portion;
generating a cross-sectional image including the two specified end points;
displaying the generated cross-sectional image and a three-dimensional image of the tubular tissue region; and
receiving input of a route connecting the two tubular tissue regions. - A medical image display program for causing a computer to function as:
a three-dimensional image acquisition unit that acquires a three-dimensional image of a subject;
a tubular tissue region acquisition unit that acquires, from the three-dimensional image acquired by the three-dimensional image acquisition unit, a tubular tissue region representing the shape of tubular tissue within the subject;
an end point specifying unit that, when the tubular tissue region acquired by the tubular tissue region acquisition unit is separated, specifies each of the end points of the two tubular tissue regions leading to the separated portion;
a cross-section generation unit that generates a cross-sectional image including the two end points specified by the end point specifying unit;
a display control unit that displays the cross-sectional image generated by the cross-section generation unit and a three-dimensional image of the tubular tissue region; and
a route reception unit that receives input of a route connecting the two tubular tissue regions.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2884341A CA2884341A1 (en) | 2012-09-12 | 2013-09-09 | Medical image display apparatus, method, and program |
AU2013317199A AU2013317199A1 (en) | 2012-09-12 | 2013-09-09 | Medical image display device, method, and programme |
EP13836430.2A EP2896367B1 (en) | 2012-09-12 | 2013-09-09 | Medical image display device, method, and programme |
US14/637,547 US9558589B2 (en) | 2012-09-12 | 2015-03-04 | Medical image display apparatus, method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012200429A JP5930539B2 (ja) | 2012-09-12 | 2012-09-12 | 医用画像表示装置および方法並びにプログラム |
JP2012-200429 | 2012-09-12 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/637,547 Continuation US9558589B2 (en) | 2012-09-12 | 2015-03-04 | Medical image display apparatus, method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014041789A1 true WO2014041789A1 (ja) | 2014-03-20 |
Family
ID=50277931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/005326 WO2014041789A1 (ja) | 2012-09-12 | 2013-09-09 | 医用画像表示装置および方法並びにプログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US9558589B2 (ja) |
EP (1) | EP2896367B1 (ja) |
JP (1) | JP5930539B2 (ja) |
AU (1) | AU2013317199A1 (ja) |
CA (1) | CA2884341A1 (ja) |
WO (1) | WO2014041789A1 (ja) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6026932B2 (ja) * | 2013-03-22 | 2016-11-16 | 富士フイルム株式会社 | 医用画像表示制御装置および方法並びにプログラム |
JP6659501B2 (ja) * | 2016-09-14 | 2020-03-04 | 富士フイルム株式会社 | 軟骨定量化装置、方法およびプログラム |
JP7553381B2 (ja) * | 2021-02-22 | 2024-09-18 | ザイオソフト株式会社 | 医用画像処理装置、医用画像処理方法、及び医用画像処理プログラム |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007501675A (ja) * | 2003-08-14 | 2007-02-01 | シーメンス メディカル ソリューションズ ユーエスエー インコーポレイテッド | 仮想内視鏡画像の登録方法および仮想内視鏡画像の登録装置 |
WO2007129616A1 (ja) * | 2006-05-02 | 2007-11-15 | National University Corporation Nagoya University | 内視鏡挿入支援システム及び内視鏡挿入支援方法 |
JP2011024913A (ja) * | 2009-07-28 | 2011-02-10 | Toshiba Corp | 医用画像処理装置、医用画像処理プログラム、及びx線ct装置 |
JP2012024517A (ja) | 2010-07-28 | 2012-02-09 | Fujifilm Corp | 診断支援装置、診断支援プログラムおよび診断支援方法 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6694163B1 (en) * | 1994-10-27 | 2004-02-17 | Wake Forest University Health Sciences | Method and system for producing interactive, three-dimensional renderings of selected body organs having hollow lumens to enable simulated movement through the lumen |
US5611025A (en) * | 1994-11-23 | 1997-03-11 | General Electric Company | Virtual internal cavity inspection system |
US6343936B1 (en) * | 1996-09-16 | 2002-02-05 | The Research Foundation Of State University Of New York | System and method for performing a three-dimensional virtual examination, navigation and visualization |
US5891030A (en) * | 1997-01-24 | 1999-04-06 | Mayo Foundation For Medical Education And Research | System for two dimensional and three dimensional imaging of tubular structures in the human body |
US6928314B1 (en) * | 1998-01-23 | 2005-08-09 | Mayo Foundation For Medical Education And Research | System for two-dimensional and three-dimensional imaging of tubular structures in the human body |
US7477768B2 (en) * | 1999-06-29 | 2009-01-13 | The Research Foundation Of State University Of New York | System and method for performing a three-dimensional virtual examination of objects, such as internal organs |
US7324104B1 (en) * | 2001-09-14 | 2008-01-29 | The Research Foundation Of State University Of New York | Method of centerline generation in virtual objects |
JP4421203B2 (ja) | 2003-03-20 | 2010-02-24 | 株式会社東芝 | 管腔状構造体の解析処理装置 |
JP4343723B2 (ja) * | 2004-01-30 | 2009-10-14 | オリンパス株式会社 | 挿入支援システム |
US7711163B2 (en) * | 2005-05-26 | 2010-05-04 | Siemens Medical Solutions Usa, Inc. | Method and system for guided two dimensional colon screening |
US9968256B2 (en) * | 2007-03-08 | 2018-05-15 | Sync-Rx Ltd. | Automatic identification of a tool |
CN102782719B (zh) * | 2009-11-27 | 2015-11-25 | 道格微系统有限公司 | 用于确定管状结构的拓扑支撑的评估的方法和系统及其在虚拟内窥镜检查中的使用 |
US9401047B2 (en) * | 2010-04-15 | 2016-07-26 | Siemens Medical Solutions, Usa, Inc. | Enhanced visualization of medical image data |
JP5675227B2 (ja) * | 2010-08-31 | 2015-02-25 | 富士フイルム株式会社 | 内視鏡画像処理装置および作動方法、並びに、プログラム |
-
2012
- 2012-09-12 JP JP2012200429A patent/JP5930539B2/ja not_active Expired - Fee Related
-
2013
- 2013-09-09 CA CA2884341A patent/CA2884341A1/en active Pending
- 2013-09-09 EP EP13836430.2A patent/EP2896367B1/en not_active Not-in-force
- 2013-09-09 AU AU2013317199A patent/AU2013317199A1/en not_active Abandoned
- 2013-09-09 WO PCT/JP2013/005326 patent/WO2014041789A1/ja active Application Filing
-
2015
- 2015-03-04 US US14/637,547 patent/US9558589B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007501675A (ja) * | 2003-08-14 | 2007-02-01 | シーメンス メディカル ソリューションズ ユーエスエー インコーポレイテッド | 仮想内視鏡画像の登録方法および仮想内視鏡画像の登録装置 |
WO2007129616A1 (ja) * | 2006-05-02 | 2007-11-15 | National University Corporation Nagoya University | 内視鏡挿入支援システム及び内視鏡挿入支援方法 |
JP2011024913A (ja) * | 2009-07-28 | 2011-02-10 | Toshiba Corp | 医用画像処理装置、医用画像処理プログラム、及びx線ct装置 |
JP2012024517A (ja) | 2010-07-28 | 2012-02-09 | Fujifilm Corp | 診断支援装置、診断支援プログラムおよび診断支援方法 |
Non-Patent Citations (3)
Title |
---|
M. YASUE ET AL.: "Thinning Algorithms for Three-Dimensional Gray Images and Their Application to Medical Images with Comparative Evaluation of Performance", JOURNAL OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. J79-D-11, no. 10, 1996, pages 1664 - 1674 |
See also references of EP2896367A4 |
T. SAITO ET AL.: "An Improvement of Three Dimensional Thinning Method Using a Skeleton Based on the Euclidean Distance Transformation-A Method to Control Spurious Branches", JOURNAL OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. J84-D-II, no. 8, 2001, pages 1628 - 1632 |
Also Published As
Publication number | Publication date |
---|---|
CA2884341A1 (en) | 2014-03-20 |
AU2013317199A1 (en) | 2015-04-30 |
EP2896367B1 (en) | 2017-11-29 |
EP2896367A4 (en) | 2016-04-20 |
US20150178989A1 (en) | 2015-06-25 |
JP2014054359A (ja) | 2014-03-27 |
JP5930539B2 (ja) | 2016-06-08 |
US9558589B2 (en) | 2017-01-31 |
EP2896367A1 (en) | 2015-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5918548B2 (ja) | 内視鏡画像診断支援装置およびその作動方法並びに内視鏡画像診断支援プログラム | |
US20170296032A1 (en) | Branching structure determination apparatus, method, and program | |
JP5835680B2 (ja) | 画像位置合わせ装置 | |
CN107980148B (zh) | 对图像进行融合以考虑运动补偿的系统和方法 | |
JP5369078B2 (ja) | 医用画像処理装置および方法、並びにプログラム | |
US20160228075A1 (en) | Image processing device, method and recording medium | |
JP5947707B2 (ja) | 仮想内視鏡画像表示装置および方法並びにプログラム | |
JP5785120B2 (ja) | 医用画像診断支援装置および方法並びにプログラム | |
JP2006246941A (ja) | 画像処理装置及び管走行トラッキング方法 | |
JP5777070B2 (ja) | 領域抽出装置、領域抽出方法および領域抽出プログラム | |
US20150187085A1 (en) | Image processing apparatus, method and program | |
US20120026162A1 (en) | Diagnosis assisting apparatus, diagnosis assisting program, and diagnosis assisting method | |
US9198603B2 (en) | Device, method and program for searching for the shortest path in a tubular structure | |
JP5826082B2 (ja) | 医用画像診断支援装置および方法並びにプログラム | |
US10398286B2 (en) | Medical image display control apparatus, method, and program | |
US9501709B2 (en) | Medical image processing apparatus | |
JP5930539B2 (ja) | 医用画像表示装置および方法並びにプログラム | |
JP5554028B2 (ja) | 医用画像処理装置、医用画像処理プログラム、及びx線ct装置 | |
US11056149B2 (en) | Medical image storage and reproduction apparatus, method, and program | |
US20170018079A1 (en) | Image processing device, method, and recording medium having stored therein program | |
JP6026357B2 (ja) | 仮想内視鏡画像生成装置および方法並びにプログラム | |
JP5918734B2 (ja) | 医用画像処理装置およびその作動方法並びに医用画像処理プログラム | |
JP6775294B2 (ja) | 画像処理装置および方法並びにプログラム | |
Rigas et al. | Methodology for micro-CT data inflation using intravascular ultrasound images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13836430 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2884341 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2013836430 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013836430 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2013317199 Country of ref document: AU Date of ref document: 20130909 Kind code of ref document: A |