WO2007026598A1 - Medical image processing apparatus and image processing method - Google Patents


Info

Publication number
WO2007026598A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blood vessel
head
image processing
processing apparatus
Prior art date
Application number
PCT/JP2006/316595
Other languages
English (en)
Japanese (ja)
Inventor
Hiroshi Fujita
Yoshikazu Uchiyama
Toru Iwama
Hiromichi Ando
Hitoshi Futamura
Original Assignee
Gifu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gifu University filed Critical Gifu University
Priority to JP2007533204A priority Critical patent/JP4139869B2/ja
Publication of WO2007026598A1 publication Critical patent/WO2007026598A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/50Clinical applications
    • A61B6/504Clinical applications involving diagnosis of blood vessels, e.g. by angiography
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Definitions

  • the present invention relates to a medical image processing apparatus and an image processing method for performing image analysis and image processing of a head image obtained by imaging a patient's head.
  • Detection of an unruptured cerebral aneurysm by a doctor is performed using an MRA (Magnetic Resonance Angiography) image, in which the blood flow in the blood vessels is imaged by MRI.
  • For observation, a 2D image created by MIP (Maximum Intensity Projection) processing is used. Because an unruptured cerebral aneurysm occurring in a blood vessel is small, it must be distinguished from the surrounding blood vessel images displayed in an overlapping manner; this severely fatigues the doctor, and the fatigue may lead to oversight.
  • A technique is also disclosed in which the luminance of the blood vessel image with the larger depth information (that is, the blood vessel portion on the rear side in the line-of-sight direction) is reduced at intersections (see, for example, Patent Document 2). According to this method, a sense of perspective can be given to the blood vessel image, and the blood vessel site on the near side becomes easy to observe.
  • Patent Document 1: Japanese Patent Laid-Open No. 2002-112986
  • Patent Document 2: Japanese Patent Laid-Open No. 5-277091
  • Non-Patent Document 1: Norio Hayashi et al., "Automatic extraction of the cerebellum and affected brain regions from head MRI images using morphological processing," Journal of the Medical Image Information Society, vol. 21, no. 1, pp. 109-115, 2004
  • Non-Patent Document 2: Ryuro Yokoyama et al., "Automatic detection of lacunar infarct regions in brain MR images," Journal of the Japanese Society of Radiological Technology, 58(3), 399-405, 2002
  • The technique of Patent Document 2 makes it easier to observe the blood vessel portion closer to the observer. However, in a MIP image created along the line-of-sight direction the doctor wants to observe, if another blood vessel portion crosses in front of the blood vessel portion of interest, it cannot be removed, so it is not always possible to observe the desired blood vessel site in detail from the desired observation direction.
  • An object of the present invention is to make it possible to detect a lesion from a head image with high accuracy, and to observe the image while paying attention to a specific blood vessel site.
  • the invention described in claim 1 is a medical image processing apparatus
  • reconstructing means for reproducing the original image only in a candidate region of a lesion part where the calculated vector concentration degree is equal to or greater than a predetermined value;
  • deleting means for deleting, from the candidate regions of a lesion part reproduced in the head image, a false-positive candidate region that is actually a normal blood vessel.
  • the invention according to claim 8 is the medical image processing apparatus according to claim 1, wherein:
  • image control means for discriminating one or a plurality of blood vessel sites included in the extracted blood vessel image, and attaching blood vessel site information relating to the discriminated blood vessel sites to the head image.
  • image elements other than the candidate region can be excluded from the calculated feature values, so the feature values for the candidate region can be calculated accurately. Therefore, the accuracy of the detection process itself can be further improved.
  • the region in which the vector concentration degree is actually calculated is limited to the blood vessel region in which an aneurysm lesion can exist. This makes it possible to reduce the calculation time.
  • by referring to the blood vessel site information attached to the target image, the positions of one or a plurality of blood vessel sites included in the blood vessel image of the target image can be easily determined. If the blood vessel sites can be identified, each site can, for example, be identified and displayed when the target image is displayed, or specified and provided to the doctor as information when the target image is used. Therefore, the doctor can observe the target image while paying attention to a specific blood vessel site, and interpretation efficiency can be improved.
  • the position and name of each blood vessel site included in the blood vessel image of the target image can be easily identified.
  • the position and name of each blood vessel part included in the target image can be specified when the target image is used, and the information can be provided to the doctor. Therefore, the doctor can easily grasp the position and name of a specific blood vessel site in the target image.
  • although the form of the blood vessel image in the target image differs from subject (patient) to subject, aligning the images by affine transformation so that the blood vessel images substantially match allows the reference image and the blood vessel image of the target image to be associated with high accuracy. Therefore, blood vessel sites can be discriminated regardless of individual differences between subjects, and the method is highly versatile.
  • the doctor can easily identify each of one or a plurality of blood vessel portions included in the blood vessel image of the target image.
  • the doctor can easily grasp the names of one or a plurality of blood vessel parts included in the blood vessel image of the target image.
  • FIG. 1 is a diagram showing the internal configuration of the medical image processing apparatus according to the embodiment.
  • FIG. 3A is a diagram showing an example of an MRA image.
  • FIG. 3B is a diagram showing an example of an extracted image obtained by extracting a blood vessel region.
  • FIG. 4 is a diagram showing a vector concentration filter.
  • FIG. 5 is a diagram showing a cerebral aneurysm model and a blood vessel region model.
  • FIG. 6 is a diagram showing an output image example of a vector concentration filter.
  • FIG. 7A is a diagram showing a filtered image before threshold processing.
  • FIG. 7B is a diagram showing a filtered image after threshold processing.
  • FIG. 8 is a diagram for explaining a method of calculating sphericity.
  • FIG. 9A is a diagram for explaining an identification method based on a rule-based method.
  • FIG. 9B is a diagram illustrating an identification method based on a rule-based method.
  • FIG. 10 is a diagram showing an output example of a detection result of a cerebral aneurysm candidate.
  • FIG. 12A is a diagram showing an example of a reference image.
  • FIG. 12B is a diagram showing an original image used to create a reference image.
  • FIG. 13A is a diagram showing a target image and a histogram of the target image before and after performing normalization processing.
  • FIG. 13B is a diagram showing a target image of a subject different from that in FIG. 13A and a histogram of the target image before and after performing normalization processing.
  • FIG. 14A is a diagram showing a target image.
  • FIG. 14B is a diagram showing a blood vessel extraction image obtained by extracting blood vessels from the target image shown in FIG. 14A.
  • FIG. 15A is a diagram showing a blood vessel extraction image and a reference image.
  • FIG. 15B is a diagram in which the blood vessel extraction image and the reference image shown in FIG. 15A are superimposed.
  • FIG. 16A is a diagram showing landmarks in a reference image.
  • FIG. 16B is a diagram showing corresponding points in a blood vessel extraction image.
  • FIG. 17A is a diagram showing a blood vessel extraction image and a discrimination result of the blood vessel site.
  • FIG. 17B is a diagram showing a blood vessel extraction image and a discrimination result of the blood vessel site.
  • FIG. 18 is a diagram showing an example of identification display of each blood vessel part discriminated in the target image.
  • FIG. 19 is a flowchart showing a detection process according to the second embodiment.
  • FIG. 20 is a diagram showing an analysis bank of a GC filter bank.
  • FIG. 21 is a diagram showing a filter bank A (z j ).
  • FIG. 22 is a diagram showing a reconfiguration bank of a GC filter bank.
  • FIG. 23 is a diagram showing a filter bank S (z j ).
  • FIG. 1 shows the configuration of the medical image processing apparatus 10 in the present embodiment.
  • the medical image processing apparatus 10 detects a candidate region of a lesion from a medical image obtained by examination imaging, by performing image analysis on the medical image.
  • this medical image processing apparatus 10 may be provided in a medical image system in which various apparatuses are connected via a network, such as an image generation apparatus that generates medical images, a server that stores and manages medical images, and an image interpretation terminal that acquires medical images and displays them on a display means.
  • an example in which the present invention is realized by a single medical image processing apparatus 10 will be described.
  • alternatively, the functions of the medical image processing apparatus 10 may be distributed among the components of a medical image system, so that the present invention is realized by the medical image system as a whole.
  • the medical image processing apparatus 10 includes a control unit 11, an operation unit 12, a display unit 13, a communication unit 14, a storage unit 15, and a lesion candidate detection unit 16.
  • the control unit 11 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), and the like; it reads various control programs stored in the storage unit 15, performs various calculations, and comprehensively controls the processing operations of the units 12 to 16.
  • the operation unit 12 includes a keyboard, a mouse, and the like. When these are operated by an operator, an operation signal corresponding to the operation is generated and output to the control unit 11. Note that a touch panel configured integrally with the display in the display unit 13 may be provided.
  • the display unit 13 includes display means such as an LCD (Liquid Crystal Display), and, in response to instructions from the control unit 11, displays various information on the display means, such as operation screens, medical images, the detection results of lesion candidates detected from the medical images, and their detection information.
  • the communication unit 14 includes a communication interface, and transmits and receives information to and from an external device on the network. For example, the communication unit 14 performs a communication operation such as receiving a medical image generated from the image generation device and transmitting detection information of a lesion candidate in the medical image processing device 10 to an interpretation terminal.
  • the storage unit 15 stores the control program used by the control unit 11, various processing programs such as the detection process used by the lesion candidate detection unit 16, and data such as the parameters necessary for executing each program and their processing results.
  • the storage unit 15 stores medical images that are candidates for detection of lesion candidates, information on detection results, and the like.
  • the lesion candidate detection unit 16 operates in cooperation with the processing programs stored in the storage unit 15. It applies various image processing (gradation conversion processing, sharpness adjustment processing, dynamic range compression processing, etc.) to the image to be processed as necessary. The lesion candidate detection unit 16 then executes the detection process and outputs the detection result. The contents of the detection process will be described later.
  • in the following, an example of detecting a lesion candidate for an unruptured cerebral aneurysm from an MRA image (three-dimensional image), obtained by imaging the patient's head with MRI so that the blood flow in the brain is visualized, will be described.
  • A cerebral aneurysm is a bulge (dilation) that forms in the wall of an artery, caused by the blood pressure exerted on the arterial wall. If a thrombus forms inside the cerebral aneurysm, or if the cerebral aneurysm ruptures, serious conditions such as subarachnoid hemorrhage may develop.
  • FIG. 2 is a flowchart for explaining the flow of detection processing. As described above, this detection process is a process executed when the lesion candidate detection unit 16 reads a detection processing program stored in the storage unit 15.
  • MRA 3D image data is first input (step S1). Specifically, the MRA image to be processed stored in the storage unit 15 is read by the lesion candidate detection unit 16.
  • MRI is a method for obtaining an image using nuclear magnetic resonance (hereinafter referred to as NMR) in a magnetic field.
  • In NMR, an object is placed in a static magnetic field, and an RF pulse (radio wave) at the resonance frequency of the atomic nucleus to be detected in the object is then applied.
  • the resonance frequency of the hydrogen atom that constitutes the water that is abundant in the human body is usually used.
  • When the object is irradiated with the RF pulse, an excitation phenomenon occurs: the nuclear spins of the atoms that resonate at the resonance frequency are aligned in phase, and the nuclear spins absorb the energy of the RF pulse.
  • When the irradiation of the RF pulse is stopped in this excited state, a relaxation phenomenon occurs: the phases of the nuclear spins become nonuniform, and the nuclear spins release energy.
  • the time constant of the phase relaxation is called T2, and the time constant of the energy relaxation is called T1.
  • In MRI, a T1-weighted image is used to detect anatomical structures, and a T2-weighted image is used to detect lesions.
  • Images taken by the FLAIR method are T2-weighted images in which the signal from water is attenuated, and are specifically called FLAIR images.
  • MRA is a blood vessel imaging method based on MRI. In MRI, by applying a gradient magnetic field along the direction from the subject's feet to the head, energy is absorbed only in a specific slice, and the blood vessels in the slice are saturated; blood that subsequently flows into the slice is unsaturated and therefore gives a high signal. MRA is a method of imaging blood vessels with blood flow by imaging this high signal.
  • FIG. 3A shows an example of MRA image.
  • the blood vessel region with blood flow has a high signal, so the blood vessel region appears white in the MRA image.
  • the 3D image data is preprocessed as a preparation stage for detecting candidates (step S2).
  • As preprocessing, normalization processing and gradation conversion processing of the image data are performed. Normalization converts the data by linear interpolation so that all the edges of each voxel are of equal size, yielding isotropic 3D image data.
  • Next, density gradation conversion is performed on the three-dimensional image data converted into equal-sized voxels, and the signal value of each voxel is linearly converted into a density gradation of 0 to 1024. The higher the signal value, the closer the density value is to 1024; the lower the signal value, the closer it is to 0.
  • Note that the density gradation range is not limited to 0 to 1024 and can be set as appropriate.
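The linear density gradation conversion described above can be sketched as follows; the function name, the toy volume, and the flat-image fallback are illustrative, not taken from the patent:

```python
import numpy as np

def to_density_gradation(volume, levels=1024):
    """Linearly rescale raw signal values to a 0..levels density gradation.

    Illustrative sketch: the highest signal maps to `levels`, the lowest
    to 0, matching the linear conversion described in the text.
    """
    v = volume.astype(np.float64)
    lo, hi = v.min(), v.max()
    if hi == lo:                       # flat image: map everything to 0
        return np.zeros_like(v)
    return (v - lo) / (hi - lo) * levels

vol = np.array([[0, 250], [500, 1000]], dtype=np.int32)
dens = to_density_gradation(vol)       # highest raw signal -> 1024, lowest -> 0
```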
  • a blood vessel image region is extracted from the three-dimensional MRA image (step S3).
  • threshold processing is performed, and the MRA image is binarized.
  • In the binarized image, the blood vessel region appears white and the other regions appear black.
  • Since the blood vessel region has a value different from that of the other regions, the blood vessel region is extracted by the region growing (region expansion) method.
  • First, a starting voxel (the voxel with the highest density value) is determined. The 26 voxels neighboring it are examined in the 3D MRA image before binarization, and the neighboring voxels satisfying a certain judgment condition (for example, a density value of 500 or more) are determined to be blood vessel region. The same processing is then repeated for each neighboring voxel determined to be blood vessel region. In this way, the blood vessel region can be extracted by sequentially collecting the voxels satisfying the judgment condition while expanding the region.
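A minimal sketch of this region growing step, assuming 26-neighbour connectivity and the example judgment condition (density value of 500 or more); all names and the toy volume are illustrative:

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, threshold=500):
    """Extract a connected blood-vessel region by 26-neighbour region growing.

    Starting from the seed voxel, every 26-neighbour whose density value
    satisfies the judgment condition (>= threshold) is added to the region,
    and its own neighbours are then examined in turn.
    """
    region = np.zeros(volume.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    offsets = [(dz, dy, dx)
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]          # the 26 neighbours
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not region[n] and volume[n] >= threshold:
                region[n] = True
                queue.append(n)
    return region

# toy 3x3x3 volume: a bright "vessel" along one line, dark elsewhere
vol = np.zeros((3, 3, 3), dtype=np.int32)
vol[1, 1, :] = 800                     # vessel voxels
mask = region_grow(vol, seed=(1, 1, 0))
```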
  • Fig. 3B shows the blood vessel region extracted from the MRA image of Fig. 3A. The extracted blood vessel region is white (density value 1024) and the other regions are black (density value 0).
  • Next, the extracted 3D MRA image of the blood vessel region is filtered using a vector concentration filter as shown in Fig. 4, and primary candidate regions of a cerebral aneurysm are detected from the processed image output by the filtering (step S4).
  • The vector concentration filter calculates the vector concentration degree at each voxel, and outputs an image in which the calculated vector concentration value is the value of that voxel.
  • The vector concentration degree focuses on the direction of the gradient vectors of density change, and evaluates how strongly the gradient vectors in a neighboring region concentrate on a given point of interest.
  • FIG. 5 shows a cerebral aneurysm model and a blood vessel model.
  • If the extracted blood vessel region exists within a sphere of radius R centered on the voxel of interest P, the vector concentration degree is calculated.
  • The vector concentration degree is calculated by the following Formula 1:
  • C(P) = (1/M) Σj cos θj ... (Formula 1)
  • Here, the angle θj is the angle between the direction vector from the voxel of interest P to the peripheral voxel Qj and the direction of the gradient vector at the peripheral voxel Qj, and M is the number of peripheral voxels Qj included in the calculation.
  • As a result, a filtered image as shown in Fig. 6 is output, with the vector concentration degree as the voxel value. Since the vector concentration degree is output in the range 0 to 1, in Fig. 6 the higher the vector concentration degree (the closer to 1), the whiter the image appears.
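The averaging in Formula 1 can be sketched as follows; the sign convention used here (gradient vectors aimed straight at the point of interest give cos θ = 1, so a perfectly converging gradient field scores 1) is an assumption of this sketch:

```python
import numpy as np

def vector_concentration(p, neighbours, gradients):
    """Vector concentration degree at the voxel of interest p.

    For each peripheral voxel Qj, cos(theta_j) compares the gradient
    direction at Qj with the line joining Qj and p; the M cosines are
    then averaged, as in Formula 1.
    """
    cosines = []
    for q, g in zip(neighbours, gradients):
        d = p - q                               # direction from Qj toward p
        nd, ng = np.linalg.norm(d), np.linalg.norm(g)
        if nd == 0 or ng == 0:
            continue
        cosines.append(np.dot(d, g) / (nd * ng))
    return sum(cosines) / len(cosines)

p = np.array([0.0, 0.0, 0.0])
qs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
      np.array([0.0, 0.0, 1.0]), np.array([-1.0, 0.0, 0.0])]
gs = [p - q for q in qs]                        # gradients aimed straight at p
c = vector_concentration(p, qs, gs)             # perfect concentration
```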
  • FIG. 7A shows a part of the filtered image shown in FIG. 6. When threshold processing is applied to it, a binarized image as shown in FIG. 7B is obtained.
  • The image regions that appear white in the binarized image, that is, the regions with a large vector concentration degree, are output as the primary candidate regions.
  • the threshold for binarization is determined using a teacher image in which the presence of a cerebral aneurysm is known in advance.
  • Specifically, a threshold value is determined so that only the image regions of cerebral aneurysms whose presence has already been confirmed are extracted. The threshold may also be obtained by statistical analysis.
  • For example, a density histogram of the filtered image is obtained, and the density value at a certain area ratio p%, counted from the maximum density value side of the histogram, is determined as the threshold value.
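The area-ratio thresholding described above amounts to taking a percentile from the maximum-value side of the histogram; a minimal sketch, where the value of p and the dummy image are illustrative:

```python
import numpy as np

def threshold_at_area_ratio(filtered, p=2.0):
    """Threshold chosen so that the top p% of voxel values (counted from
    the maximum density side) are kept; p is an illustrative parameter,
    not a value taken from the patent.
    """
    return np.percentile(filtered, 100.0 - p)

img = np.linspace(0.0, 1.0, 101)       # dummy filtered-image values
t = threshold_at_area_ratio(img, p=10) # keeps the highest 10% of values
binary = img >= t
```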
  • Next, feature amounts indicating the features of each primary candidate region are calculated (step S5). Since a cerebral aneurysm has a certain size and a spherical shape, in this embodiment the size of the candidate region, its sphericity, and the average of the vector concentration degrees of the voxels in the region are calculated as feature amounts. However, as long as the features can characterize a cerebral aneurysm, there is no particular limitation on which ones are used; the maximum value of the vector concentration degree or the standard deviation of the voxel density values may be calculated instead.
  • As the size, the volume of the voxels constituting the candidate region is calculated. In practice, the number of voxels, rather than the actual volume, is calculated and used as an index value of the volume in the subsequent calculations.
  • The sphericity feature amount is obtained as follows: a sphere having the same volume as the primary candidate region is placed so that the centroid of the primary candidate region coincides with the centroid of the sphere, and the sphericity is the ratio of the volume of the part of the primary candidate region falling inside the sphere to the total volume of the primary candidate region.
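A sketch of this overlap-based sphericity, using the voxel count as the volume index as described above; the toy candidate region is illustrative:

```python
import numpy as np

def sphericity(mask):
    """Fraction of the candidate region's voxels that fall inside a sphere
    of equal volume centered on the region's centroid.
    """
    coords = np.argwhere(mask)
    volume = len(coords)                       # voxel count as volume index
    centroid = coords.mean(axis=0)
    radius = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    dist = np.linalg.norm(coords - centroid, axis=1)
    return float(np.mean(dist <= radius))

# a solid 5x5x5 cube: fairly spherical, but its corners stick out
mask = np.zeros((7, 7, 7), dtype=bool)
mask[1:6, 1:6, 1:6] = True
s = sphericity(mask)                           # only the 8 corners fall outside
```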
  • In step S6, secondary detection is performed based on the feature amounts.
  • a cerebral aneurysm is distinguished from a blood vessel that is a normal tissue.
  • Here, an example using a rule-based discriminator is shown, but the method is not limited to this; any method capable of discrimination can be used, for example an artificial neural network, a support vector machine, or discriminant analysis.
  • First, profiles indicating the relationship between sphericity and size, and between the average vector concentration degree and size, are created.
  • The range to be detected as a secondary candidate (the range surrounded by the solid lines in FIGS. 9A and 9B) is determined in advance, and the above profiles are created from the variable data of each feature amount of the primary candidate to be identified.
  • If the variable data of each feature amount of a primary candidate lies within the detection range, the candidate is identified as a true positive; otherwise it is identified as a false positive. In other words, only primary candidates whose feature-amount variable data are distributed within the detection range are secondarily detected.
  • the detection range is determined using a teacher image that is previously known to be a cerebral aneurysm or a normal blood vessel.
  • Specifically, the size, sphericity, and average vector concentration degree are obtained from the teacher images of cerebral aneurysms and normal blood vessels, and profiles of sphericity versus size and of average vector concentration degree versus size are created from these feature amounts.
  • In these profiles, one marker type indicates true-positive variable data, that is, teacher data known to be cerebral aneurysms, and the other marker type indicates false-positive variable data, that is, teacher data known to be normal blood vessels.
  • The range surrounded by the four dotted lines indicating the threshold values is the detection range, and a detection range is determined for each of the "size-sphericity" and "size-average vector concentration" profiles.
  • A primary candidate whose variable data are located within the detection ranges is secondarily detected as a cerebral aneurysm candidate.
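The rule-based secondary detection reduces to checking each feature amount against its pre-determined detection range; the range values and candidate figures below are illustrative, not the patent's thresholds:

```python
def is_true_positive(features, ranges):
    """Rule-based check: a primary candidate survives secondary detection
    only if every feature amount lies inside its detection range.
    """
    return all(lo <= features[name] <= hi for name, (lo, hi) in ranges.items())

# hypothetical detection ranges, as if learned from teacher images
ranges = {"size": (10, 500), "sphericity": (0.6, 1.0), "mean_vc": (0.5, 1.0)}
cand_a = {"size": 40, "sphericity": 0.8, "mean_vc": 0.7}   # aneurysm-like
cand_b = {"size": 40, "sphericity": 0.3, "mean_vc": 0.7}   # elongated vessel
```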
  • tertiary detection is further performed on the secondary detection candidates (step S7).
  • discriminant analysis is performed using three feature quantities.
  • Any of the Mahalanobis distance, principal component analysis, a linear discriminant function, etc. can be applied; here, a method different from the one used at the time of secondary detection is applied.
  • The candidate regions remaining after the tertiary detection are taken as the final cerebral aneurysm candidates, and the detection result is output (step S8).
  • FIG. 10 shows an example of a detection result displayed on the display unit 13 as a detection result.
  • In this example, marker information (arrows in FIG. 10) indicating the candidate regions of the tertiarily detected cerebral aneurysms is displayed on a MIP image created from the 3D MRA image.
  • A MIP image is a 2D image created by applying MIP processing to the 3D MRA image data, allowing the structures in the image to be observed three-dimensionally.
  • MIP processing, called the maximum intensity projection method, projects parallel rays from a certain direction and reflects the maximum voxel brightness (signal value) along each ray onto the projection surface, creating an image that enables 3D observation.
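For axis-aligned parallel rays through a voxel array, the MIP processing described above reduces to a maximum along the projection axis; the toy volume is illustrative:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: keep, for each parallel ray along
    `axis`, the maximum voxel value it passes through.
    """
    return volume.max(axis=axis)

vol = np.zeros((3, 2, 2), dtype=np.int32)
vol[0, 0, 0] = 100                     # bright voxel on the near slice
vol[2, 1, 1] = 900                     # brighter voxel deeper along its ray
img = mip(vol, axis=0)                 # 2x2 projection image
```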
  • Information regarding detection of cerebral aneurysm candidates may be output and used as reference information at the time of diagnosis by a doctor.
  • Further, the color of the marker information may be changed according to the vector concentration degree; for example, the arrow marker may be colored red if the vector concentration degree is 0.8 or higher, yellow if it is 0.7 to 0.8, blue if it is 0.5 to 0.7, and so on.
  • This allows the doctor to easily grasp visually how strongly spherical, and thus how aneurysm-like, each candidate region is.
  • FIG. 10 shows an example in which the position of a cerebral aneurysm candidate can be identified by marker information.
  • Alternatively, the cerebral aneurysm candidate region may be displayed so as to be distinguishable from the other regions, for example by volume rendering.
  • The volume rendering method performs three-dimensional display by giving color information and opacity to each voxel in each partial region. By setting the opacity high for the region of interest and low for the other regions, the region of interest can be emphasized. Therefore, at the time of display, an opacity is set for each region, and color information corresponding to the opacity is set.
  • Alternatively, instead of a MIP image, the detection result may be shown using the filtered image obtained by the vector concentration filter (see FIG. 6). For example, regions with a low vector concentration degree may be colored blue and regions with a high degree colored red, so that the doctor can visually grasp the calculated vector concentration degrees.
  • Further, a filtered image in which the color of the blood vessel region is changed according to the vector concentration degree may be displayed superimposed at the corresponding position of the MIP image. In this way, vector concentration information can be provided as reference information when the doctor looks for a cerebral aneurysm.
  • This blood vessel site discrimination process is a software process realized in cooperation with the control unit 11 and the blood vessel site discrimination processing program stored in the storage unit 15.
  • In the blood vessel site discrimination process, one or a plurality of blood vessel sites included in the blood vessel image are discriminated from the blood vessel image appearing in the 3D MRA image obtained by imaging the head.
  • MRA is a kind of MRI blood vessel imaging method.
  • By applying a gradient magnetic field in the direction from the subject's feet to the head (this direction is called the body axis), energy can be absorbed only in a specific slice (section).
  • The blood vessels in the slice are saturated by the RF pulses. Since blood is always flowing, the signal intensity in the slice increases as unsaturated blood flows in over time.
  • MRA is a method of imaging blood vessels with blood flow by imaging this high signal.
  • the reference image is a blood vessel image on the three-dimensional MRA image in which the positions and names of one or a plurality of blood vessel parts are preset.
  • A blood vessel site refers to an anatomical classification of the blood vessels, and the position of a blood vessel site refers to the positions of the voxels belonging to that site.
  • FIG. 12A shows the positions and names of eight blood vessel parts included in the blood vessel image (the anterior cerebral artery, right middle cerebral artery, left middle cerebral artery, right internal carotid artery, left internal carotid artery, right posterior cerebral artery, left posterior cerebral artery, and basilar artery).
  • In the figure, only the names of three blood vessel parts (the right middle cerebral artery, anterior cerebral artery, and basilar artery) are shown, but names are set for all eight blood vessel parts.
  • The reference image g2 is generated from the three-dimensional data of a head MRA image g1 selected for the reference image, as shown in FIG. 12B.
  • Axial images (two-dimensional tomographic images obtained by cutting the voxels in a plane perpendicular to the body axis) are created at intervals from this three-dimensional data.
  • In each axial image, the voxels belonging to each blood vessel part are designated by manual operation, and the name of the blood vessel part is further designated.
  • In the reference image g2, landmark voxels are set at characteristic points such as inflection points, end points, and intersections of the blood vessel parts.
  • The landmarks are used for alignment between the target image and the reference image, which will be described in detail later.
  • Landmarks are also set by manual operation based on the doctor's indications.
  • the reference image g2 may be created by the control unit 11 of the medical image processing apparatus 10, or an externally created image may be stored in the storage unit 15.
  • In FIG. 12A each blood vessel part is identified by color for display purposes, but the actual reference image g2 is a binarized image with a black background (low signal values) and a white blood vessel image (high signal values).
  • The position information and name information of the voxels belonging to each blood vessel part, together with the position information of the landmark voxels, are attached to the reference image or stored in the storage unit 15 as a separate file associated with the reference image.
  • In the blood vessel part discrimination process, the control unit 11 first performs normalization processing on the three-dimensional MRA image to be discriminated (hereinafter referred to as the target image) (step S11).
  • Depending on the imaging conditions, the voxels may be rectangular parallelepipeds rather than cubes, and the maximum and minimum voxel values may differ from image to image. Normalization processing is therefore performed to unify these preconditions for the target image.
  • First, the target image is converted by linear interpolation so that all the sides constituting each voxel have the same size.
  • Next, a histogram is created from all the voxel values of the target image, and all the voxel values are linearly converted to 1024 gradations, with the top 5% of the histogram assigned the value 1024 and the minimum value assigned 0.
  • The density gradation range is not limited to 0 to 1024 and can be set as appropriate.
  • FIG. 13A and FIG. 13B show an example of normalization processing.
  • The target image g3 shown in FIG. 13A and the target image g4 shown in FIG. 13B were obtained from different patients. The histogram h1 (see FIG. 13A) obtained from the target image g3 and the histogram h3 (see FIG. 13B) obtained from the target image g4 share the common feature of having two local maxima, but differ considerably in the range of values, so the overall histogram characteristics are different. When the target images g3 and g4 having such histogram characteristics are subjected to the above normalization processing and their histograms are created again, the histogram h2 shown in FIG. 13A and the histogram h4 shown in FIG. 13B are obtained. As histograms h2 and h4 show, the histogram characteristics of the target images g3 and g4 become almost the same through the normalization processing.
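The gray-level part of the normalization described above can be sketched as follows; this is an illustrative sketch, not the patented implementation. The 5% fraction and the 0–1024 range come from the text, the function name `normalize_volume` is introduced here, and the isotropic-voxel resampling step is omitted:

```python
import numpy as np

def normalize_volume(vol, top_fraction=0.05, max_level=1024):
    """Linearly rescale voxel values so that the top 5% of the
    histogram maps to max_level and the minimum maps to 0."""
    flat = np.sort(vol.ravel())
    vmin = flat[0]
    # value at the boundary of the top `top_fraction` of the histogram
    vtop = flat[int(np.ceil((1.0 - top_fraction) * (flat.size - 1)))]
    scaled = (vol.astype(np.float64) - vmin) / max(vtop - vmin, 1e-12) * max_level
    # values above the top-5% boundary saturate at max_level
    return np.clip(scaled, 0, max_level)
```

After this conversion, volumes from different patients share approximately the same histogram range, which is the precondition the text establishes for the later alignment steps.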
  • Next, the control unit 11 extracts a blood vessel image from the normalized target image (step S12).
  • For the extraction, threshold processing is performed on the target image to binarize it.
  • In the head MRA image, the blood vessel image appears white and the other tissue parts appear black, so in the binary image the blood vessel image has a different value from the other regions. The region having the same signal value as the blood vessel image is therefore extracted by the region growing method.
  • The binary image is used to determine the starting voxel (the whitest, highest-density voxel), and this voxel is taken as the starting point in the target image before binarization.
  • The 26 voxels neighboring the starting voxel are examined, and each neighboring voxel that satisfies a certain judgment condition (for example, a density value of 500 or more) is determined to belong to the blood vessel image; the same examination is repeated from the newly determined voxels.
  • FIG. 14B shows a blood vessel extraction image g6 obtained by extracting the blood vessel image in this way.
  • The blood vessel extraction image g6 is obtained by extracting the blood vessel image from the normalized target image g5 shown in FIG. 14A, and is a binarized image in which the blood vessel region is white (density value 1024) and the other regions are black (density value 0).
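The region growing step above can be sketched as follows; a minimal sketch, not the patented implementation. The 26-neighborhood, the density threshold of 500, and the 0/1024 output values follow the text, while the function name `region_grow` is introduced here:

```python
import numpy as np
from collections import deque

def region_grow(vol, threshold=500):
    """Extract the blood vessel image: start from the brightest voxel
    and add every 26-connected neighbour whose value meets the
    threshold, producing a binary (0 / 1024) extraction image."""
    seed = np.unravel_index(np.argmax(vol), vol.shape)
    out = np.zeros(vol.shape, dtype=np.int16)
    out[seed] = 1024
    queue = deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    if dx == dy == dz == 0:
                        continue
                    p = (x + dx, y + dy, z + dz)
                    if all(0 <= p[i] < vol.shape[i] for i in range(3)):
                        if out[p] == 0 and vol[p] >= threshold:
                            out[p] = 1024  # mark as blood vessel
                            queue.append(p)
    return out
```

Note that bright voxels not connected to the seed (e.g. noise far from the vessel tree) are never reached, which is the point of growing rather than simple thresholding.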
  • Next, in order to make the position of the blood vessel image in the blood vessel extraction image roughly coincide with the position of the blood vessel image in the reference image, the control unit 11 performs alignment based on the centroid position of each image (step S13).
  • The centroid position is the position of the voxel that is the centroid of all the voxels belonging to the blood vessel image.
  • A specific description will be given with reference to FIGS. 15A and 15B.
  • FIG. 15A shows the blood vessel extraction image and the reference image superimposed before alignment. From FIG. 15A, it can be seen that simply superimposing the blood vessel extraction image and the reference image does not make the positions of the respective blood vessel images coincide.
  • The control unit 11 therefore obtains the centroid position P (x1, y1, z1) of the blood vessel extraction image and the centroid position Q (x2, y2, z2) of the reference image, as shown in FIG. 15A.
  • The blood vessel extraction image or the reference image is then translated so that the centroid positions P and Q coincide.
  • FIG. 15B shows the result of matching the centroid positions P and Q by translation. From FIG. 15B it can be seen that the blood vessel image in the blood vessel extraction image and the blood vessel image in the reference image roughly coincide. Further, in order to perform alignment with high accuracy, the control unit 11 applies rigid body deformation to the blood vessel extraction image (step S14).
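The coarse centroid alignment is a single translation; a minimal sketch, with the point-set representation and the function name `align_centroids` introduced here:

```python
import numpy as np

def align_centroids(moving, reference):
    """Translate the `moving` vessel-voxel coordinates (N, 3 array)
    so that their centroid P matches the reference centroid Q."""
    P = moving.mean(axis=0)     # centroid of the extraction image
    Q = reference.mean(axis=0)  # centroid of the reference image
    return moving + (Q - P)     # rigid translation, no rotation yet
```

Only after this rough match does the finer rigid deformation of step S14 become cheap, since the remaining misalignment is small.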
  • First, a corresponding point search using cross-correlation coefficients is performed as preprocessing for the rigid body deformation. In this method, a plurality of corresponding points are set in each of the two images to be aligned, and one of the images is rigidly deformed so that the corresponding points set in the two images coincide.
  • Here, a landmark voxel determined in advance in the reference image and a voxel of the blood vessel extraction image having locally similar image characteristics are set as a pair of corresponding points. The similarity of image characteristics is judged on the basis of the cross-correlation coefficient obtained for the blood vessel extraction image and the reference image.
  • Specifically, corresponding points for the 12 landmarks set in advance in the blood vessel image of the reference image g7 are searched for in the blood vessel extraction image g8.
  • The starting point is the voxel at the position in the blood vessel extraction image g8 corresponding to each landmark of the reference image g7.
  • Voxels in the range of −10 to +10 from the starting point in the X-axis, Y-axis, and Z-axis directions (a cubic region of 21 × 21 × 21) are searched, and the cross-correlation coefficient C (hereinafter referred to as correlation value C) with the landmark is calculated by Equation 2.
  • In Equation 2, A(i, j, k) represents the voxel value of the reference image g7 and B(i, j, k) represents the voxel value of the blood vessel extraction image g8.
  • Ā and B̄ are the average values of the voxel values in the search regions of the reference image g7 and the blood vessel extraction image g8, respectively, and are expressed by Equations 3 and 4.
  • The correlation value C has a value range of −1.0 to 1.0; the closer it is to the maximum value 1.0, the more similar the image characteristics of the reference image g7 and the blood vessel extraction image g8.
  • The position of the voxel having the largest correlation value C is set as the corresponding point in the blood vessel extraction image g8 for that landmark of the reference image g7.
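The corresponding-point search can be sketched as below; an illustrative sketch only. The ±10 search cube follows the text (scaled down in the usage test), while the local window half-size `win` and the function names are assumptions, since the text does not give the size of the region over which C is computed:

```python
import numpy as np

def correlation_value(a, b):
    """Normalized cross-correlation of two equal-shaped voxel regions;
    ranges from -1.0 to 1.0 (1.0 = most similar characteristics)."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_corresponding_point(ref, tgt, landmark, half=10, win=3):
    """Search a (2*half+1)^3 cube around `landmark` in the target image
    for the voxel whose surrounding window correlates best with the
    reference landmark window; returns (position, correlation)."""
    lx, ly, lz = landmark
    ref_win = ref[lx-win:lx+win+1, ly-win:ly+win+1, lz-win:lz+win+1]
    best, best_c = landmark, -2.0
    for dx in range(-half, half + 1):
        for dy in range(-half, half + 1):
            for dz in range(-half, half + 1):
                x, y, z = lx + dx, ly + dy, lz + dz
                w = tgt[x-win:x+win+1, y-win:y+win+1, z-win:z+win+1]
                if w.shape != ref_win.shape:
                    continue  # window falls outside the volume
                c = correlation_value(ref_win, w)
                if c > best_c:
                    best, best_c = (x, y, z), c
    return best, best_c
```

In the sketch the voxel with the largest correlation value is taken as the corresponding point, exactly as described in the text.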
  • Next, the control unit 11 applies rigid body deformation to the blood vessel extraction image g8 on the basis of the corresponding points, thereby aligning the blood vessel image of the blood vessel extraction image g8 with the blood vessel image of the reference image g7.
  • Rigid body deformation is one of the affine transformations, in which coordinate transformation is performed by rotation and translation.
  • Here, the alignment is performed by an ICP (Iterative Closest Point) algorithm that repeats rigid body deformation using the least squares method so that the corresponding points of the blood vessel extraction image g8 come to match the landmarks of the reference image g7.
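One standard way to realize the least-squares rigid transform used inside each ICP iteration is the Kabsch/Procrustes solution sketched below; this is a common textbook formulation, offered here as an illustration and not necessarily the patent's exact implementation:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (rotation R, translation t) that
    maps corresponding points src -> dst (Kabsch solution).
    src, dst are (N, 3) arrays of matched points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # sign correction keeps R a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

ICP would alternate this fit with re-matching of closest points; with the corresponding points already fixed by the correlation search, a single fit gives the least-squares rotation and translation.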
  • Next, for each voxel of interest in the blood vessel extraction image, the squared Euclidean distance is obtained to all the voxels belonging to each blood vessel part of the reference image, and it is determined that the voxel of interest belongs to the blood vessel part of the reference-image voxel at the shortest Euclidean distance. The name of the blood vessel part for the voxel of interest is determined from the name of the blood vessel part set for that reference-image voxel.
  • Blood vessel part information indicating the position of each voxel and the name of the blood vessel part to which it belongs is then generated by the control unit 11 and attached to the target image (step S16). For example, if the voxel at position (x3, y3, z3) is determined to be part of the anterior cerebral artery, blood vessel part information indicating that the voxel at position (x3, y3, z3) belongs to the blood vessel named "anterior cerebral artery" is appended to the header area of the target image.
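The nearest-neighbour labeling step can be sketched as follows; a brute-force illustration (real volumes would use a spatial index), with the function name and data layout introduced here:

```python
import numpy as np

def label_vessel_parts(target_pts, ref_pts, ref_names):
    """Assign each target voxel the blood-vessel-part name of the
    reference voxel at the shortest (squared) Euclidean distance.
    target_pts: (M, 3); ref_pts: (N, 3); ref_names: length-N list."""
    labels = []
    for p in target_pts:
        d2 = ((ref_pts - p) ** 2).sum(axis=1)  # squared distances
        labels.append(ref_names[int(np.argmin(d2))])
    return labels
```

The resulting per-voxel names are exactly what the text stores as blood vessel part information in the target image header.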
  • FIG. 17A and FIG. 17B show results of discriminating the blood vessel parts.
  • The blood vessel extraction image g9 shown in FIG. 17A and the blood vessel extraction image g11 shown in FIG. 17B were obtained from different subjects. The image g10 shown in FIG. 17A and the image g12 shown in FIG. 17B are images in which each blood vessel part discriminated from the blood vessel extraction images g9 and g11 is identified and displayed by changing the color for each part.
  • It can be seen that the same blood vessel parts are identified in the blood vessel images of the images g10 and g12 despite their different forms (positions, sizes, extension directions, and so on of the blood vessels).
  • The above is the flow from discriminating the blood vessel parts in the target image to attaching the blood vessel part information.
  • When a display instruction operation for such a target image is performed via the operation unit 12, the control unit 11 performs MIP processing on the target image to generate a MIP image and displays it on the display unit 13.
  • Hereinafter, the display of MIP images is referred to as MIP display.
  • MIP is a method of creating a two-dimensional image by projecting parallel rays from a certain direction and reflecting, on the projection plane, the maximum luminance (voxel value) among the voxels on each projection line.
  • This projection direction is the line-of-sight direction in which the doctor wishes to observe.
  • The doctor can freely choose the observation direction, and the control unit 11 creates a MIP image from the target image according to the observation direction instructed through the operation unit 12 and displays it on the display unit 13.
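For an axis-aligned viewing direction, the MIP described above reduces to taking the maximum voxel value along one axis; a minimal sketch (arbitrary oblique directions would additionally require resampling, which is omitted here):

```python
import numpy as np

def mip(volume, axis=2):
    """Maximum intensity projection: project the maximum voxel value
    along the chosen line-of-sight axis onto a 2D image."""
    return volume.max(axis=axis)
```

Because the per-voxel blood vessel part information survives this projection (each projected pixel can be traced back to the voxel that produced it), the identification display can recolor the MIP for any observation direction, as the text notes.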
  • When, with the target image displayed as a MIP, an instruction operation for identifying and displaying the blood vessel parts is further performed, the control unit 11 refers to the blood vessel part information attached to the target image and, based on it, performs display control so that each blood vessel part can be identified in the blood vessel image of the MIP image (step S17).
  • For example, the control unit 11 sets a color for each blood vessel part at the voxels determined to belong to that part, such as blue for the voxels belonging to the anterior cerebral artery and green for the voxels belonging to the basilar artery, and the set colors are reflected in the MIP image of the target image.
  • In addition, an annotation image indicating the name of each blood vessel part is created and combined with the corresponding blood vessel part of the MIP image.
  • FIG. 18 shows an example of the identification display.
  • The MIP image g13 is the target image displayed as a MIP viewed from above the head.
  • When identification display is instructed, the identification display image g14 is displayed.
  • The identification display image g14 identifies and displays each blood vessel part by, for example, assigning a different color to each of the eight blood vessel parts in the blood vessel extraction image.
  • When the doctor selects a blood vessel part in the identification display image g14, the display control of the control unit 11 displays an annotation indicating the name of the blood vessel part, such as "basilar artery", in association with the selected blood vessel part.
  • When observation from the side of the head is instructed, the MIP image g15 corresponding to the side direction is created by the control unit 11 and displayed.
  • In the side view, the blood vessel images on the near side overlap the blood vessel images on the far side, making observation difficult.
  • Even in such a MIP image g15, it is possible to identify and display the blood vessel parts.
  • Since the blood vessel part information makes it possible to determine which voxel corresponds to which blood vessel part, the voxels to be identified and displayed can be specified regardless of changes in the observation direction of the MIP display.
  • The identification display image corresponding to the MIP image g15 is the image g16.
  • From this identification display image g16, it is also possible to extract and display only one of the blood vessel parts.
  • That is, a blood vessel selection image g17, which is a MIP image of the target image in which only the luminance of the selected blood vessel part is projected, is displayed.
  • In the blood vessel selection image g17, only the selected blood vessel part is displayed as a MIP and the other blood vessel parts are not displayed, so that the doctor can observe only the blood vessel part of interest.
  • These display images g13 to g17 can be displayed side by side on the same screen, or the display can be switched so that one image is shown per screen.
  • In the side-by-side display, the MIP display image g13, the identification display image g14, and the blood vessel selection image g17 can be compared and observed.
  • In the single-image display, each of the images g13 to g17 can be observed in full-screen display, which makes it easier to observe details.
  • Three-dimensional MRA image data for 20 patients were obtained. These image data have a matrix size of 256 × 256, a spatial resolution of 0.625 to 0.78 mm, and a slice thickness of 0.5 to 1.2 mm, and contain seven unruptured cerebral aneurysms. The unruptured cerebral aneurysms were determined by an experienced neurosurgeon.
  • First, these three-dimensional MRA image data were converted by linear interpolation into image data of equal voxel size, and normalization was performed. As a result of this normalization, all the image data became equal-voxel image data with a matrix size of 400 × 400 × 200 and a spatial resolution of 0.5 mm.
  • Next, the feature values of size (volume), sphericity, and vector concentration were calculated for the cerebral aneurysm regions (true positives) determined by the doctor, and the same feature values were calculated for normal blood vessel regions (false positives).
  • These feature values were used to determine the detection range of the rule-based method, which is the discriminator for the secondary detection, and as teacher data for the discriminant analysis. Similarly, they were used as teacher data for the discriminator in the tertiary detection.
  • As described above, cerebral aneurysm candidates, which have the characteristic that gradient vectors concentrate at their centers, are accurately detected using the vector concentration filter, and the detected information can be provided to the doctor. This makes it possible to prevent oversights caused by fatigue during the doctor's interpretation work, and an improvement in diagnostic accuracy can be expected.
  • Moreover, since the vector concentration filter is applied not to the MRA image itself but to the extracted image of the blood vessel region, the processing time required for the filter processing can be shortened.
  • In the embodiment described above, cerebral aneurysm candidates are detected using a three-dimensional MRA image, but detection may also be performed using a two-dimensional MRA image.
  • In that case, the size of a cerebral aneurysm candidate is the number of pixels, circularity is used in place of sphericity, and two-dimensional feature values are calculated.
  • Furthermore, not only MRA images but also MRI images obtained by other imaging methods may be used, such as contrast-enhanced MRA images obtained by imaging the blood vessel region using a contrast agent.
  • Images in which the blood vessel region is imaged by other imaging apparatuses, such as CTA (Computed Tomography Angiography) or DSA (Digital Subtraction Angiography), may also be used.
  • The detection target is not limited to aneurysms; the present invention can be applied to any lesion having a spherical shape.
  • According to the blood vessel part discrimination process described above, each blood vessel part included in the blood vessel image of the target image is discriminated, and its position and name information are attached to the target image as blood vessel part information. Therefore, when the target image is displayed as a MIP, each blood vessel part can easily be identified and displayed based on the blood vessel part information. The doctor can thus observe the target image while paying attention to a specific blood vessel part in the target image, and interpretation efficiency can be improved.
  • Since each blood vessel part can be identified and displayed, the doctor can easily grasp the position and name of each blood vessel part, and the efficiency of the interpretation work can be improved.
  • In addition, in response to a selection operation on any of the identified blood vessel parts, a MIP image of the target image in which only the selected blood vessel part is extracted and displayed is generated and displayed.
  • The doctor can thus observe only the blood vessel part of interest. Overlaps among a plurality of blood vessel parts can be eliminated, and a specific blood vessel part, such as a site where multiple aneurysms occur, can be singled out for observation.
  • In general, the morphology of the major blood vessel parts (length, extension direction, thickness, and so on of the blood vessels) is roughly the same across different subjects (patients), but the shapes of the minor blood vessels vary from person to person, so the overall shape of the blood vessel parts differs depending on the subject.
  • Even so, because the discrimination is based on alignment with the reference image, the blood vessel parts can be discriminated uniformly regardless of the subject, and the method is highly versatile.
  • Furthermore, since the alignment is performed in two stages (based on the centroid position and based on rigid body deformation), the accuracy of discriminating the blood vessel parts can be improved.
  • The coarse alignment by centroid position also shortens the processing time required for the rigid body deformation, so the processing efficiency is good.
  • Not only MRA images but also MRI images obtained by other imaging methods, such as contrast-enhanced MRA images obtained by imaging the blood vessels using a contrast agent, may be used.
  • Images obtained by imaging the blood vessels with other imaging apparatuses, such as CTA (Computed Tomography Angiography) or DSA (Digital Subtraction Angiography), may also be used.
  • the medical image processing apparatus according to the second embodiment has the same configuration as the medical image processing apparatus 10 according to the first embodiment, and only the operation is different. Therefore, the same components as those in the medical image processing apparatus 10 (see FIG. 1) according to the first embodiment are denoted by the same reference numerals, and the operation of the medical image processing apparatus 10 in the second embodiment will be described below.
  • FIG. 19 is a flowchart showing the detection process according to the second embodiment.
  • In the detection process, the MRA 3D image data is first input (step S101), and the 3D image data is preprocessed (step S102).
  • Next, the image region of the blood vessels is extracted from the 3D image data (step S103). Since steps S101 to S103 are the same processing as steps S1 to S3 described with reference to FIG. 2 in the first embodiment, detailed description is omitted here.
  • Next, primary candidate regions of cerebral aneurysms are detected by the GC filter bank using the extracted three-dimensional MRA image of the blood vessel region (step S104).
  • The GC filter bank is a combination of various filter processes and is divided into an analysis bank and a reconstruction bank.
  • The analysis bank performs multi-resolution analysis on the original image (the 3D MRA image of the blood vessel region) to create images at different resolution levels (hereinafter referred to as partial images), and weight images are created from these partial images.
  • The reconstruction bank weights each partial image with a weight image and then reconstructs the original image from the weighted partial images.
  • FIG. 20 shows the analysis bank.
  • The analysis bank takes the 3D MRA image of the blood vessel region as the original image S, performs filter processing in the filter bank A(z^j), and sequentially creates partial images at each resolution level j.
  • The filter bank A(z^j) decomposes the image S into partial images S, Wz, Wy, and Wx through filter processing with the filters H(z^j) and G(z^j), as shown in the figure.
  • For S, the smoothing filter H(z^j) is applied in each of the x, y, and z directions.
  • The smoothing filter H(z^j) is expressed by Equation 7 below, where z denotes the z-transform (the same applies to Equations 8 to 10, which express the other filters).
  • The partial images Wz, Wy, and Wx at each resolution level are filtered by the vector concentration filter GC, and the vector concentration at each resolution level is calculated. Since the method for calculating the vector concentration has been described above, its description is omitted here.
  • The calculated vector concentration is input to the neural network NN.
  • The neural network NN is designed to output a value in the range of 0 to 1, such that the higher the possibility of a cerebral aneurysm, the closer the output value is to 1, and the lower the possibility, the closer it is to 0.
  • When the output values are obtained from the neural network NN, the determination unit NM generates and outputs a weight image V based on the output values.
  • In the weight image, the voxel value is set to 1 for voxels whose output value is larger than a certain threshold (here, 0.8), and to 0 for voxels whose output value is less than or equal to the threshold 0.8.
  • A voxel for which a vector concentration exceeding the threshold 0.8 is calculated is regarded as a voxel constituting a cerebral aneurysm region.
  • The weight image is thus created by binarizing the voxel values with the threshold as the boundary.
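The weight-image generation above is a simple binarization; a minimal sketch, where the threshold 0.8 comes from the text and the function name is introduced here:

```python
import numpy as np

def weight_image(nn_output, threshold=0.8):
    """Binarize the neural-network output: voxels whose value exceeds
    the threshold get weight 1, all others weight 0."""
    return (nn_output > threshold).astype(np.uint8)
```

Multiplying each partial image by this 0/1 weight image in the reconstruction bank then passes through only the voxels judged likely to belong to a cerebral aneurysm.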
  • The weight image V is input to the reconstruction bank.
  • FIG. 22 is a diagram showing the reconstruction bank.
  • The reconstruction bank weights each partial image S, Wz, Wy, and Wx, and reconstructs the original image S through the filter bank S(z^j).
  • First, each partial image S, Wz, Wy, and Wx is multiplied by the weight image V.
  • The value of 1 or 0 set for each voxel of the weight image V is used as the weighting coefficient in this weighting process.
  • Each partial image multiplied by the weight image V is input to the filter bank S(z^j).
  • The filter bank S(z^j) applies filtering with the filters L(z^j), K(z^j), and H(z^j)L(z^j) as shown in FIG. 23, and reconstructs S from S, Wz, Wy, and Wx.
  • For S, filtering by the filter L(z^j) is applied in each of the x, y, and z directions.
  • For Wx, the filter K(z^j) is applied in the x direction and the filter H(z^j)L(z^j) in the y and z directions; for Wy, the filter K(z^j) is applied in the y direction and H(z^j)L(z^j) in the x and z directions; and for Wz, the filter K(z^j) is applied in the z direction and H(z^j)L(z^j) in the x and y directions.
  • The reconstructed S is further filtered by the filter bank S(z^(j-1)).
  • The original image S is reconstructed by repeating such filtering.
  • The filters H(z^j) and G(z^j) of the filter bank A(z^j) and the filters of the filter bank S(z^j) are designed so that the original image S can be reconstructed.
  • Because of the weighting by the weight image V, the reconstructed output image contains only the image regions that are likely to be cerebral aneurysms.
  • Next, feature values are calculated using the output image S (step S105).
  • Here, the size of each candidate region, its sphericity, and the average value of the vector concentration over the voxels in the region are calculated as the feature values.
  • Secondary detection is then performed using the calculated feature values (step S106), tertiary detection is further performed (step S107), and the final detection result is output (step S108). Since the processing of steps S105 to S108 is the same as steps S5 to S8 described with reference to FIG. 2, detailed description is omitted.
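The per-candidate feature calculation can be sketched as follows; size (voxel count) and the mean vector concentration follow the text, while the sphericity shown here (region volume divided by the volume of the smallest enclosing sphere about the centroid) is only one possible definition, introduced as an assumption because the text does not give the formula:

```python
import numpy as np

def candidate_features(mask, concentration):
    """Feature values for one candidate region.
    mask: boolean 3D array marking the region's voxels.
    concentration: 3D array of vector concentration values."""
    pts = np.argwhere(mask)
    size = len(pts)                       # size = number of voxels
    centroid = pts.mean(axis=0)
    # radius of a sphere enclosing the region (half-voxel margin)
    r = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).max() + 0.5
    sphericity = size / ((4.0 / 3.0) * np.pi * r ** 3)
    mean_conc = float(concentration[mask].mean())
    return size, sphericity, mean_conc
```

These three values per candidate are what the rule-based secondary detection and the discriminant analysis of the tertiary detection would consume.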
  • The blood vessel part discrimination process is also performed in the second embodiment, but since its contents are the same as in the first embodiment, description of the contents and effects is omitted.
  • As described above, according to the second embodiment, the vector concentration is calculated at each resolution level from the partial images obtained by multi-resolution analysis of the head image by the GC filter bank, a weight image is generated, each partial image is multiplied by it for weighting, and the original image is reconstructed from the weighted partial images. Therefore, only the regions with a high vector concentration, that is, the regions highly likely to be aneurysms, are reconstructed, and a reconstructed image displaying only the aneurysm candidate regions can be obtained.
  • the present invention can be used in the field of image processing, and can be applied to a medical image processing apparatus that performs image analysis and image processing of a head image obtained by a medical imaging apparatus.

Abstract

According to the present invention, a lesion is accurately detected from a head image, and a specific blood vessel part can be observed in detail. A medical image processor (10) extracts a cerebral blood vessel region from an MRA image created by imaging a head, and a GC filter bank performs primary detection using the image from which the cerebral blood vessel region has been extracted. An analysis bank of the GC filter bank performs multi-resolution analysis to calculate the vector concentration from a partial image at each resolution level. A weight image is generated in which voxels whose vector concentration exceeds a threshold are given a weight of 1 and those whose vector concentration is at or below the threshold are given a weight of 0. A reconstruction bank performs weighting by multiplying each partial image by the weight image and reconstructs the original image from the weighted partial images. The averages of the size (volume), sphericity, and vector concentration of the blood vessel region reproduced in the reconstructed image are calculated, and secondary detection is performed by the rule-based method. Tertiary detection of the secondary detection results is performed by discriminant analysis. The detection result is output and displayed on a display unit (13).
PCT/JP2006/316595 2005-08-31 2006-08-24 Processeur d’images médicales et procédé de traitement d’images WO2007026598A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007533204A JP4139869B2 (ja) 2005-08-31 2006-08-24 医用画像処理装置

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2005250915 2005-08-31
JP2005-250915 2005-08-31
JP2006083950 2006-03-24
JP2006-083950 2006-03-24

Publications (1)

Publication Number Publication Date
WO2007026598A1 true WO2007026598A1 (fr) 2007-03-08

Family

ID=37808692

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/316595 WO2007026598A1 (fr) 2005-08-31 2006-08-24 Processeur d’images médicales et procédé de traitement d’images

Country Status (2)

Country Link
JP (1) JP4139869B2 (fr)
WO (1) WO2007026598A1 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009039446A (ja) * 2007-08-10 2009-02-26 Fujifilm Corp Image processing apparatus, image processing method, and image processing program
JP2009106443A (ja) * 2007-10-29 2009-05-21 Toshiba Corp Medical imaging apparatus, medical image processing apparatus, and medical image processing program
CN102984990A (zh) * 2011-02-01 2013-03-20 Olympus Medical Systems Corp. Diagnosis assistance apparatus
WO2013058114A1 (fr) * 2011-10-17 2013-04-25 Toshiba Corp Medical image processing system
JP2014000483A (ja) * 2007-07-24 2014-01-09 Toshiba Corp X-ray computed tomography apparatus and image processing apparatus
JP2014124269A (ja) * 2012-12-25 2014-07-07 Toshiba Corp Ultrasonic diagnostic apparatus
JP2014237005A (ja) * 2007-04-20 2014-12-18 Medicim NV Method for deriving shape information
JP2015084936A (ja) * 2013-10-30 2015-05-07 GE Medical Systems Global Technology Co., LLC Magnetic resonance apparatus and program
KR101821353B1 (ko) * 2016-08-30 2018-01-23 Samsung Electronics Co., Ltd. Magnetic resonance imaging apparatus
CN108496205A (zh) * 2015-12-30 2018-09-04 Koninklijke Philips N.V. Three-dimensional model of a body part
JP2018201569A (ja) * 2017-05-30 2018-12-27 Kyushu University Map information generation method, determination method, and program
US10467750B2 (en) 2017-07-21 2019-11-05 Panasonic Intellectual Property Management Co., Ltd. Display control apparatus, display control method, and recording medium
JP2020014712A (ja) * 2018-07-26 2020-01-30 Hitachi, Ltd. Medical image processing apparatus and medical image processing method
JP2020171480A (ja) * 2019-04-10 2020-10-22 Canon Medical Systems Corp. Medical image processing apparatus and medical image processing system
WO2021075026A1 (fr) * 2019-10-17 2021-04-22 Nikon Corp. Image processing method, image processing device, and image processing program
CN112842264A (zh) * 2020-12-31 2021-05-28 Harbin Institute of Technology (Weihai) Digital filtering method and apparatus for multimodal imaging, and multimodal imaging system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101373563B1 (ko) * 2012-07-25 2014-03-12 Chonbuk National University Industry-Academic Cooperation Foundation Method for deriving blood flow characteristics and MR signal intensity gradient (shear rate) using TOF-MRA

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6214590A (ja) * 1985-07-12 1987-01-23 Toshiba Corp Image diagnostic apparatus
JPH1094538A (ja) * 1996-09-25 1998-04-14 Fuji Photo Film Co Ltd Method and apparatus for detecting abnormal shadow candidates
JP2002109510A (ja) * 2000-09-27 2002-04-12 Fuji Photo Film Co Ltd Abnormal shadow candidate detection processing system
JP2002515772A (ja) * 1995-11-10 2002-05-28 Beth Israel Deaconess Medical Center Imaging apparatus and method for compensating for subject motion
JP2002203248A (ja) * 2000-11-06 2002-07-19 Fuji Photo Film Co Ltd Measurement processing apparatus for geometrically measuring images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6214590A (ja) * 1985-07-12 1987-01-23 Toshiba Corp Image diagnostic apparatus
JP2002515772A (ja) * 1995-11-10 2002-05-28 Beth Israel Deaconess Medical Center Imaging apparatus and method for compensating for subject motion
JPH1094538A (ja) * 1996-09-25 1998-04-14 Fuji Photo Film Co Ltd Method and apparatus for detecting abnormal shadow candidates
JP2002109510A (ja) * 2000-09-27 2002-04-12 Fuji Photo Film Co Ltd Abnormal shadow candidate detection processing system
JP2002203248A (ja) * 2000-11-06 2002-07-19 Fuji Photo Film Co Ltd Measurement processing apparatus for geometrically measuring images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ARIMURA H. ET AL.: "Tobu MRA ni Okeru Nodomyakuryu Kenshutsu no CAD System" [CAD system for detecting cerebral aneurysms in head MRA], INNERVISION, vol. 19, no. 10, 25 September 2004 (2004-09-25), pages 22 - 25, XP003009849 *
MASUMOTO T.: "MR Angiography o Riyo shita No Domyakuryu no Computer Shien Gazo Shindan (CAD) no Kenkyu" [Research on computer-aided diagnosis (CAD) of cerebral aneurysms using MR angiography], INNERVISION, vol. 20, no. 8, 25 June 2005 (2005-06-25), pages 36, XP003009851 *
NAKAYAMA R. ET AL.: "Iyo Gazo ni Okeru Enkei.Senjo Pattern Kenshutsu no Tame no Filter Bank no Kochiku" [Construction of a filter bank for detecting circular and linear patterns in medical images], THE TRANSACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. J-87-D-II, no. 1, 1 January 2004 (2004-01-01), pages 176 - 185, XP003009850 *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9439608B2 (en) 2007-04-20 2016-09-13 Medicim Nv Method for deriving shape information
JP2014237005A (ja) * 2007-04-20 2014-12-18 Medicim NV Method for deriving shape information
JP2014000483A (ja) * 2007-07-24 2014-01-09 Toshiba Corp X-ray computed tomography apparatus and image processing apparatus
JP2009039446A (ja) * 2007-08-10 2009-02-26 Fujifilm Corp Image processing apparatus, image processing method, and image processing program
JP2009106443A (ja) * 2007-10-29 2009-05-21 Toshiba Corp Medical imaging apparatus, medical image processing apparatus, and medical image processing program
CN102984990A (zh) * 2011-02-01 2013-03-20 Olympus Medical Systems Corp. Diagnosis assistance apparatus
JP2013085652A (ja) * 2011-10-17 2013-05-13 Toshiba Corp Medical image processing system
US9192347B2 (en) 2011-10-17 2015-11-24 Kabushiki Kaisha Toshiba Medical image processing system applying different filtering to collateral circulation and ischemic blood vessels
CN103327899A (zh) * 2011-10-17 2013-09-25 Toshiba Corp Medical image processing system
CN103327899B (zh) * 2011-10-17 2016-04-06 Toshiba Corp Medical image processing system
WO2013058114A1 (fr) * 2011-10-17 2013-04-25 Toshiba Corp Medical image processing system
JP2014124269A (ja) * 2012-12-25 2014-07-07 Toshiba Corp Ultrasonic diagnostic apparatus
JP2015084936A (ja) * 2013-10-30 2015-05-07 GE Medical Systems Global Technology Co., LLC Magnetic resonance apparatus and program
JP2019500146A (ja) * 2015-12-30 2019-01-10 Koninklijke Philips N.V. Three-dimensional model of a body part
CN108496205B (zh) * 2015-12-30 2023-08-15 皇家飞利浦有限公司 身体部分的三维模型
CN108496205A (zh) * 2015-12-30 2018-09-04 皇家飞利浦有限公司 身体部分的三维模型
US11200750B2 (en) 2015-12-30 2021-12-14 Koninklijke Philips N.V. Three dimensional model of a body part
WO2018043878A1 (fr) * 2016-08-30 2018-03-08 Samsung Electronics Co., Ltd. Magnetic resonance imaging device
KR101821353B1 (ko) * 2016-08-30 2018-01-23 Samsung Electronics Co., Ltd. Magnetic resonance imaging apparatus
JP2018201569A (ja) * 2017-05-30 2018-12-27 Kyushu University Map information generation method, determination method, and program
US10467750B2 (en) 2017-07-21 2019-11-05 Panasonic Intellectual Property Management Co., Ltd. Display control apparatus, display control method, and recording medium
JP2020014712A (ja) * 2018-07-26 2020-01-30 Hitachi, Ltd. Medical image processing apparatus and medical image processing method
JP2020171480A (ja) * 2019-04-10 2020-10-22 Canon Medical Systems Corp. Medical image processing apparatus and medical image processing system
CN111820898A (zh) * 2019-04-10 2020-10-27 Canon Medical Systems Corp. Medical image processing apparatus and medical image processing system
JP7271277B2 (ja) 2019-04-10 2023-05-11 Canon Medical Systems Corp. Medical image processing apparatus and medical image processing system
CN111820898B (zh) * 2019-04-10 2024-04-05 佳能医疗系统株式会社 医用图像处理装置以及医用图像处理系统
WO2021075026A1 (fr) * 2019-10-17 2021-04-22 Nikon Corp. Image processing method, image processing device, and image processing program
CN112842264A (zh) * 2020-12-31 2021-05-28 Harbin Institute of Technology (Weihai) Digital filtering method and apparatus for multimodal imaging, and multimodal imaging system

Also Published As

Publication number Publication date
JP4139869B2 (ja) 2008-08-27
JPWO2007026598A1 (ja) 2009-03-26

Similar Documents

Publication Publication Date Title
JP4139869B2 (ja) Medical image processing apparatus
JP4823204B2 (ja) Medical image processing apparatus
US10593035B2 (en) Image-based automated measurement model to predict pelvic organ prolapse
EP3367331A1 Deep convolutional encoder-decoder for prostate cancer detection and classification
CN110036408B (zh) 活动性出血和血液外渗的自动ct检测和可视化
US7058210B2 (en) Method and system for lung disease detection
US7283652B2 (en) Method and system for measuring disease relevant tissue changes
US7620225B2 (en) Method for simple geometric visualization of tubular anatomical structures
EP1728213B1 Method and apparatus for identifying a disease in a brain image
EP2116973B1 Method for interactively determining a bounding surface for segmenting a lesion in a medical image
EP2846310A2 Method and apparatus for registering medical images
US20030208116A1 (en) Computer aided treatment planning and visualization with image registration and fusion
US7684602B2 (en) Method and system for local visualization for tubular structures
US20080171932A1 (en) Method and System for Lymph Node Detection Using Multiple MR Sequences
JP2008515466A (ja) Method and system for identifying an image representation of a class of objects
JP2008073338A (ja) Medical image processing apparatus, medical image processing method, and program
JP2009226043A (ja) Medical image processing apparatus and abnormal shadow detection method
US7747051B2 (en) Distance transform based vessel detection for nodule segmentation and analysis
JP2006520233A (ja) Three-dimensional imaging apparatus and method for signaling an object of interest in volume data
US7961923B2 (en) Method for detection and visional enhancement of blood vessels and pulmonary emboli
CN1836258B (zh) Method and system for detecting lung nodules and colon polyps using the structure tensor
CN111862014A (zh) Automatic ALVI measurement method and apparatus based on segmentation of the left and right lateral ventricles
Marconi et al. Image Processing and Display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007533204

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06782999

Country of ref document: EP

Kind code of ref document: A1