WO2015099426A1 - Method for segmenting a cerebral infarction region


Info

Publication number
WO2015099426A1
WO2015099426A1 PCT/KR2014/012760 KR2014012760W WO2015099426A1 WO 2015099426 A1 WO2015099426 A1 WO 2015099426A1 KR 2014012760 W KR2014012760 W KR 2014012760W WO 2015099426 A1 WO2015099426 A1 WO 2015099426A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
region
brain
dwi
infarct
Prior art date
Application number
PCT/KR2014/012760
Other languages
English (en)
Korean (ko)
Inventor
김남국
강동화
장용준
이상민
정계삼
Original Assignee
재단법인 아산사회복지재단
Priority date
Filing date
Publication date
Priority claimed from KR1020130161295A external-priority patent/KR20150073519A/ko
Priority claimed from KR1020140034846A external-priority patent/KR101578483B1/ko
Priority claimed from KR1020140034851A external-priority patent/KR101634334B1/ko
Application filed by 재단법인 아산사회복지재단 filed Critical 재단법인 아산사회복지재단
Publication of WO2015099426A1 publication Critical patent/WO2015099426A1/fr


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/143 Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Definitions

  • The present disclosure relates to a method of segmenting a cerebral infarction region, and more particularly, to a method of segmenting a cerebral infarction region in a brain image based on a threshold value.
  • The present disclosure also relates to a method of segmenting the cerebral infarction region so as to distinguish, for each point of the cerebral infarction, whether that point is recoverable.
  • The present disclosure further relates to a method for automatically analyzing a stroke, and more particularly, to a method for automatically analyzing a stroke that analyzes an infarct region and its peripheral region using a template image together with the brain images.
  • The present disclosure further relates to a method of extracting a representative image from a medical image, and more particularly, to a method of extracting a representative image that represents the characteristics of the medical image, such as a lesion.
  • an infarct region is assessed by MRI.
  • PWI perfusion-weighted imaging
  • DWI diffusion-weighted imaging
  • The regions of cerebral infarction seen in PWI and in DWI are related to the time since stroke onset and often do not coincide within a certain time window. Quantitative assessment of the PWI-DWI mismatch in acute ischemic stroke is therefore important, because the mismatch region is the ischemic penumbra: tissue affected by the ischemia that may still be salvaged by treatment. Accordingly, it is important to accurately segment the core of the infarct region in the DWI, and, for example, an adaptive threshold algorithm may be used for the segmentation.
  • US Patent No. 8,125,223 discloses a method of dividing an infarction region on a DWI basis. However, the method disclosed in this document is directed to a special method of segmenting infarct areas in individual DWI images and does not disclose how to set threshold values to automatically segment infarct areas.
  • International guidelines recommend that treatments such as thrombolysis for acute cerebral infarction be performed within 4.5 hours of stroke onset.
  • As a method for estimating the time of onset, the paper Cho AH, Sohn SI, Han MK, et al., "Safety and efficacy of MRI-based thrombolysis in unclear-onset stroke: a preliminary report," Cerebrovasc Dis 2008;25:572-579, discloses a method for estimating the onset time of the infarct region using FLAIR (fluid-attenuated inversion recovery) images of patients with an uncertain time of cerebral infarction onset.
  • The MRI-based conditions used in this paper as the basis for the timing of onset include a positive perfusion-diffusion mismatch (PWI-DWI mismatch) and the absence of well-developed fluid-attenuated inversion recovery changes of the acute diffusion lesions (FLAIR change).
  • It would therefore be preferable to determine more precisely the region that can be recovered by thrombolysis.
  • In general, a representative image (e.g., a thumbnail) is generated by storing and displaying reduced versions of the full images, as in a digital camera, so that a user can easily view, sort, and select images without loading all of them.
  • In thumbnails of reduced medical images, however, there is far too little information to readily recognize the characteristics of the medical images. It is therefore necessary to generate a thumbnail by extracting a high-quality image that most characteristically reveals the content of the medical image. In addition, since large volumes of medical images are handled, a convenient process for generating representative images such as thumbnails is needed.
  • US Patent Publication No. 2011/0286647 discloses a method for browsing using an image cube that stores image piles (thumbnails, etc.) of medical images along three axes representing body part, modality, and image date. However, this document does not disclose a method of automatically or semi-automatically extracting features of a medical image and using them to search and manage the medical image.
  • According to one aspect of the present disclosure, a method of segmenting a cerebral infarction region comprises: generating a template histogram of a brain image using a plurality of brain images; matching the histogram of the brain image to be segmented with the template histogram; and segmenting the cerebral infarction region based on a threshold value selected, based on the template histogram, in the matched histogram.
  • According to another aspect, a method comprises: extracting a quantitative value set comprising brightness information of each point in the infarct region included in at least one brain image; and estimating the recoverability from infarction of each of the points based on differences in the quantitative value sets between the points in the infarct region.
  • According to another aspect, a method for automatically analyzing a stroke is provided, comprising: automatically segmenting the infarct region using the DWI; automatically segmenting the penumbra using the PWI; registering the DWI image in which the infarct region has been segmented and the PWI image in which the peripheral region has been segmented with a template brain image; and analyzing the DWI-PWI mismatch in the registered images.
  • According to another aspect, a method of extracting a representative image from a medical image for search and management is provided, comprising: selecting a location of interest in the medical image; segmenting a region of interest including the location of interest by processing the medical image according to an image processing instruction for the type of lesion; and generating a representative image by assigning a safety margin to the segmented region of interest.
  • FIG. 1 is a view illustrating an example of a method of segmenting an infarct region according to the present disclosure
  • FIG. 2 is a diagram illustrating an example of a DWI brain image
  • FIG. 3 is a view for explaining an example of a brain image segmented in the DWI brain image
  • FIG. 4 is a view illustrating an example of the mean and standard deviation of a histogram in DWI brain images of a plurality of patients before histogram matching;
  • FIG. 5 is a diagram for explaining an example of a histogram matching method
  • FIG. 6 is a diagram for explaining an example of a mathematical technique of histogram matching.
  • FIG. 7 is a diagram illustrating an example of the mean and standard deviation of a histogram in DWI brain images of a plurality of patients after histogram matching.
  • FIG. 8 is a view for explaining an example of a method of matching a segmentation histogram to a template histogram and a method of correcting an infarct area;
  • FIG. 9 is a view for explaining an example of division of the ventricle,
  • FIG. 10 is a diagram for explaining an example of a histogram for ROI Vs NonROI in an ADC image.
  • FIG. 11 illustrates an example of a mean and standard deviation of a histogram for ROI Vs NonROI in ADC images of a plurality of patients.
  • FIG. 12 is a view for explaining an example of an FROC analysis graph using the threshold value, the size of the infarct region, and the ADC threshold value as parameters.
  • FIG. 13 is a view for explaining an example of a method for dividing an infarction region according to the present disclosure
  • FIG. 15 is a diagram illustrating an example of a brain image segmented in a DWI brain image
  • FIG. 16 is a diagram illustrating an example of a method in which a quantitative value set of each of the voxels of an infarct region is extracted
  • FIG. 17 is a diagram illustrating an example of a process in which a brain image is filtered by a Gaussian low pass filter
  • FIG. 18 is a diagram illustrating an example of a change in brightness of an infarct region in a DWI image according to an occurrence time of infarction
  • FIG. 19 is a view for explaining an example of a method of understanding a quantitative value set as a point in a multi-space relationship corresponding to an infarct occurrence time point;
  • FIG. 20 is a view for explaining an example of a classifier generation method
  • FIG. 21 is a view for explaining an example of the correspondence relationship generated by the regression analysis
  • FIG. 22 illustrates a reduced infarct region excluding infarct recoverable voxels
  • FIG. 24 is a view for explaining another example of the automatic analysis method for stroke according to the present disclosure.
  • FIG. 25 is a diagram for explaining an example of an atlas template.
  • FIG. 27 is a diagram for explaining a feature of a method of dividing an infarct area from a DWI;
  • FIGS. 28 to 30 are diagrams for explaining characteristics of a method of dividing a peripheral region from a PWI.
  • FIGS. 31 to 34 are diagrams showing the analysis results of FIGS. 28 to 30.
  • FIG. 35 is a view for explaining an example of automatically showing an analysis result incorporating DWI-PWI-FLAIR and a template
  • FIG. 36 is a diagram illustrating the aforementioned methods as an example of a user interface
  • FIG. 37 is a view for explaining an example of a method of extracting a representative image from a medical image according to the present disclosure
  • FIG. 38 is a view illustrating an example of a process of selecting a lesion through a pop-up window
  • FIG. 39 is a view for explaining an example of a method of selecting a view in a 3D medical image
  • FIG. 40 is a view showing an example of an image of a lung
  • FIG. 41 is a view for explaining an example of a process of generating a thumbnail
  • FIG. 42 is a view for explaining an example of a method of capturing an image when generating a thumbnail
  • FIG. 43 is a view for explaining an example of an image that changes with time
  • FIG. 44 is a view for explaining an example of a manner in which thumbnails are stored and retrieved.
  • FIG. 1 is a view illustrating an example of a method of dividing an infarct region according to the present disclosure.
  • a template histogram of the brain image is generated using a plurality of brain images (S30). For example, a template histogram is obtained at DWI B0 and B1000.
  • Before this, a BET brain extraction process (S10) and a process of extracting the brain region from the B1000 image may be performed on the brain images acquired by MRI, prior to generating the template histogram (S30).
  • In this example, the method of segmenting the cerebral infarction is described based on the DWI of the MRI, but applying the method to brain images of other modalities may also be considered.
  • a threshold value Th may be selected based on the template histogram (S40).
  • the threshold value may be selected using histograms of a plurality of brain images.
  • The histogram of the brain image to be segmented is then matched to the template histogram (S60).
  • the cerebral infarction region is divided based on the threshold value in the matched histogram (S70).
  • the infarct region can be automatically divided by the selected threshold value.
  • the non-infarct region may be partially included in the region divided into the infarct region.
  • the process of correcting such an error may be performed.
  • To do so, the ventricles may be segmented in the DWI B1000 (S51), the mean and deviation of the ventricles and the non-infarct region may be modeled (S53), and the threshold value may then be corrected.
  • In addition, an error correction process may be performed using the ADC, whose contrast is inverted relative to the DWI (S80).
  • the cerebral infarction can be automatically found and divided.
  • FIG. 2 is a diagram illustrating an example of a DWI brain image
  • FIG. 3 is a diagram illustrating an example of a brain image segmented in a DWI brain image.
  • Since the DWI as acquired includes regions where a stroke cannot occur, such as areas outside the brain (e.g., noise, CSF), these regions are removed as a pre-process before generating the histogram of the brain image.
  • Such areas can be extracted with open software such as BET.
  • FIG. 3(b) shows an inverted image of DWI B0. Thresholding with the B1000 image, of B0 and B1000, can help locate the brain better.
  • DWI is used to find the core of the infarction.
  • FIG. 4 is a diagram illustrating an example of the mean and standard deviation of histograms in DWI brain images of a plurality of patients before histogram matching.
  • The horizontal axis is the intensity (I) of pixels in the brain image, and the vertical axis is the number of pixels (N) having a given intensity in the brain image. That is, the histogram shows the distribution of intensity values in a DWI brain image.
  • The method includes a process of generating template histograms based on a plurality of initial brain images segmented into the ventricle, the infarct region, and the non-infarct region.
  • the infarct region in the plurality of initial brain images is a region determined by the doctor as an infarct region (for example, 10 of FIG. 2), and may be manually determined by the knowledge and experience of the doctor as described above.
  • the average and standard deviation of the intensity values of pixels constituting the infarct region, the non-infarct region, and the ventricles evaluated by the doctor can be obtained, respectively.
  • the horizontal axis represents the number of patients
  • the vertical axis represents the intensity value.
  • each graph represents the average values of the infarct region, the non-infarct region, and the ventricle in 19 brain images, and standard deviations are displayed in a range up and down. In this manner, histograms of a plurality of brain images can be displayed at the same time.
  • FIG. 5 is a diagram illustrating an example of a histogram matching method.
  • Directly determining the threshold value from the plurality of histograms themselves is difficult because the accuracy is low and the patient-to-patient variation is large.
  • a histogram matching process is performed and a template histogram can be made using a plurality of matched histograms.
  • all remaining histograms may be matched based on the first histogram.
  • The histogram matching process transforms the intensity range on the horizontal axis of each of the remaining histograms to be the same as that of histogram number 1, and at the same time converts the probability distributions of the remaining histograms so that the shapes of the histograms become alike.
  • FIG. 6 is a diagram for explaining an example of a mathematical technique of histogram matching.
  • FIG. 6 (a) shows an input image and a histogram
  • FIG. 6 (b) shows a relationship between an input and an output
  • FIG. 6 (c) shows an output image and a histogram.
  • The transform T(r) may be obtained from the PDF of the intensities of the input image (its cumulative distribution), and G(z) may likewise be obtained from the target histogram; setting s = T(r) = G(z) and inverting G yields the matched intensity z, as sketched below.
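  • As an illustration of the transformation depicted in FIG. 6, the following is a minimal NumPy sketch of histogram matching by mapping through cumulative distributions; the function name, array arguments, and bin count are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def match_histogram(source, template, bins=256):
    """Map the intensities of `source` so that its histogram
    approximates that of `template` (s = T(r), z = G^-1(s))."""
    src = source.ravel()
    tmpl = template.ravel()

    # T(r): cumulative distribution of the input image
    src_hist, src_edges = np.histogram(src, bins=bins)
    src_cdf = np.cumsum(src_hist).astype(np.float64)
    src_cdf /= src_cdf[-1]

    # G(z): cumulative distribution of the template (reference) image
    tmpl_hist, tmpl_edges = np.histogram(tmpl, bins=bins)
    tmpl_cdf = np.cumsum(tmpl_hist).astype(np.float64)
    tmpl_cdf /= tmpl_cdf[-1]

    # For each source bin, find the template intensity with the same CDF value
    src_bin = np.clip(np.digitize(src, src_edges[:-1]) - 1, 0, bins - 1)
    s_values = src_cdf[src_bin]                              # s = T(r)
    tmpl_centers = 0.5 * (tmpl_edges[:-1] + tmpl_edges[1:])
    matched = np.interp(s_values, tmpl_cdf, tmpl_centers)    # z = G^-1(s)
    return matched.reshape(source.shape)
```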
  • FIG. 7 is a diagram illustrating an example of the mean and standard deviation of histograms in DWI brain images of a plurality of patients after histogram matching.
  • the histogram matching process is performed on the histogram to be divided as described with reference to FIG. 7. If a template histogram is used as the reference for performing histogram matching, more reliable matching is possible, and thus, the reliability of the threshold value can be improved.
  • Template histograms can be created based on a plurality of matched histograms.
  • The template histogram can change as the number of underlying matched histograms increases.
  • Matching the histogram to be segmented to the template histogram is a process of transforming the intensity range of its horizontal axis to be the same as that of the template histogram, while converting the probability distribution so that the shape of the histogram is preserved.
  • the template histogram may be made by averaging a plurality of matched histograms as described above.
  • the selection bias can be significantly reduced as compared with the case where any histogram of the plurality of histograms is used as the matching criterion.
  • This template histogram serves as a criterion for histogram matching for the histogram on which the division of the infarct region is to be performed.
  • A process of thresholding the histograms again based on the generated template histogram and removing false positives using the ADC image may be added (see FIGS. 10 and 11). This is further described below.
  • a filtering process through an infarct size or the like may be added (see FIG. 12). This will be further described later.
  • infarct areas can be detected and extracted automatically by software.
  • FIG. 8 is a diagram illustrating an example of a method of matching a segmentation histogram with a template histogram and a method of correcting an infarct area.
  • FIG. 8A illustrates an example of the template histogram illustrated in FIG. 7, and FIG. 8B illustrates an example of a segmentation histogram.
  • FIG. 8 (c) shows a matched histogram as a result of matching the histogram of FIG. 8 (b) with the template histogram of FIG. 8 (a).
  • FIG. 8(d) shows the infarct region 20 detected in the DWI brain image. By thresholding the matched histogram, the pixels corresponding to the infarct region may be found, and the found pixels may be segmented as the infarct region 20 in the DWI image, as shown in FIG. 8(d).
  • the infarct region 20 may be displayed in a different color from the periphery but is shown in darker gray in FIG. 8 (d).
  • Intensities of the pixels constituting the infarct region vary, and thus pixels in the non-infarct region may be segmented into the infarct region even when thresholding is performed with the predetermined threshold value as described above.
  • the process of correcting this error can be added.
  • the correction may be a method of correcting the threshold value itself in the process of selecting the threshold value and a method of removing pixels of the non-infarct region from the thresholded infarct region.
  • FIG. 9 is a view for explaining an example of division of the ventricles.
  • As described with reference to FIG. 4, the mean intensity and standard deviation of the pixels constituting the ventricle can be obtained together with those of the other regions.
  • The ventricular means differ from histogram to histogram. In particular, it can be seen that the mean of the non-infarct region changes almost in step with the change in the ventricular mean.
  • Therefore, when segmenting the infarct by applying a threshold value to the histogram of a particular patient, if the ventricular mean is higher or lower than that of the other histograms or the template histogram, the threshold value may be increased or decreased accordingly to obtain a new threshold value, as sketched below.
  • the changed threshold value can be used to reduce an error in which pixels in the non-infarct region are divided into the infarct region.
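  • A minimal sketch of this ventricle-based correction, under the assumption of a simple linear offset between the patient's ventricular mean and the template's ventricular mean; the function name and scale factor are illustrative, not part of the disclosure.

```python
def correct_threshold(base_threshold, patient_ventricle_mean,
                      template_ventricle_mean, scale=1.0):
    """Shift the DWI threshold by the difference between the patient's
    ventricular mean and the template's ventricular mean."""
    offset = scale * (patient_ventricle_mean - template_ventricle_mean)
    return base_threshold + offset

# Example: a patient whose ventricles are brighter than the template
# gets a correspondingly higher infarct threshold.
new_threshold = correct_threshold(200.0,
                                  patient_ventricle_mean=120.0,
                                  template_ventricle_mean=100.0)
```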
  • FIG. 10 is a diagram illustrating an example of a histogram for ROI Vs NonROI in an ADC image
  • FIG. 11 is a diagram illustrating an example of the mean and standard deviation of the histogram with respect to ROI Vs NonROI in an ADC image of a plurality of patients. .
  • FIG. 10 is a histogram of an ADC image, in which FIG. 10 (a) is an example of a histogram of a non-infarct region, and FIG. 10 (b) is an example of a histogram of an infarct region.
  • Histograms of a plurality of patients may also be generated, as shown in FIG. 11.
  • In FIG. 11, the graph marked with squares at the top represents the ventricular region, the graph at the bottom represents the infarct region, and the graph marked with circular dots in the middle represents the histogram of the ADC of the non-infarct region.
  • the ADC is an image generated by calculation based on the DWI.
  • The diffusion coefficient is a function of temperature; however, because the body contains cell membranes and is not at a uniform temperature, the apparent diffusion coefficient (ADC) is instead calculated from the DWI. DWI and ADC have inverted contrast. In infarct zones, swelling of the cells restricts the diffusion of water outside the cells. A region with restricted diffusion loses little signal when the DWI is acquired at B1000, so the DWI image appears bright there. Conversely, regions with restricted diffusion appear darker than normal in the ADC. Freely diffusing water, such as cerebrospinal fluid (CSF), is bright on the ADC and dark on the DWI.
  • CSF cerebrospinal fluid
  • The infarct region may be refined by determining an ADC threshold value with reference to the graph of FIG. 11. That is, only pixels that are both segmented as the infarct region by thresholding the matched histogram described with reference to FIGS. 7 and 8 and segmented as the infarct region by thresholding the ADC image are finally labeled as the infarct region, as sketched below. In this way, errors from the thresholding can be compensated for.
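  • A minimal sketch of this correction as a logical AND of the two masks; the threshold values and array names are illustrative assumptions.

```python
import numpy as np

def segment_infarct(matched_dwi, adc, dwi_threshold=200.0, adc_threshold=600.0):
    """Infarct mask: bright on the histogram-matched DWI AND dark on the ADC."""
    dwi_mask = matched_dwi > dwi_threshold   # candidate infarct on DWI
    adc_mask = adc < adc_threshold           # restricted diffusion on ADC
    return np.logical_and(dwi_mask, adc_mask)
```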
  • FIG. 12 is a diagram for explaining an example of a FROC analysis graph using the threshold value, the size of the infarct region, and the ADC threshold value as parameters.
  • FIG. 12 is a graph in which the threshold value of the template histogram is fixed at 200, the minimum size of the infarct region and the ADC threshold value are varied as parameters, and an FROC (Free-Response Operating Characteristic) analysis is shown for various combinations of the parameters. The horizontal axis is the number of FPs (false positives), and the vertical axis represents sensitivity (the probability of assigning pixels of the infarct region to the infarct region). In FIG. 12, taking any one curve as a reference, increasing the ADC threshold value increases the FPs.
  • the user may set the conditions of the segmentation method of the infarct region to predict the reliability of the method.
  • the sensitivity is about 89.47% and the number of FPs is about 1.47.
  • At least one example has been described in which the infarct region is segmented by matching histograms generated from the brain images acquired at DWI B1000 and DWI B0, and by additionally using the ADC image or the like.
  • this stepwise method is valid, but it is of course possible to simultaneously consider the ADC and the DWI B1000 and DWI B0 in any order.
  • FIG. 13 is a view for explaining an example of a method for dividing the cerebral infarction according to the present disclosure.
  • a quantitative value set including brightness information of each point in the infarct region included in the plurality of brain images is extracted (S41).
  • the possibility of recovery from infarction of each point is estimated based on the difference in the quantitative value set between the points in the infarct area (S51).
  • To this end, a plurality of brain images may be acquired (S11), an infarct region may be segmented in at least one of the plurality of brain images (S21), and the plurality of brain images may be registered (S31).
  • The method of distinguishing infarct regions according to the present disclosure also covers the case of classifying each point of the infarct region by its possibility of recovery from infarction using a single brain image (e.g., DWI).
  • DWI single brain image
  • The intensity of each point is extracted from the infarct region included in each of the plurality of brain images, and the distance from a reference point of the infarct region of at least one of the brain images to each point can also be extracted.
  • The point may be a voxel, or a pixel of a 2D image extracted from a 3D brain image.
  • In this example, points are described as voxels; the description, however, also covers the case of two-dimensional pixels.
  • image smoothing may be performed using a low pass filter to extract the intensity of the voxel.
  • a distance map of the infarct region may be generated, and the distance map may be used to extract the distance from the center of the infarct region 3 (see FIG. 16) to each point.
  • Each of the points can be mapped to an infarct onset time point using a classifier that classifies the onset time of each point based on the differences in the quantitative value sets between the points in the infarct region.
  • The points can also be classified into infarct-recoverable points and non-recoverable points using a classifier that has learned from points that recovered from infarction.
  • The classifier may classify the quantitative value sets of points of infarct regions accumulated from brain images of different subjects, with known or unknown infarct onset times, using at least one method selected from the group consisting of multiple regression, Support Vector Regression, and curve fitting.
  • The method of segmenting the cerebral infarction does not determine treatment or diagnosis, such as whether thrombolysis should be performed; it merely reveals that the voxels in the infarct region differ in onset time and in the possibility of recovery, and thereby discloses a more accurate determination of the infarct region.
  • the method of dividing the cerebral infarction provides a basis for quantitatively and objectively determining each voxel about a region that may survive the overestimated infarct region.
  • FIG. 14 is a diagram for explaining examples of brain images generated by MRI.
  • Brain images are generated by MRI.
  • Brain images may include a DWI image, an ADC image, a PWI image, a FLAIR image, a T1 image, a T2 image, and the like.
  • the brain image is not limited to the MRI, but may include a CT image or other medical images.
  • the brain images are generated using pulse sequences combined according to the needs of each image. Therefore, the brain images may have different modalities.
  • the ADC is an image generated by calculation based on the DWI.
  • TR repetition time
  • TE time to echo
  • TR and TE can be controlled by the examiner. By appropriately adjusting TR and TE, T1 or T2 images can be obtained for clinical applications.
  • Diffusion weighted imaging is an image that maps the diffusion motion of molecules, particularly water molecules, in living tissue.
  • the diffusion of water molecules in tissues is not free.
  • DWI reflects the interaction of water molecules with fibrous tissue or membranes. Therefore, the diffusion pattern of the water molecules indicates the normal or abnormal state of the tissue.
  • DWI may well indicate the normal and abnormal states of the fiber structure of the white matter or gray matter of the brain.
  • The diffusion coefficient is a function of temperature; however, because the body contains cell membranes and is not at a uniform temperature, the apparent diffusion coefficient (ADC) is instead calculated from the DWI. DWI and ADC have inverted contrast. In infarct zones, swelling of the cells restricts the diffusion of water outside the cells. A region with restricted diffusion loses little signal when the DWI is acquired at B1000, so the DWI image appears bright there. Conversely, regions with restricted diffusion appear darker than normal in the ADC. Freely diffusing water, such as cerebrospinal fluid (CSF), is bright on the ADC and dark on the DWI.
  • CSF cerebrospinal fluid
  • Perfusion weighted imaging is a perfusion image showing blood flow.
  • From the PWI, parameters such as blood volume, blood flow, mean transit time (MTT), and time to peak (TTP) may be obtained.
  • MTT mean transit time
  • TTP time to peak
  • the FLAIR image is an image that nulls the signal coming from the fluid.
  • FLAIR images are used to suppress the effects of cerebrospinal fluid in the images when acquiring brain images by MRI.
  • FLAIR images show the anatomy of the brain well. Depending on the tissue, a good choice of inversion time can suppress the signal from a particular tissue.
  • T1 and T2 images control TR and TE to emphasize T1 or T2 effects of specific tissues.
  • After the RF pulse, the protons of the tissue realign in the direction of the external magnetic field (B0) (the Z-axis direction) while releasing the absorbed energy to the surrounding tissue.
  • T1 is the time constant of the proton spin realignment along the longitudinal Z axis, that is, the time constant of the curve along which the Z-axis magnetization is restored.
  • T1 is referred to as the longitudinal axis relaxation time or spin-lattice relaxation time as the time constant of magnetization recovery.
  • the RF pulse is blocked, the XY component of magnetization collapses.
  • T2 is a time constant of the XY component decay curve of magnetization, and is called a lateral relaxation time or spin-spin relaxation time.
  • T1 and T2 are intrinsic values of tissue and have different values for water, solids, fats, and proteins.
  • lengthening TR decreases T1 effect.
  • shortening TR increases the T1 effect (contrast), i.e., obtains a T1-weighted image.
  • Shortening TE reduces the T2 effect, while increasing TE increases the T2 effect, i.e., obtains a T2-weighted image.
  • MRA images are images of the blood vessels taken by MRI using a contrast medium.
  • 15 is a diagram illustrating an example of a brain image segmented in a DWI brain image.
  • the infarct region 10 may be segmented in at least one of the acquired brain images (S21 of FIG. 13).
  • a peripheral region 20 (penumbra; see FIG. 16) of the infarct region may be divided in at least one brain image of the brain images.
  • The peripheral region is the area around or surrounding the infarct region; it is affected by the infarction and has a compromised blood supply.
  • The infarct region or the ischemic penumbra may be segmented through an image processing process (e.g., adaptive thresholding) in the above-described brain images.
  • FIG. 15(b) shows an inverted image of DWI B0. Thresholding with the B1000 image, of B0 and B1000, can help locate the brain better.
  • brain images including at least one brain image obtained by dividing the infarct region or the peripheral region are registered (S31).
  • the method of dividing the cerebral infarction region according to the present disclosure may be performed using one brain image such as DWI, and in this case, the matching process may not be included.
  • For example, the anatomical images (FLAIR or T1/T2), the DWI image in which the infarct region has been segmented or the ADC image, the PWI image, and the remaining brain images are registered to one another.
  • brain images by CT may be matched together.
  • In other words, two or more brain images having different modalities are registered.
  • the remaining brain images are matched based on the DWI image in which the infarct region is divided.
  • Rigid registration methods can be used for matching.
  • the brain has little motion, but in some cases, non-rigid registration may be used.
  • An atlas having template information of a brain image may be used as the basis of registration. An atlas is a standard representation of the brain built from the brain images of many individuals. A registration sketch follows.
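  • One way to perform the rigid registration described above is with SimpleITK; the following is a sketch under the assumption that the DWI in which the infarct was segmented serves as the fixed image, and it is not the specific implementation of the disclosure.

```python
import SimpleITK as sitk

def rigid_register(fixed_path, moving_path):
    """Rigidly register a moving brain image (e.g., FLAIR, PWI) to the
    fixed image (e.g., the DWI in which the infarct was segmented)."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    # Resample the moving image into the fixed image's space
    return sitk.Resample(moving, fixed, transform,
                         sitk.sitkLinear, 0.0, moving.GetPixelID())
```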
  • FIG. 16 is a diagram illustrating an example of a method in which a quantitative value set of each of the voxels of an infarct region is extracted.
  • The quantitative value set may include x, y, z location information for each voxel of the infarct region 10, intensity information in each of the plurality of brain images, and distance information from a reference point of the infarct region 10, such as its center of inertia, to each voxel.
  • the set of quantitative values may include some of the illustrated quantitative values (position, brightness, distance). Since the brightness information may change according to the occurrence time of the infarct region 10, the brightness information may also include time information on which brightness information is obtained.
  • the quantitative value is a value matched in a voxel unit, and a process of obtaining an average quantitative value of the infarct region 10 using the quantitative value in the voxel unit is not necessarily excluded.
  • the present disclosure determines whether recovery from infarction is possible for each voxel.
  • the method of distinguishing the cerebral infarction region according to the present disclosure focuses on having heterogeneity rather than treating the infarct region as homogeneous. Therefore, more detailed and precise determination is possible than the method of determining infarct area on average.
  • For example, the quantitative value set of each voxel in the infarct region may include the location of the voxel in the brain image (x, y, z), at least one of the voxel brightnesses DI_DWI, DI_ADC, DI_FLAIR, DI_T1 and DI_T2, and at least one of the distances D_DWI, D_ADC, D_FLAIR, D_T1 and D_T2 from the center (3) of the infarct region.
  • the location information may include a location (x, y, z) on a coordinate system that defines a space in the DWI, as shown in FIG. 16.
  • The brightness information of the voxel is matched across the brain images; for example, for a voxel of the infarct region segmented in the DWI, not only the DWI brightness (DI_DWI) but also the brightness of the corresponding voxel in the other images (DI_ADC, DI_FLAIR, DI_T1, DI_T2, etc.) may be matched. Since the voxel brightnesses (DI_DWI, DI_ADC, DI_FLAIR, DI_T1, DI_T2) of the infarct region in each brain image are considered to be related to the onset time of the infarct, it is preferable to extract this brightness information.
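  • The following is a minimal sketch of assembling such a per-voxel quantitative value set from registered volumes; the helper name, dictionary layout, and array arguments are assumptions for illustration.

```python
import numpy as np

def extract_quantitative_sets(infarct_mask, images, distance_map):
    """For every voxel in the infarct mask, collect (x, y, z), the
    brightness in each registered image, and the distance value.

    `images` is a dict such as {"DWI": dwi, "ADC": adc, "FLAIR": flair},
    with all arrays registered to the same space as `infarct_mask`.
    """
    coords = np.argwhere(infarct_mask)            # (N, 3) voxel indices
    sets = []
    for x, y, z in coords:
        entry = {"x": int(x), "y": int(y), "z": int(z),
                 "D": float(distance_map[x, y, z])}
        for name, img in images.items():
            entry["DI_" + name] = float(img[x, y, z])
        sets.append(entry)
    return sets
```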
  • 17 is a diagram illustrating an example of a process in which a brain image is filtered by a Gaussian low pass filter.
  • An image smoothing technique such as a Gaussian low-pass filter is applied to the brain image or the infarct region to reduce voxel-level noise.
  • That is, the signal-to-noise ratio (SNR) is raised using the information around each voxel.
  • As a result, the infarct region is blurred, as shown in FIG. 17(b).
  • the infarct region is imaged so that the brightness of a voxel is related to the surrounding information, thereby reducing errors caused by noise.
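  • A minimal sketch of this smoothing step using SciPy's Gaussian filter; the function name and the sigma value are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_volume(volume: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Gaussian low-pass filter: each voxel's brightness comes to reflect
    its neighbourhood, raising the effective SNR (cf. FIG. 17)."""
    return gaussian_filter(volume, sigma=sigma)
```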
  • The distance from the center of the infarct region (e.g., the center of inertia) to each voxel is also extracted.
  • the quantitative value set preferably also includes distance information.
  • a distance map of the infarct region may be used to extract distance information of the voxel.
  • the distance map may be generated based on the 3D image of the infarct region. Since the two-dimensional image in the desired direction can be obtained at any time from the three-dimensional infarct region image, the three-dimensional distance map can cover the two-dimensional distance map.
  • the generation of the distance map may include a process of extracting distance information from the boundary of the infarct region to each voxel for all the voxels of the infarct region.
  • various methods such as an Euclidean distance map, may be used as a method of generating the distance map.
  • the distance from the center of the infarct region to each voxel can be obtained from the distance information provided by the distance map.
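  • A minimal sketch of computing such distance information with SciPy; using the Euclidean distance transform and the center of mass as the reference point are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, center_of_mass

def infarct_distances(infarct_mask: np.ndarray):
    """Distance information for the infarct region.

    Returns (a) a Euclidean distance map giving, for each infarct voxel,
    its distance to the region boundary, and (b) the distance of each
    infarct voxel from the region's centre of inertia."""
    # Distance from each foreground voxel to the nearest background voxel
    boundary_dist = distance_transform_edt(infarct_mask)

    centre = np.array(center_of_mass(infarct_mask))
    coords = np.argwhere(infarct_mask)
    centre_dist = np.linalg.norm(coords - centre, axis=1)
    return boundary_dist, centre_dist
```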
  • FIG. 18 is a diagram illustrating an example of a change in brightness of an infarct region in a DWI image according to a time point of occurrence of infarction.
  • the brightness of the ADC is shown as the inverse of the brightness of the DWI.
  • the average brightness value of the infarct region may change with time. Since the average value is related to the brightness of each voxel, the brightness of each voxel may also change with time. The degree of change in brightness of the voxel in the infarct region in the DWI image depends on factors such as the location of the voxel, the age of the patient, the gender, and the like. In addition, since the brightness change of the voxel in the DWI image that changes with time shows a spectral distribution, uniformly evaluating the infarct region from a specific DWI brightness value may cause overestimation of the infarct region.
  • the average brightness of the infarct region divided by the threshold holding changes with time, and the volume of the infarct region may increase to the peripheral region.
  • Each voxel has a different infarct onset time, and even among voxels within the infarct region there may be voxels that cannot recover and voxels that can recover from the infarction.
  • The present disclosure therefore provides a method for estimating the possibility of infarct recovery on a voxel basis, focusing on the fact that the region segmented as infarct by such uniform thresholding is not homogeneous, and that there are differences between voxels in onset time and in the possibility of recovery (heterogeneity).
  • FIG. 19 is a diagram for explaining an example of a method of understanding a quantitative value set as a point in a multi-space relationship corresponding to an infarct occurrence time point
  • FIG. 20 is a diagram for explaining an example of a method of generating a classifier.
  • The quantitative value set may include brightness information from a plurality of brain images (DI_DWI, DI_FLAIR, DI_T1, DI_PWI), location information (x, y, z), and distance information (D) from the center.
  • Such a quantitative value set can be understood as a point in a multi-dimensional space, P1 = (P11, P12, P13, P14, P15, P16; e.g., DI_DWI, D, DI_FLAIR, DI_T1, (x, y, z), DI_PWI), corresponding to an infarct onset time onset1.
  • Such a correspondence relationship maps a quantitative value set to an onset time through the classifier, as shown in FIG. 20.
  • Such classifiers can be trained or learned to correspond, for example, to a set of quantitative values to the time point of infarction of the voxel. Classification of voxels by this classifier is based on differences in quantitative value sets.
  • Classifiers can create correspondences using, for example, statistical methods such as multiple regression or Support Vector Regression methods.
  • the classifier may curve-fit a relationship between a set of quantitative values and a time point of infarction of a voxel to prepare a correspondence relationship.
  • the accuracy and reliability of the correspondence relationship can be improved by training and learning to map the quantitative value set of voxels from the time of infarction to the time when the voxel is divided into infarct areas.
  • training and learning means that the correspondence is corrected, supplemented, or corrected as the incidence of infarct-quantitative value set data is continuously accumulated.
  • The classifier may also learn from quantitative value sets accumulated from voxels that recovered from infarction, and classify voxels into infarct-recoverable and non-recoverable groups.
  • The voxels recovered from infarction may be voxels extracted from brain images whose infarct onset time is unknown; by repeating the process of mapping the quantitative value sets of voxels to onset times, it can be evaluated how accurately the generated correspondence relationship estimates the onset time.
  • the validity of the correspondence relationship may be evaluated by matching the occurrence time of the infarct region with a set of quantitative values of the voxels known to the corresponding time.
  • 21 is a diagram for explaining an example of the correspondence relationship generated by the regression analysis.
  • multi-modality brain images may be obtained from patients whose timing of occurrence of infarct area is known (clear onset). From these multi-modality brain images, quantitative value sets of voxels in the infarct region can be accumulated. In addition, animal experiments can be used to accumulate a set of quantitative values that change over time from the time of infarction of voxels in the infarct region for the same subject.
  • A correspondence relationship of the form onset time = f(DI_DWI, D, DI_FLAIR, DI_T1, (x, y, z), DI_PWI, ...) can be defined.
  • the classifier may be trained to classify the occurrence time of the voxel, but may directly classify the quantitative value set of the voxel into the infarct-recovered and unrecovered voxels.
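  • A minimal sketch using scikit-learn's Support Vector Regression, one of the techniques named above, to learn the correspondence between quantitative value sets and onset times; the feature layout, hyperparameters, and the cutoff-based recoverability rule are illustrative assumptions, not the disclosure's own rule.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_onset_regressor(quant_sets: np.ndarray, onset_hours: np.ndarray):
    """quant_sets: (N, F) array of per-voxel quantitative value sets
    (e.g., DI_DWI, D, DI_FLAIR, DI_T1, x, y, z, DI_PWI);
    onset_hours: (N,) known infarct onset times used for training."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(quant_sets, onset_hours)
    return model

def classify_recoverable(model, quant_sets, cutoff_hours=4.5):
    """Label voxels whose estimated onset time is within the cutoff
    as potentially recoverable (illustrative rule only)."""
    estimated = model.predict(quant_sets)
    return estimated <= cutoff_hours
```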
  • FIG. 22 illustrates a reduced infarct region excluding infarct recoverable voxels.
  • the classifier may classify the time points of infarction of each voxel or classify the infarct recoverable voxel and the unrecoverable voxel.
  • the estimated infarct region 30 may be smaller than the initially divided infarct region 10.
  • the overestimated infarct region 20 may be more accurately evaluated by estimating infarct recovery on a voxel basis, thereby providing more objective and quantitative information about the PWI-DWI mismatch.
  • The processes described with reference to FIGS. 13 to 22 may be performed automatically by an application written for each process, or may be performed together with a user interface.
  • FIGS. 23 to 36 are diagrams for explaining examples of a method for automatically analyzing a stroke according to the present disclosure. For a patient arriving at an emergency room, the current standard is to perform thrombolysis when the stroke onset time is within four and a half hours. However, in most strokes the onset time is unclear and the causes are various, so apart from the criterion of being within four and a half hours of onset there is little other evidence on which to base the decision.
  • stroke onset time stroke occurrence or onset time
  • The examples of the method for automatically analyzing a stroke disclosed in FIGS. 23 to 36 do not provide therapy directly, but provide information that specialists and non-specialists alike can refer to. For example, whether or not the stroke onset time is within four and a half hours, the brain images can be used to determine the proportions of the infarct region and the penumbra and which areas of the brain they occupy. If there is no mismatch between the infarct region and the peripheral region within four and a half hours, thrombolysis may not be performed; conversely, even beyond four and a half hours, the method can provide a standard for quantitative judgment when the mismatch is large.
  • The examples to be described provide a method of analyzing not only the ratio of the infarct region to the peripheral region but also the areas of the brain where the infarct region occurs, and are characterized by automating the analysis process.
  • The automatic stroke analysis method according to the present example can also be used as a quantitative tool for analyzing patients who have undergone thrombolysis and those who have not, as well as the prognosis and adverse events in each case.
  • FIG. 23 is a conceptual diagram illustrating the characteristics of the automatic stroke analysis method.
  • various images are generated while acquiring brain images by MRI.
  • An ADC map is created from the DWI, the ADC is thresholded to segment the infarct region, and false positives are removed. Histogram matching may also be applied to the segmentation of the infarct region in addition to the thresholding.
  • FLAIR fluid-attenuated inversion recovery
  • the FLAIR image is an image that nulls the signal coming from the fluid.
  • FLAIR images are used to suppress the effects of cerebrospinal fluid in the images when acquiring brain images by MRI.
  • FLAIR images show the anatomy of the brain well. Depending on the tissue, a good choice of inversion time can suppress the signal from a particular tissue.
  • This example provides information by integrating these images (DWI, PWI, FLAIR, etc.).
  • template images of the brain with atlas information are also integrated.
  • the size of the infarct and surrounding areas is also important, but it is more important where the infarct area of the brain occurs.
  • the template image is used to generate information about this.
  • The template image is a standard brain whose regions are divided according to the atlas information; for example, areas such as the hippocampus, amygdala, thalamus, white matter, and gray matter are defined as standard regions in the brain image.
  • Such atlas-based template images are used because brains vary from person to person, but not greatly.
  • When the infarct mask in which the infarct region is segmented is overlaid on the template image of the brain, information is generated on which part of the brain is affected and to what extent. For example, when the brain template image is registered to the patient's brain image, it is possible to determine which part of the brain the infarct region segmented in the patient's image belongs to and how large it is. Similarly, the PWI in which the peripheral region is segmented may also be registered with the template image to determine which part of the brain the peripheral region corresponds to and how large it is. In addition, by overlapping the images in this way, information about the DWI-PWI mismatch can be obtained. In particular, when FLAIR is also overlaid, additional structural information of the brain can be added, and damage or problems that already existed before the infarction in the infarct region or the peripheral region can be detected from FLAIR. A sketch of such region-wise tabulation follows.
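  • A minimal sketch of tabulating how much of the infarct mask falls into each atlas region once the mask and the labeled template are in the same space; the integer label convention and the voxel volume are illustrative assumptions.

```python
import numpy as np

def infarct_by_region(infarct_mask, atlas_labels, voxel_volume_ml=0.001):
    """Return {atlas label: infarct volume in ml} for an infarct mask and
    an integer-labeled atlas registered to the same space (label 0 = background)."""
    labels = atlas_labels[infarct_mask > 0]
    counts = np.bincount(labels.ravel())
    return {int(lab): float(n * voxel_volume_ml)
            for lab, n in enumerate(counts) if lab != 0 and n > 0}
```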
  • FIG. 24 is a diagram illustrating an example of an automatic analysis method of stroke according to the present disclosure.
  • In an example of segmenting the infarct region from the DWI, the DICOM data is read and histogram matching is performed to segment the infarct region.
  • the volume of the divided infarct area is measured. After that, the FROC optimization process is performed.
  • An example of segmenting the peripheral region from the PWI is to read the DICOM data, check the phases, and perform realignment. Then, spatial and temporal smoothing and parallelized gamma fitting are performed, the TTP is calculated to find the MCA, and the AIF is determined. Thereafter, deconvolution is performed to calculate MTT, CBF, CBV, and Tmax, from which the penumbra is segmented.
  • The analysis then involves evaluating the DWI-PWI mismatch and merging the result with FLAIR.
  • DWI, PWI, and FLAIR can be integrated into the Atlas template, controlled, and visually displayed.
  • FIG. 25 is a diagram for explaining an example of an atlas template.
  • A template is a previously developed standard image used to bring individually different brains into a common space. This example provides a method for automatically analyzing strokes using four inputs: DWI, PWI, FLAIR, and a standard template.
  • The patient's brain is segmented with reference to the standard template, in which regions such as the location of the visual cortex are already divided (template segmentation).
  • image types to be used include DWI, PWI, and FLAIR.
  • T1 images may be used.
  • infarct areas or peripheral areas may be analyzed for each part of the brain in each image. Matching may be used to transfer from template to DWI, PWI, FLAIR, or vice versa.
  • The patient's brain is segmented with the atlas labeling intact, and the labels are then warped back. It is then possible to see where in the brain (atlas) the infarct region is located.
  • the brain regions corresponding to the list may be labeled as the cursor moves on the display.
  • FIG. 26 is a view for explaining a feature of a method of dividing an infarct region from a DWI.
  • A template for histogram matching is generated, the DICOM b0 and b1000 images are read, and the ADC is computed; histogram matching is then performed. After that, false positives are removed using the ADC map (false-positive reduction based on ADC), FROC analysis is performed, and the final accuracy and false-positive rate are checked.
  • FIG. 27 is a view for explaining the characteristics of the method of segmenting the infarct region from the DWI. Acute patient data using DWI b0 and b1000 are acquired to prepare a gold standard.
  • the upper and middle figures of FIG. 27 are examples of the gold standard, in which a neurologist selects an infarct region. This can be used as a criterion for verification.
  • the lower figure of FIG. 27 shows the result of performing Volumetry analysis.
  • For the segmentation itself, various methods may be applied; for example, it is possible to perform histogram matching, thresholding, or to use an ADC map together with the DWI.
  • FIGS. 28 to 30 are diagrams for explaining characteristics of a method of dividing a peripheral area from a PWI.
  • the PWIs of the brain are aligned over time and smoothed with respect to time and space.
  • The coefficients A, a, and B are obtained through the gamma curve fitting; the normalized root-mean-square error (NRMSE) of the gamma curve fitting is about 2.67% over 11,118 voxels.
  • NRMSE normalized root-mean-square error
  • AIF arterial input function
  • AIF is required to implement various analysis maps (MTT, CBF, CBV, Tmax, etc.).
  • MCA middle cerebral artery
  • The upper-left figure of FIG. 29 shows an example of a gamma curve fit, with black open circles indicating values (y-axis) reflecting the concentration of contrast agent over time (x-axis) seen at a given voxel in the PWI.
  • the solid curve is a gamma curve fitted to the PWI.
  • From graphs such as those in the upper left of FIG. 29, the MCA is located and the AIF is determined.
  • TTP time to peak
  • MTT mean transit time
  • CBF cerebral blood flow
  • CBV cerebral blood volume
  • TTP time to peak
  • Deconvolution can be viewed as a kind of division, with the AIF as the reference.
  • To obtain the AIF, the process of determining the aforementioned MCA is performed first.
  • The TTP computed after deconvolution is Tmax.
  • A separate curve is generated for each voxel depending on how delayed or spread out its curve is compared with the curve (profile) representing the best blood flow. In other words, deconvolving a poorly supplied curve with the best curve yields a distinct curve whose peak lies between the best and the poor curves, and from these curves the delay relative to the best curve can be measured, as sketched below.
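  • A minimal sketch of the per-voxel gamma-variate fit and the TTP it yields; the gamma-variate form, initial guesses, and parameter names are common assumptions for illustration, not the parallelized implementation of the disclosure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate model of the contrast concentration curve."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt ** alpha * np.exp(-dt / beta)

def fit_voxel_curve(t, concentration):
    """Fit one voxel's concentration-time curve and return (params, TTP)."""
    p0 = [concentration.max(), t[np.argmax(concentration)] * 0.5, 2.0, 2.0]
    params, _ = curve_fit(gamma_variate, t, concentration, p0=p0, maxfev=5000)
    A, t0, alpha, beta = params
    ttp = t0 + alpha * beta   # analytic peak time of the fitted gamma variate
    return params, ttp
```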
  • FIGS. 31 to 34 are diagrams illustrating the analysis results of FIGS. 28 to 30; the peripheral region may be segmented as the region where the arrival of blood is delayed as a result of the analysis.
  • the infarct region is shown here, and as shown in FIG. 31, the region-specific PWI over time can be analyzed.
  • FIG. 32 shows images over time from right to left; in the area indicated by the circle, the contrast medium is clearly visible as an intensity difference.
  • In a region where the blood supply is delayed, this intensity difference is weak and appears slowly over time.
  • FLAIR shows the brain structure better. Parts of the brain that were already damaged before do not need to be treated; for example, if a region was already damaged five or six years earlier, the already damaged part does not need to be treated, and FLAIR can be used to make that judgment.
  • FIG. 35 is a view for explaining an example of automatically showing an analysis result incorporating DWI-PWI-FLAIR, a template.
  • an infarct region and a peripheral region are shown, and a pie chart shows a ratio between them.
  • An example of DWI-PWI mismatch analysis is to segment, from the PWI analysis images, the regions where the blood supply is poor but the tissue is still alive (ischemic penumbra), and to segment, from the DWI analysis, the regions of damaged brain tissue (ischemic infarct). Comparing the DWI and the PWI, the extent of the ischemic penumbra is calculated, as sketched after this discussion.
  • If the DWI-PWI mismatch is large, the blockage should be opened quickly by thrombolysis, and the method provides quantitative support for this decision.
  • Even if the peripheral region is small, if the affected area is functionally very important to the patient, information that is provided by the template segmentation, it may still be better to perform the procedure. The method automatically provides quantitative information to help make this decision.
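  • A minimal sketch of quantifying the DWI-PWI mismatch from the two masks; the mask names and voxel volume are assumptions, and no decision threshold is implied.

```python
import numpy as np

def mismatch_summary(infarct_mask, penumbra_mask, voxel_volume_ml=0.001):
    """Volumes of the DWI infarct core, the PWI penumbra, and the
    mismatch region (penumbra outside the core), plus their ratio."""
    core_ml = infarct_mask.sum() * voxel_volume_ml
    penumbra_ml = penumbra_mask.sum() * voxel_volume_ml
    mismatch_ml = np.logical_and(penumbra_mask, ~infarct_mask).sum() * voxel_volume_ml
    ratio = penumbra_ml / core_ml if core_ml > 0 else float("inf")
    return {"core_ml": core_ml, "penumbra_ml": penumbra_ml,
            "mismatch_ml": mismatch_ml, "pwi_dwi_ratio": ratio}
```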
  • 36 is a diagram illustrating the aforementioned methods as an example of a user interface, and is an example of a GUI for analyzing a stroke image biomarker. It can be designed to reflect the needs of users and can be continuously improved.
  • FIG. 37 is a diagram illustrating an example of a method of extracting a representative image from a medical image according to the present disclosure.
  • In this method, a location of interest is first selected in the medical image (S21). Thereafter, the medical image is processed according to the image processing instruction for the type of lesion, and a region of interest including the location of interest is segmented (S31). Next, a representative image is generated by applying a safety margin to the segmented region of interest (S41).
  • The type of lesion may be selected before the location of interest is selected. Once the type of lesion is selected, the medical image can be used to execute a learning routine of a classifier for that type of lesion.
  • the location of interest may be selected by specifying a point or specific location on the medical image via the user interface.
  • the image processing instruction according to the type of the lesion may include an image processing condition of dividing or threshold-holding the lesion.
  • the representative image may be generated by giving a certain safety margin to the region of interest so that the extracted region of interest is included in the representative image.
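  • A minimal sketch of generating a representative image by cropping the segmented region of interest with a safety margin; the function name and margin size are illustrative assumptions.

```python
import numpy as np

def crop_with_margin(image, roi_mask, margin=10):
    """Crop `image` to the bounding box of `roi_mask`, expanded by
    `margin` pixels on every side (clipped to the image bounds)."""
    coords = np.argwhere(roi_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, image.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return image[slices]
```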
  • For retrieval and management, at least one of patient information including the sex and age of the individual, study information related to the organ or part of the body, series information, and the type of lesion may be matched to the representative image during its generation.
  • the representative image may be stored hierarchically. The user can use the matched information to search and manage the representative image.
  • The representative image includes the characteristic portion of the medical image, obtained through image processing according to the lesion, that is, a feature of the medical image. Therefore, a doctor viewing a plurality of representative images at once can easily browse the characteristics of the medical images, and the images can easily be stored, searched, and managed hierarchically by patient, study, and series.
  • The process of generating the representative image may be performed automatically by a computer based on input through a simple user interface, and it is therefore convenient to generate representative images in fields that handle a plurality of medical images.
  • FIG. 38 is a diagram illustrating an example of a process of selecting a lesion through a pop-up window.
  • the type of lesion may be selected through a pop-up window before the location of interest is selected in the medical image.
  • the method of extracting the representative image from the medical image according to the present example may be applied to various medical images for lung, heart, brain, kidney, liver, etc., and the subject is not particularly limited.
  • The method of extracting the representative image from the medical image according to the present example may be performed by software, and has the feature of turning the most salient feature of the medical image, according to the type of lesion, into the representative image. Therefore, it is preferable to include the process of selecting the type of lesion through the pop-up window, as shown in FIG. 38.
  • the pop-up window may include an add button and a delete button of the lesion or the medical image.
  • FIG. 39 illustrates an example of a method of selecting a view in a 3D medical image.
  • the location of interest is selected in the medical image (S21).
  • The selection of the location of interest may be done after the type of lesion has been selected, as described above, or without any lesion having been selected. When the type of lesion is selected, however, an image preprocessing step may be performed to assist in selecting the location of interest, either through information about the medical image or visually.
  • the location of interest may be selected from the 2D medical image, but may include a process of selecting a view by rotating the 3D medical image.
  • the image acquired by the medical imaging apparatus such as CT and MRI may be 3D volume data 10.
  • the medical image is extracted from the 3D volume data 10 as needed.
  • A medical image may be extracted from the 3D volume data by projecting the surface of the 3D image according to views 1 and 3 (viewing directions).
  • a medical image may be generated in various views using 3D volume rendering, surface rendering, MIP / MinIP, RaySum, and Virtual Endoscopy.
  • the two-dimensional cross section 20 may be extracted from the three-dimensional volume data.
  • Such views include, for example, representative orientations of the body such as axial, sagittal and coronal.
  • The extracted medical image is basically 2D, but 3D images (e.g., 30 and 40) are also included.
  • Two-dimensional medical images may also be combined to generate three-dimensional, four-dimensional, or five-dimensional medical images.
  • the method of extracting a medical image may directly extract a portion of the 3D volume data including the lesion 15 from the 3D volume data 10.
  • Generating a 3D representative image (e.g., a thumbnail) from the extracted 3D volume data is also included in the scope of the present disclosure.
  • the medical image may be used to allow a doctor or user to designate a specific location or point of interest. This is further described below.
  • FIG. 40 is a diagram illustrating an example of an image of a lung.
  • the location of interest is selected in the medical image obtained as described above.
  • The location of interest may be a specific location (e.g., the center) of a lung disease such as a tumor, emphysema, honeycombing, GGO, or micro-calcification shown in a lung image as in FIG. 40.
  • This selection of the location of interest may be specified by the physician via an interface means such as a mouse.
  • the lesions are divided and visualized in the lung image so that the doctor or user can conveniently select the location of interest.
  • a method of automatically classifying lung diseases using the classifier disclosed in Korean Patent Publication No. 998630 may be used.
  • The classifier's learning routine may be performed for diseases of the same kind using the medical image.
  • Lung diseases may be displayed on the lung image shown in FIG. 40 by using the automatic classifier.
  • the location of interest may be selected by a doctor designating a center or a specific location of a predetermined area of the lesions thus divided and displayed.
  • the computer may automatically select a specific location of the lesion.
  • the distance map may be used to establish a condition, such as distance from the boundary of the lesion, to capture points within the lesion. It will then be possible for the doctor or user to verify the automatically selected location of interest.
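  • A minimal sketch of the distance-map idea mentioned above, assuming SciPy's Euclidean distance transform; the helper name and the minimum-depth condition are illustrative assumptions.

```python
# Hedged sketch: using a distance map to pick a candidate location of interest
# well inside a segmented lesion (e.g., at least `min_dist` voxels from the boundary).
import numpy as np
from scipy.ndimage import distance_transform_edt

def interior_point(lesion_mask: np.ndarray, min_dist: float = 2.0):
    """Return the voxel index deepest inside the lesion, or None if the lesion is too shallow."""
    dist = distance_transform_edt(lesion_mask)            # distance to the lesion boundary
    idx = np.unravel_index(np.argmax(dist), dist.shape)   # deepest interior voxel
    return idx if dist[idx] >= min_dist else None

mask = np.zeros((64, 64), dtype=bool)
mask[20:35, 25:45] = True
print(interior_point(mask))  # roughly the centre of the rectangular toy lesion
```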
  • the method of extracting the representative image from the medical image according to the present example does not necessarily include the process of classifying the lesion by the automatic classifier.
  • FIG. 41 is a diagram illustrating an example of a process of generating a thumbnail.
  • the location of interest 61 is designated in the medical image 50.
  • the medical image 50 may be a portion of 3D volume data directly extracted to include the lesion 15 from the medical images 20, 30, and 40 or the 3D volume data 10 described with reference to FIG. 39.
  • the region of interest 70 may be extracted by image processing the medical image based on the location of interest 61 by an image processing instruction according to the type of lesion (S31).
  • The method of extracting a representative image from a medical image according to the present example is sensitive to the case, that is, to the type of lesion.
  • The method of extracting a representative image from a medical image image-processes a predetermined region including the location of interest 61, using an image processing instruction that is preset or set according to the type of lesion, and thereby extracts the region of interest 70 to be included in the representative image.
  • The image processing instruction may include conditions (e.g., a thresholding method, a threshold value, a lesion-size filtering condition, and the like) for segmenting the lesion using the location of interest 61 as a seed.
  • The region of interest 70 is segmented according to an image processing instruction for thresholding a predetermined region including the location of interest 61.
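  • The following is a hedged sketch of such a seed-based instruction: intensities within a tolerance of the seed value are thresholded, and only the connected component containing the seed is kept. The tolerance, the function name, and the toy image are assumptions for illustration.

```python
# Illustrative sketch of a seed-based image processing instruction: threshold around the
# seed intensity, then keep only the connected component containing the seed.
import numpy as np
from scipy.ndimage import label

def segment_from_seed(image: np.ndarray, seed: tuple, tol: float = 100.0) -> np.ndarray:
    seed_val = image[seed]
    candidate = np.abs(image - seed_val) <= tol        # thresholding condition
    labels, _ = label(candidate)                       # connected components
    return labels == labels[seed]                      # region grown from the seed

img = np.full((64, 64), -800.0)                        # background (e.g., lung parenchyma in HU)
img[25:40, 25:40] = 20.0                               # bright lesion-like blob
roi = segment_from_seed(img, seed=(30, 30))
print(roi.sum())                                       # 225 pixels in the grown region
```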
  • Next, a representative image (e.g., a thumbnail) is generated. In this example, the region of interest is segmented by tumor segmentation, as shown in FIG. 41(c).
  • a thumbnail is generated as shown in FIG. 41 (d) by giving a safety margin 75.
  • A thumbnail as shown in FIG. 41(e) may also be created so that the region with the safety margin 75 fits within the box 80.
  • The image processing instruction may be received in advance together with a segmentation method (e.g., an adaptive threshold) or other conditions. Granting the safety margin 75 means that some of the periphery of the region of interest is included in the thumbnail, for example so that the segmented region of interest 70 is fully contained.
  • If the size of the segmented lesion is larger than the size of the thumbnail (for example, FIG. 41(d) or 41(e)), it is also possible to enlarge the thumbnail to include the entire segmented lesion; in special cases, such as when the thumbnail size is fixed, only part of the lesion may be included in the thumbnail.
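  • A rough sketch of generating a thumbnail around the segmented region of interest with a safety margin, under the assumption that the margin is expressed in pixels; the names and the margin value are illustrative.

```python
# Rough sketch of step S41: crop a thumbnail around the segmented region of interest,
# expanded by a safety margin (in pixels).
import numpy as np

def thumbnail_with_margin(image: np.ndarray, roi_mask: np.ndarray, margin: int = 5) -> np.ndarray:
    ys, xs = np.where(roi_mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]                          # box enclosing the ROI plus the safety margin

img = np.random.rand(128, 128)
mask = np.zeros((128, 128), dtype=bool)
mask[40:60, 50:80] = True
print(thumbnail_with_margin(img, mask).shape)           # (30, 40) when margin = 5
```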
  • Alternatively, the region of interest including the lesion may be segmented according to an image processing instruction such as thresholding at -950 HU or less, and a thumbnail secured with the safety margin 75, as shown in FIG. 41(c), may be generated.
  • The image processing instruction may also include texture-based segmentation. For example, a region having a honeycomb pattern around the location of interest may be segmented by classification using the automatic classifier for lung diseases described above. After that, a thumbnail with a safety margin is generated.
  • The learning routine of the automatic classifier can also be performed here. For example, the classifier can be trained by selecting multiple regions of interest, or thumbnails of the same kind, and indicating that they represent the same disease. This learning can then be used for searching.
  • The image processing instruction may also be configured so that a normal portion is extracted as the region of interest.
  • FIG. 42 is a diagram for explaining an example of a method of capturing an image when thumbnails are generated.
  • the representative image may be generated as shown in, for example, FIG. 42 (a) or 42 (b).
  • a popup window for generating a representative image is displayed.
  • In the line method shown in FIG. 42(a), a center is taken in a medical image such as an MPR, VR, or MIP image and a boundary is set by dragging; a thumbnail is captured from the ROI circumscribing the resulting circle.
  • In the rect method shown in FIG. 42(b), the start and end points can be dragged directly on an image such as an MPR, VR, or MIP image.
  • The process of taking the center and dragging, or of dragging the start and end points directly on the image, can be performed by the computer's internal calculation; of course, it can also be performed through the user interface.
  • the full method can generate full ROI thumbnails by picking on medical images.
  • The cine method stacks a plurality of extracted regions of interest and generates a thumbnail that is animated over time. In this case, the images to be included should be selected.
  • the thumbnail can be generated by giving the start angle, the end angle, and the interval angle, and it may be necessary to query each angle.
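  • The angle-driven cine capture described above could be sketched roughly as follows, assuming that rotating a 2D slice stands in for re-rendering the 3D view at each angle; the function name and parameters are illustrative assumptions.

```python
# Hedged sketch of the cine-style thumbnail: given a start angle, end angle, and interval
# angle, a stack of rotated captures of the ROI is produced, which can then be played as an
# animated thumbnail.
import numpy as np
from scipy.ndimage import rotate

def cine_thumbnails(roi_image: np.ndarray, start_deg: float, end_deg: float, step_deg: float):
    angles = np.arange(start_deg, end_deg + 1e-6, step_deg)
    frames = [rotate(roi_image, angle=a, reshape=False, order=1) for a in angles]
    return np.stack(frames)                                # shape: (n_frames, H, W)

roi = np.zeros((64, 64)); roi[20:44, 28:36] = 1.0          # elongated toy ROI
frames = cine_thumbnails(roi, start_deg=0, end_deg=90, step_deg=15)
print(frames.shape)                                        # (7, 64, 64)
```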
  • FIG. 43 is a view for explaining an example of an image that changes with time.
  • the medical image may need to show a change over time, as described above. As illustrated in FIG. 43, the medical image may change along the time axis.
  • In this example, a plurality of representative images may be generated at time intervals and turned into a thumbnail that changes over time.
  • A changing or moving thumbnail may be generated by segmenting the region of interest multiple times, at different times or under different conditions, in a medical image that changes according to additional conditions including at least one of time and drug administration.
  • FIG. 44 is a view for explaining an example of the manner in which thumbnails are stored and retrieved.
  • At least one of patient information, study information, and series information may be matched to the representative image for search and management.
  • the representative image is hierarchically stored according to the matched patient information, study information, and series information.
  • For example, as shown in FIGS. 44A and 44B, if a particular patient is selected, then a particular study, and then a specific series, the corresponding representative images are displayed, and a doctor or user can visually and intuitively obtain critical information about the medical images from them and easily search and manage them. In addition, it is easy to browse the representative images for each patient, study, and series, and, if necessary, a search using text information (patient, study, series, etc.) is possible.
  • the gray scale of the representative image may be changed to view another lesion using the representative image.
  • A medical image management browser for generating, storing, retrieving, and managing representative images has a function of adjusting the WWL (window width/level), so that even if a representative image was generated with a lung-setting WWL, calcification can still be viewed. The WWL can be adjusted using predetermined WWL presets while maintaining the original density depth (usually 12 bits/voxel) of the original image.
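  • The WWL adjustment can be illustrated with the following sketch, which maps raw 12-bit intensities to display values without altering the stored data; the preset width/level values are common conventions, not values taken from the disclosure.

```python
# Hedged sketch of window width/level (WWL) adjustment: the stored voxel data are kept,
# and only the display mapping changes.
import numpy as np

def apply_wwl(image: np.ndarray, window_width: float, window_level: float) -> np.ndarray:
    """Map raw intensities (e.g., HU) to 0-255 for display with the given WWL."""
    low = window_level - window_width / 2.0
    scaled = (image - low) / window_width
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

ct = np.random.randint(-1024, 2000, size=(128, 128)).astype(np.float32)
lung_view = apply_wwl(ct, window_width=1500, window_level=-600)   # lung setting
bone_view = apply_wwl(ct, window_width=2000, window_level=300)    # shows calcification better
print(lung_view.dtype, bone_view.max())
```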
  • the method of extracting a representative image from a medical image according to the present disclosure may be executed by a medical image manager program that generates, stores, retrieves, and manages the representative image.
  • A segmentation method of cerebral infarction regions comprises: generating a template histogram of brain images using a plurality of brain images; matching the histogram of the brain image to be segmented with the template histogram; and dividing the cerebral infarction region in the matched histogram based on a threshold value selected based on the template histogram.
  • (2) Generating the template histogram includes: matching the other histograms to any one of the plurality of DWI brain image histograms; and calculating an average of the matched histograms.
  • (3) Matching with the template histogram may include modifying the intensity range of the histogram of the brain image to be equal to the intensity range of the template histogram while maintaining its probability distribution.
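  • One possible reading of this matching step is a linear rescaling of the brain image's intensity range onto the range of the template histogram, which preserves the shape of its distribution; the sketch below follows that reading and is not the exact patented procedure.

```python
# Minimal sketch: rescale an image's intensity range onto the template histogram's range
# while preserving the shape of the intensity distribution.
import numpy as np

def match_intensity_range(image: np.ndarray, template_min: float, template_max: float) -> np.ndarray:
    lo, hi = float(image.min()), float(image.max())
    scaled = (image - lo) / (hi - lo)                     # normalise to [0, 1]
    return scaled * (template_max - template_min) + template_min

dwi = np.random.gamma(shape=2.0, scale=150.0, size=(64, 64))
matched = match_intensity_range(dwi, template_min=0.0, template_max=4095.0)
print(matched.min(), matched.max())                       # 0.0 4095.0
```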
  • (4) Generating the template histogram includes: matching the other histograms to any one of the plurality of DWI brain image histograms; obtaining a mean value and a standard deviation of the intensities of pixels in the cerebral infarction region for each of the matched plurality of DWI brain images; and obtaining a mean value and a standard deviation of the intensities of pixels in regions other than the cerebral infarction region for each of the matched plurality of DWI brain images.
  • (5) Dividing the cerebral infarction region includes selecting a threshold value in consideration of the standard deviation of each of the plurality of DWI brain images, so that the cerebral infarction region is segmented with a user-set confidence level.
  • (6) Dividing the cerebral infarction region may further comprise excluding, from the regions segmented as the cerebral infarction region by the threshold value, regions that are not actually cerebral infarction regions.
  • The process of selecting the threshold value includes: obtaining a mean value and standard deviation of the intensity of pixels in the ventricle region for each of the plurality of DWI brain images; and modifying the threshold value in consideration of the mean intensity value of the ventricle region.
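  • A hedged sketch of one way such a threshold could be selected: take the non-infarct mean plus a multiple of its standard deviation, with the multiplier derived from the user-set confidence level, and then adjust for the ventricle mean. The exact rule in the disclosure may differ, and all numbers below are placeholders.

```python
# Hedged sketch of threshold selection from template statistics.
from statistics import NormalDist

def select_threshold(noninfarct_mean, noninfarct_std, ventricle_mean, confidence=0.95):
    k = NormalDist().inv_cdf(confidence)          # e.g., ~1.645 for a 95 % confidence level
    threshold = noninfarct_mean + k * noninfarct_std
    # Ventricles can appear bright on some sequences (e.g., B0); keep the threshold above their mean.
    return max(threshold, ventricle_mean)

print(select_threshold(noninfarct_mean=300.0, noninfarct_std=80.0,
                       ventricle_mean=380.0, confidence=0.95))
```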
  • The infarct region may be extracted by applying the template histogram to a DWI image (e.g., the B1000 or B0 image).
  • The infarct region may also be extracted by applying a template histogram to the ADC image, and the results may be considered together (e.g., B1000, B0, and ADC images).
  • It is thus also possible to find the true infarct region (e.g., the region where the candidate infarct regions overlap).
  • In the segmentation method of the infarction region, the histogram may be generated based on the DWI B1000 image, and the ventricle region may be segmented based on the DWI B0 image.
  • A method of distinguishing cerebral infarct regions included in a brain image comprises: extracting a quantitative value set including brightness information of each point in an infarct region included in at least one brain image; and estimating the likelihood of recovery from infarction of each point based on differences in the quantitative value sets between the points in the infarct region.
  • Extracting the quantitative value set includes: extracting the intensity of each point in the infarct region included in each of the plurality of brain images; and extracting, for at least one of the plurality of brain images, the distance from a reference point of the infarct region to each of the points.
  • The process of extracting the distance to each point includes: generating a distance map of the infarct region; and extracting the distance from the center of the infarct region to each point using the distance map.
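  • The distance feature described above could be computed roughly as below, assuming the deepest voxel of the distance map is taken as the center of the infarct region (an assumption, since the reference point is not fixed here).

```python
# Sketch of the distance feature: build a distance map of the infarct mask, take the
# deepest voxel as the "center", and measure each infarct voxel's distance from it.
import numpy as np
from scipy.ndimage import distance_transform_edt

def distances_from_center(infarct_mask: np.ndarray) -> np.ndarray:
    dist_map = distance_transform_edt(infarct_mask)
    center = np.unravel_index(np.argmax(dist_map), dist_map.shape)
    coords = np.argwhere(infarct_mask)                          # (N, ndim) voxel indices
    return np.linalg.norm(coords - np.array(center), axis=1)    # Euclidean distance per voxel

mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 12:22] = True
print(distances_from_center(mask).max())
```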
  • The classification is performed by a classifier that applies at least one method selected from the group consisting of multiple regression, support vector regression, and curve fitting to the quantitative value sets of points in infarct regions accumulated from brain images of different people whose incidences of infarction are known.
  • The plurality of brain images include two or more selected from the group consisting of a DWI image, an ADC image, a PWI image, a FLAIR image, a T1 image, and a T2 image obtained by MRI, and, before the quantitative value set is extracted, the method includes: segmenting the infarct region using the DWI image or the ADC image; segmenting the penumbra around the infarct region using the PWI image; and registering the brain images, including the DWI image or ADC image in which the infarct region is segmented and the PWI image in which the peripheral region is segmented.
  • The quantitative value set of a point comprises at least one of: the position (x, y, z) of the point in the brain image; the brightness of the point (DI_DWI, DI_PWI, DI_ADC, DI_FLAIR, DI_T1, DI_T2); and the distance from the center of the infarct region (D_DWI, D_PWI, D_ADC, D_FLAIR, D_T1, D_T2).
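  • As an illustration of such a classifier, the sketch below fits a support vector regression model on synthetic per-point quantitative value sets; the feature layout, target, and hyperparameters are placeholders rather than the trained model of the disclosure.

```python
# Hedged sketch: support vector regression on per-voxel quantitative value sets
# (position, per-sequence brightness, distance from infarct center).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)
# Columns: x, y, z, DI_DWI, DI_ADC, DI_FLAIR, D_DWI (distance from infarct center)
X = rng.random((500, 7))
y = 1.0 - X[:, 6] + 0.1 * rng.standard_normal(500)   # toy target: nearer the center, lower recovery

model = SVR(kernel="rbf", C=1.0, epsilon=0.05).fit(X, y)
new_voxels = rng.random((3, 7))
print(model.predict(new_voxels))                      # estimated recovery-related score per voxel
```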
  • A method for automatically analyzing stroke comprises: automatically segmenting the infarct region using DWI; automatically segmenting the penumbra using PWI; registering the DWI image in which the infarct region has been segmented and the PWI image in which the peripheral region has been segmented with a template brain image; and analyzing the DWI-PWI mismatch in the registered images.
  • Automatic segmentation of the infarct using DWI includes: generating a template for histogram matching; creating an ADC map by reading the DICOM b0 and b1000 images; performing histogram matching; removing noise from the ADC map (false-positive reduction based on ADC); and performing an FROC analysis to check the final accuracy and the false positives.
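  • The "creating an ADC" step can be illustrated with the standard mono-exponential relation S_b = S_0 · exp(−b · ADC), i.e., ADC = −ln(S_b1000 / S_b0) / 1000; the sketch below assumes the b0 and b1000 pixel arrays have already been read from DICOM (e.g., with pydicom), which is omitted.

```python
# Sketch of computing an ADC map from b0 and b1000 pixel arrays.
import numpy as np

def compute_adc(b0: np.ndarray, b1000: np.ndarray, b_value: float = 1000.0) -> np.ndarray:
    eps = 1e-6                                         # avoid division by zero / log(0)
    ratio = np.clip(b1000 / np.maximum(b0, eps), eps, None)
    return -np.log(ratio) / b_value                    # ADC in mm^2/s if b is in s/mm^2

b0 = np.random.uniform(200, 1200, size=(64, 64))
b1000 = b0 * np.exp(-1000.0 * np.random.uniform(0.0003, 0.003, size=(64, 64)))
adc = compute_adc(b0, b1000)
print(adc.mean())                                      # around 1.6e-3 mm^2/s for this toy data
```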
  • Automatic segmentation of the penumbra using PWI includes: importing the DICOM data, checking the phases, and performing realignment; performing spatial and temporal smoothing and parallelized gamma fitting; calculating the TTP to find the MCA and determining the AIF; and performing a deconvolution using the AIF to calculate Tmax.
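  • The gamma-fitting step can be sketched for a single voxel's concentration-time curve as below; the gamma-variate model, initial parameters, and noise level are illustrative assumptions, and TTP is read from the fitted peak.

```python
# Hedged sketch of gamma-variate fitting for one voxel:
# C(t) = K * (t - t0)^a * exp(-(t - t0) / b) for t > t0, with TTP at t0 + a*b.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, K, t0, a, b):
    dt = np.clip(t - t0, 1e-9, None)
    return K * dt**a * np.exp(-dt / b)

t = np.arange(0, 60, 1.0)                               # seconds
true_curve = gamma_variate(t, K=2.0, t0=8.0, a=2.5, b=3.0)
noisy = true_curve + 0.3 * np.random.default_rng(1).standard_normal(t.size)

p0 = [1.0, 5.0, 2.0, 2.0]                               # rough initial guess
popt, _ = curve_fit(gamma_variate, t, noisy, p0=p0, maxfev=20000)
K, t0, a, b = popt
ttp = t0 + a * b                                        # analytical peak of the fitted curve
print(f"fitted TTP ≈ {ttp:.1f} s")
```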
  • A method of extracting a representative image from a medical image for searching and management comprises: selecting a location of interest in the medical image; extracting a region of interest including the location of interest by image-processing the medical image according to an image processing instruction corresponding to the type of lesion; and generating a representative image by applying a safety margin to the extracted region of interest.
  • Selecting the location of interest in the medical image may include specifying the location of interest on the medical image through a user interface.
  • Extracting the region of interest may include segmenting the region of interest according to an image processing instruction for segmenting a lesion that includes the location of interest.
  • Extracting the region of interest may alternatively include segmenting the region of interest according to an image processing instruction that thresholds a lesion including the location of interest.
  • Selecting the location of interest in the medical image may comprise selecting a view by rotating a 3D medical image.
  • Extracting the region of interest may include segmenting the region of interest a plurality of times, while varying additional conditions, in a medical image that changes according to additional conditions including at least one of time, contrast agent, and drug administration.
  • Extracting the region of interest may include extracting a plurality of regions of interest by determining an angle range and an interval angle of the view, and generating the representative image from them.
  • Generating the representative image includes: matching at least one of patient information, study information, and series information with the representative image for searching and management; and storing the representative image hierarchically according to the matched patient information, study information, and series information.
  • The method may include designating at least one point on a lesion of the medical image through a user interface, and the region of interest may be segmented according to an image processing instruction that segments or thresholds the lesion using the location of interest as a seed.
  • The representative image may be generated as a thumbnail; at least one of patient information, study information, and series information is matched to the thumbnail for searching and management; and the thumbnail is stored according to the matched patient information, study information, and series information.
  • the cerebral infarct region can be accurately and quantitatively divided based on the histogram.
  • the cerebral infarct region may be automatically divided by applying a threshold value to the histogram.
  • The present disclosure reveals that voxels within the infarct region may differ in time of occurrence or in possibility of recovery, and discloses a method of determining the infarct region more accurately.
  • The present disclosure provides a basis for quantitatively and objectively determining, voxel by voxel, the area likely to survive within the overestimated infarct area.
  • a mismatch between infarct region and peripheral region is automatically analyzed to provide a means for judgment.
  • The present disclosure also provides a method of automatically analyzing stroke that, while providing a means of judgment by automatically analyzing the mismatch between the infarct region and the peripheral region, additionally makes it possible to determine which part of the brain is affected.
  • The representative image includes a characteristic portion of the medical image obtained through an image processing process according to a feature of the medical image, that is, the lesion. Therefore, in study or thesis writing, for example, a doctor or researcher who views a plurality of representative images at once can easily browse the characteristics of the medical images.
  • Since the process of generating the representative image may be performed automatically by a computer based on input through a simple user interface, it is convenient to generate representative images in medical imaging fields that handle a plurality of medical images.
  • Since the representative image is stored hierarchically according to patient information, study information, and series information for retrieval and management while it is generated, a doctor or researcher can easily search and manage the representative images of medical images using this information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The present invention relates to a method of segmenting a cerebral infarct region, and more particularly to a method of segmenting a cerebral infarct region comprising the steps of: generating a template histogram of brain images using a plurality of brain images; matching a histogram of the brain image to be segmented with the template histogram; and segmenting the cerebral infarct region from the matched histogram on the basis of a threshold value determined from the template histogram. The present invention also relates to a method of distinguishing cerebral infarct regions in order to estimate a recoverable region among the segmented cerebral infarct regions. The present invention also relates to a method of automatically analyzing stroke. The present invention also relates to a method of extracting a representative image from medical images.
PCT/KR2014/012760 2013-12-23 2014-12-23 Procédé de segmentation de région d'infarctus cérébral WO2015099426A1 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
KR1020130161295A KR20150073519A (ko) 2013-12-23 2013-12-23 뇌경색 영역의 구분 방법
KR10-2013-0161295 2013-12-23
KR1020140034846A KR101578483B1 (ko) 2014-03-25 2014-03-25 뇌경색 영역의 분할 방법
KR10-2014-0034846 2014-03-25
KR1020140034851A KR101634334B1 (ko) 2014-03-25 2014-03-25 의료 영상으로부터 대표 영상을 추출하는 방법
KR10-2014-0034851 2014-03-25
KR20140128073 2014-09-25
KR10-2014-0128073 2014-09-25

Publications (1)

Publication Number Publication Date
WO2015099426A1 true WO2015099426A1 (fr) 2015-07-02

Family

ID=53479197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/012760 WO2015099426A1 (fr) 2013-12-23 2014-12-23 Procédé de segmentation de région d'infarctus cérébral

Country Status (1)

Country Link
WO (1) WO2015099426A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000126150A (ja) * 1998-10-23 2000-05-09 Ge Yokogawa Medical Systems Ltd 関心領域設定方法、画像処理装置および医用画像処理装置
JP2004024637A (ja) * 2002-06-27 2004-01-29 Toshiba Corp Mri装置およびmri画像撮影方法
US20090080748A1 (en) * 2002-10-18 2009-03-26 Cornell Research Foundation, Inc. System, Method and Apparatus for Small Pulmonary Nodule Computer Aided Diagnosis from Computed Tomography Scans
US20080021502A1 (en) * 2004-06-21 2008-01-24 The Trustees Of Columbia University In The City Of New York Systems and methods for automatic symmetry identification and for quantification of asymmetry for analytic, diagnostic and therapeutic purposes
US20100231216A1 (en) * 2006-10-03 2010-09-16 Singapore Agency For Science Technology And Research Act Segmenting infarct in diffusion-weighted imaging volume

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3269301A4 (fr) * 2015-03-12 2020-02-19 The Asan Foundation Méthode d'estimation du moment de l'apparition d'un infarctus en fonction d'une image cérébrale
CN111986242A (zh) * 2020-07-28 2020-11-24 沈阳东软智能医疗科技研究院有限公司 脑组织分区的确定方法、装置、存储介质及电子设备
CN111986242B (zh) * 2020-07-28 2023-07-18 沈阳东软智能医疗科技研究院有限公司 脑组织分区的确定方法、装置、存储介质及电子设备
CN113951912A (zh) * 2021-09-10 2022-01-21 数坤(北京)网络科技股份有限公司 一种脑灌注后处理方法和装置

Similar Documents

Publication Publication Date Title
Shattuck et al. Magnetic resonance image tissue classification using a partial volume model
WO2020242239A1 (fr) Système de prise en charge de diagnostic basé sur l'intelligence artificielle utilisant un algorithme d'apprentissage d'ensemble
WO2016080813A1 (fr) Procédé et appareil de traitement d'image médicale
Cavalcanti et al. Pigmented skin lesion segmentation on macroscopic images
WO2015076607A1 (fr) Appareil et procédé de traitement d'une image médicale d'une lumière corporelle
García-Lorenzo et al. Multiple sclerosis lesion segmentation using an automatic multimodal graph cuts
Mikulka et al. Soft-tissues image processing: Comparison of traditional segmentation methods with 2D active contour methods
US9171366B2 (en) Method for localization of an epileptic focus in neuroimaging
WO2020076133A1 (fr) Dispositif d'évaluation de validité pour la détection de région cancéreuse
WO2020076135A1 (fr) Dispositif d'apprentissage à modèle d'apprentissage profond et procédé pour région cancéreuse
WO2015099426A1 (fr) Procédé de segmentation de région d'infarctus cérébral
EP3220826A1 (fr) Procédé et appareil de traitement d'image médicale
Despotovic et al. Brain volume segmentation in newborn infants using multi-modal MRI with a low inter-slice resolution
WO2024111913A1 (fr) Procédé et dispositif de conversion d'image médicale à l'aide d'une intelligence artificielle
Srimathi et al. An efficient cancer classification model for CT/MRI/PET fused images
Ben Salah et al. Fully automated brain tumor segmentation using two mri modalities
Beaumont et al. Automatic Multiple Sclerosis lesion segmentation from Intensity-Normalized multi-channel MRI
Dutta et al. Automatic segmentation of polyps in endoscopic image using level-set formulation
Koukounis et al. Retinal image registration based on multiscale products and optic disc detection
Milles et al. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis
WO2020076134A1 (fr) Dispositif et procédé pour corriger des informations sur des régions cancéreuses
Sood et al. Various techniques for detecting skin lesion: a review
WO2023075480A1 (fr) Procédé et appareil de fourniture d'un paramètre clinique pour une région cible prédite dans une image médicale, et procédé et appareil d'examen d'une image médicale pour le marquage
Udupa et al. Detection and quantification of MS lesions using fuzzy topological principles
Rajyalakshmi et al. Breast cancer cell-nuclei extraction using modified multi-phase level sets

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14875493

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14875493

Country of ref document: EP

Kind code of ref document: A1