WO2020054803A1 - Diagnosis Support System and Method
- Publication number: WO2020054803A1 (PCT/JP2019/035914)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- pseudo
- type
- image
- information
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
- G06T7/0016—Biomedical image inspection using an image reference approach involving temporal comparison
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5229—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
- A61B6/5235—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5229—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
- A61B6/5247—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/037—Emission tomography
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
Definitions
- the present invention relates to a system and a method for supporting diagnosis of a complex system such as a living body.
- Apparatuses for diagnosing the morphology and function of a test subject include various tomography apparatuses (modalities) such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), SPECT (Single Photon Emission Computed Tomography), and PET-CT.
- In some of these modalities, an image is generated by administering a radiopharmaceutical into the body of the subject, by intravenous injection or the like, and imaging the radiation emitted from the drug inside the body.
- From such images, doctors can grasp not only the morphology of each part of the body but also how the administered drug is distributed in the body, or the state of accumulation of substances in the body that react with the drug, which can contribute to improved diagnostic accuracy for disease.
- For example, PET images are captured using Pittsburgh compound B as a radiopharmaceutical (tracer) for PET, and the degree of amyloid β protein accumulation in the brain is measured based on the captured PET images, enabling differential diagnosis or early diagnosis of Alzheimer-type dementia.
- While images captured using drugs, such as PET images, have various advantages, they also have some disadvantages. For example, imaging using a drug is generally more expensive than imaging such as MRI that does not use a drug, so the financial burden on the subject, or on a health insurance union that bears the cost of testing the subject, may increase. Further, in imaging of a PET image or the like using a radiopharmaceutical, a small amount of a radioactive substance must be taken into the body, with an attendant risk of radiation exposure. In addition, a person suffering from a specific disease such as kidney disease may be unable to undergo imaging using a drug, in order to avoid adverse effects.
- A PET image is obtained by administering a drug synthesized by attaching a positron-emitting nuclide, which serves as a marker, to glucose.
- Estimating an image that would require a drug, such as a PET image, from an image that does not use a drug, such as an MR image, and using it for diagnosis is useful for reducing the burden on the examinee and for improving image-based diagnostic accuracy. For this purpose, it is desirable to further improve the usefulness of the estimated image, or of information obtained from the estimated image.
- One embodiment of the present invention is a system, such as a diagnosis support system, including an acquisition unit that acquires first-type individual real image data including at least a reference region partially containing an evaluation target region.
- The system further includes an image processing model trained, based on training data including first-type real image data of the reference region of a plurality of test subjects and second-type real image data containing the evaluation target region, to generate second-type pseudo image data of the evaluation target region from first-type real image data of the reference region.
- An information providing unit is provided for providing diagnosis support information based on the second-type pseudo image data of the evaluation target region generated by the image processing model from the first-type individual real image data of the examination target.
- The image processing model is a model machine-learned, based on training data (teacher data, learning data) including first-type real image data of a reference region partially containing an evaluation target region and second-type real image data containing the evaluation region, to generate second-type pseudo image data of the evaluation region from the first-type real image data of the reference region. Machine learning may also be performed to generate second-type pseudo image data of the entire reference region partially containing the evaluation target region from the first-type real image data of that reference region.
- Compared with a model trained to generate the entire reference region, the image processing model according to the present invention, which is machine-learned to generate second-type pseudo image data limited to the evaluation target region, still takes the full first-type real image data of the reference region as its input.
- The diagnosis support information provided by this system may include the second-type pseudo image data of the evaluation target area, may include the first-type individual real image data, and may include results of analyzing the second-type pseudo image data and/or the second-type real image data.
- The drawings include the following:
- A diagram showing an example of the diagnosis support network including the diagnosis support system.
- Examples of generated pseudo PET images: row (a) is a pseudo PET image of a region of interest in the brain; row (b) is a pseudo PET image of the entire brain.
- Graphs of the correlation between the pseudo SUVR (predSUVR) obtained from pseudo PET images generated by a model trained to produce a pseudo PET of the evaluation target area and the SUVR values obtained from real PET images: (a) all subjects, (b) subjects diagnosed with AD, (c) subjects diagnosed with MCI, and (d) subjects determined to be CN.
- Corresponding graphs (a) to (d) for a model trained to generate a pseudo PET of the whole brain.
- A block diagram illustrating an outline of the image processing model generation device.
- Flowcharts illustrating outlines of the image processing model generation process, the preprocessing, the model generation process, and the learning process.
- A block diagram showing an outline of the diagnosis support terminal, and a flowchart illustrating an outline of the process for providing the diagnosis support information.
- FIG. 1 shows an outline of a diagnosis support network including a diagnosis support system.
- the diagnosis support network 1 includes terminals 210 installed in one or a plurality of medical institutions 200, respectively, and a diagnosis support system 100 communicably connected to the terminals 210 via the Internet (cloud) 9.
- the medical institution 200 includes tomography apparatuses (modalities) 221 to 224 that acquire images for diagnosing the form and function of the examination target (examinee) 5.
- One example of the imaging device is an MRI apparatus 221 that acquires an MR image (MRI) 15 of the examinee 5; other examples include a PET apparatus 222 that acquires a PET image, a SPECT apparatus 223 that acquires a SPECT image, and a CT apparatus 224 that acquires a CT image.
- Each medical institution 200 need not include all of these tomographic apparatuses 221 to 224; the diagnosis support system 100 estimates a PET image or a SPECT image based on an image acquired by the MRI apparatus 221 or the CT apparatus 224, and provides the diagnosis support information 110.
- the terminal (diagnosis support terminal, medical support system) 210 of the medical institution 200 includes the image display device 25 that provides the doctor 8 or medical staff with the diagnosis support information 110, and the processing terminal 20.
- The processing terminal 20 is a computer terminal having computer resources including a CPU 28 and a memory (storage medium) 29, and includes an input interface (acquisition unit, acquisition function, acquisition module) 21 for acquiring the attribute information 14 of the examinee 5; a transmission/reception interface (communication function, communication module) 22 for transmitting, to the diagnosis support system 100, the examinee information 105 including the MR image 15 captured by the MRI apparatus 221 and the attribute information 14 of the examinee 5; and an output interface (output unit, output function, output module) 23 for displaying (outputting) the provided diagnosis support information 110 via the image display device 25 or the like.
- The processing terminal 20 executes the above functions by the CPU 28 loading and executing a program (program product) 29p that contains instructions for those functions. The program 29p may be stored in the memory 29, or it may be provided from an external recording medium.
- The diagnosis support system 100, which provides the diagnosis support information 110 to the terminal 210 of the medical institution 200 via the network 9, includes a diagnosis support module (diagnosis support function, diagnosis support unit) 10 and a model providing module (model providing function, model providing unit) 50.
- diagnosis support system 100 is a server provided with computer resources including a CPU 101 and a memory (storage medium, first storage unit) 102.
- The diagnosis support system 100 executes the functions of the modules 10 and 50 by the CPU 101 loading and executing programs (program products) 10p and 50p that contain instructions for those functions; the programs 10p and 50p may be stored in the memory 102 or provided from an external recording medium.
- The diagnosis support module 10 includes an acquisition unit that acquires the examinee information 105, which contains the first-type individual real image data 15 including at least a reference region partially containing the evaluation target region of the examinee 5.
- The model providing module 50 that provides the image processing model 60 includes a memory interface 51 that can access the memory (storage, storage unit) 102 storing the training data (learning data, teacher data) 70, a learning unit (learning module, learning function) 53 that trains the image processing model 60 based on the training data 70, and a model output unit 52 that outputs the trained model (image processing model) 60.
- the training data 70 includes real image data 71 of a first type of reference region of a plurality of test subjects (subjects) and real image data 72 including a second type of evaluation target region. These are three-dimensional image data, and the same applies to the following.
- The learning unit 53 trains the image processing model 60, based on the training data 70, to generate the pseudo image data (estimated image data) 75 of the second-type evaluation target area from the real image data 71 of the first-type reference area, and the trained image processing model 60 is then provided.
- The information providing unit 12 of the diagnosis support module 10 provides the diagnosis support information 110 based on the second-type pseudo image data (estimated image data) 115 of the evaluation target area, which is generated by the image processing model 60 from the first-type real image data 15 of the examinee 5 included in the examinee information 105.
- FIG. 2 shows some examples of images included in the training data 70.
- the image in the uppermost row (row (a)) is an example of the real image data 71 of the first type reference area.
- The real image data 71 of this example is an MR image (MRI), which can accurately capture information centered on morphology inside a living body, and is real image data that includes the whole-brain region 81 as the reference region.
- Row (a) representatively shows cross-sectional views of the three-dimensional real image data 71; the same applies to the figures below.
- the image in the next row (row (b)) is an example of the actual image data 72 including the second type of evaluation target area.
- The second-type real image data 72 of this example, which is imaged using a drug, is a PET image, more specifically a PET image showing the distribution of amyloid β protein in the brain.
- In this way, a PET image 72 showing the distribution of amyloid β protein in the brain is obtained.
- the second type of real image data 72 of the present example is an example of real substance image data including distribution information obtained by visualizing the distribution of the first substance related to the abnormality to be diagnosed.
- the abnormality (disease) to be diagnosed includes Alzheimer-type dementia
- The first substance indicating the disease includes amyloid β protein.
- As an index value, the SUVR value (SUVR, Standardized Uptake Value Ratio, cerebellar-ratio SUVR) is known, which indicates the ratio of the amyloid β protein accumulation (SUV, Standardized Uptake Value) in part of the cerebral gray matter to the amyloid β protein accumulation (SUV) in the cerebellum.
- SUVR can be defined by the following equation (1).
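Assuming the definition described for the numerator and denominator (the sum of the SUVs of the four cortical regions over the SUV of the cerebellum; the region subscripts here are abbreviations), equation (1) takes the form:

```latex
\mathrm{SUVR} \;=\; \frac{\mathrm{SUV}_{\mathrm{prefrontal}} + \mathrm{SUV}_{\mathrm{cingulate}} + \mathrm{SUV}_{\mathrm{parietal}} + \mathrm{SUV}_{\mathrm{temporal}}}{\mathrm{SUV}_{\mathrm{cerebellum}}} \tag{1}
```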
- The numerator of equation (1) represents the sum of the SUVs of four regions of cerebral gray matter, i.e., the cortical regions of the cerebrum (prefrontal cortex, anterior-posterior cingulate cortex, parietal lobe, and lateral temporal lobe), and the denominator represents the SUV of the cerebellum.
- The evaluation target region 82 for obtaining the SUVR as an index value therefore includes five regions: the prefrontal cortex, the anterior-posterior cingulate cortex, the parietal lobe, the lateral temporal lobe, and the cerebellum.
- the SUV can be obtained from the degree of integration of pixels (image elements, voxels) having a predetermined brightness or higher included in the PET image 72.
- the cutoff value for amyloid positive / negative is 1.11. In FIG. 2, the amyloid-positive portion is indicated by hatching.
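A minimal sketch of computing an SUVR value from a PET volume under this definition; the region masks, uptake values, and the `suvr` helper are illustrative and not from the embodiment:

```python
import numpy as np

AMYLOID_CUTOFF = 1.11  # amyloid positive/negative cutoff stated in the text

def suvr(pet, cortical_masks, cerebellum_mask):
    """Cerebellar-ratio SUVR: the text defines the numerator as the sum
    of the SUVs of the cortical regions and the denominator as the SUV
    of the cerebellum (SUV taken here as mean voxel uptake)."""
    numerator = sum(pet[m].mean() for m in cortical_masks)
    return numerator / pet[cerebellum_mask].mean()

# toy 3D volume standing in for a PET image
pet = np.full((4, 4, 4), 1.0)
cortical_masks = []
for i in range(4):  # four disjoint "cortical regions"
    m = np.zeros(pet.shape, dtype=bool)
    m[i, :2, :2] = True
    pet[m] = 1.5  # elevated uptake in the cortical regions
    cortical_masks.append(m)
cerebellum_mask = np.zeros(pet.shape, dtype=bool)
cerebellum_mask[:, 3, 3] = True  # reference uptake stays at 1.0

value = suvr(pet, cortical_masks, cerebellum_mask)
print(value)  # four regions with mean uptake 1.5 over 1.0 -> 6.0
```

In practice each SUV would be computed over an anatomically defined ROI in registered image space; this sketch only shows the arithmetic of the ratio and the cutoff comparison.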
- The image shown in the third row (row (c)) of FIG. 2 is a mask image 73 showing the evaluation target area 82 set based on the MR image 71, and the image shown in the fourth row (row (d)) is a real image 74 of the evaluation target area 82 of the PET image, cut out based on the mask image 73. That is, the image shown in row (d) is the second-type real image data 74 of the evaluation target area 82, and its information is contained in the real image 72.
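The cutting-out step can be sketched as masking the PET volume with the binary mask image; the array shapes and values here are illustrative:

```python
import numpy as np

def cut_out(pet, mask):
    """Keep PET voxels inside the evaluation target region indicated
    by the mask image; zero out everything else."""
    return np.where(mask, pet, 0.0)

pet = np.arange(8.0).reshape(2, 2, 2)   # toy PET volume (real image 72)
mask = np.zeros((2, 2, 2), dtype=bool)  # toy mask image 73
mask[0] = True                          # evaluation region = first slice
roi = cut_out(pet, mask)                # region-limited image (real image 74)
print(roi[0].sum(), roi[1].sum())       # masked-in slice kept, rest zeroed
```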
- The positional relationship of the evaluation target region 82 with respect to the reference region (whole brain) 81 can be determined by the FreeSurfer method (Gregory Klein, Mehul Sampat, Davis Staewen, David Scott, Joyce Suhy, "A Comparison of SUVR Methods and Reference Regions in Amyloid PET," SNMMI Annual Meeting, June 6-10, 2015, Baltimore, MD, USA), or can be set using an ROI (Region of Interest) template, but is not limited to these methods.
- FIG. 3 shows the pseudo image data 75 of the second-type evaluation target area 82 generated by the trained image processing model 60.
- The upper part (row (a)) of FIG. 3 shows the pseudo PET image 75 of the evaluation target area 82 generated by the image processing model 60 from the real image 71 of the reference area (whole brain) 81 of the first-type MR image.
- an image processing model (image generation model) 60 that is a model to be learned includes a model algorithm 61 and a model parameter set (parameter) 62.
- The learning unit 53 of the model providing module 50 includes a first parameter learning function 54 that learns the parameter set (parameters) 62 of the image processing model 60 by evaluating, based on the loss function Ei, the target PET image 74 and the pseudo PET image 75 obtained as the learning result.
- An example of the loss Ei is represented by the following equation (2).
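Given that MSEi is the image-element divergence, Vi and predVi are the real and pseudo index values, and the coefficient α weights the two terms (α = 1 emphasizing MSEi, α = 0 emphasizing the index difference), equation (2) presumably takes a form such as the following; the use of an absolute rather than squared difference in the second term is an assumption:

```latex
E_i \;=\; \alpha \cdot \mathrm{MSE}_i \;+\; (1-\alpha)\,\lvert V_i - \mathrm{pred}V_i \rvert \tag{2}
```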
- MSEi is the squared error over image elements, an index for measuring the difference between pixel values; that is, it is an example of a divergence degree indicating how far the image-element values of the second-type pseudo image data (pseudo PET image) 75 of the evaluation target area 82 diverge from the image-element values of the second-type real image data (PET image) 74 of the evaluation target area 82.
- Since the image-element values of the real image data (PET image) 74 of the evaluation target area 82 are the same as the values of the corresponding pixel elements of the evaluation target area 82 contained in the real image data (PET image) 72 of the reference area 81, the pixel elements of the pseudo PET image 75 may be compared with those of either PET image 72 or 74.
- Instead of, or together with, MSE, other indices such as SSIM (Structural Similarity) and PSNR (peak signal-to-noise ratio) may be used as the degree of divergence.
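Minimal numpy implementations of MSE and PSNR for comparing image volumes (SSIM requires a windowed computation and is typically taken from a library such as scikit-image); the arrays here are illustrative:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two image volumes."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB for the given dynamic range."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

a = np.zeros((2, 2, 2))
b = np.full((2, 2, 2), 0.5)
print(mse(a, b))   # 0.25
print(psnr(a, b))  # 10*log10(1/0.25) ~= 6.02 dB
```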
- The real index value Vi is an index value obtained from the distribution information (SUV) of the first substance (for example, amyloid) in the evaluation target area 82 of the second-type real image data (PET image data) 72, which is real substance image data containing distribution information that visualizes the distribution of the first substance related to the abnormality to be diagnosed; it is, for example, the SUVR (real SUVR).
- the actual index value SUVR may be obtained from the PET image 74 in the evaluation target area 82.
- The pseudo index value predVi is a pseudo index value obtained from the pseudo distribution information corresponding to the distribution of the first substance (for example, amyloid) contained in the second-type pseudo image data (pseudo PET image) 75 of the evaluation target area 82; for example, a pseudo SUVR (estimated SUVR). That is, the pseudo index value is an SUVR value estimated from the MR image 71 using the image processing model 60.
- the pseudo PET image 75 includes a pseudo distribution of the evaluation target area 82 as pixel information. For this reason, the pseudo SUVR can be obtained from the pseudo PET image 75 using Expression (1), similarly to the SUVR.
- the loss function Ei shown in the equation (2) includes a divergence (square error) MSEi, an actual index value Vi (SUVR, actual SUVR), and a pseudo index value predVi (pseudo SUVR).
- the image processing model 60 can be evaluated using the index value.
- The parameter learning function 54 of the learning unit 53 sets (corrects or changes) the parameters 62 so that the divergence MSEi included in the loss function Ei decreases, or so that the difference between the real index value Vi and the pseudo index value predVi decreases, and outputs the trained image processing model 60 including the learned parameters 62 to the memory 102.
- The coefficient α included in the loss function Ei satisfies the condition of the following equation (3): 0 ≤ α ≤ 1 … (3). If the coefficient α is 0, the difference between the real index value Vi and the pseudo index value predVi is mainly evaluated when training the model 60; if the coefficient α is 1, the divergence (squared error) MSEi is mainly evaluated. Equation (3) may more specifically be (0 < α < 1) or (0.5 ≤ α < 1.0).
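Under the reading that α blends the image divergence with the index-value error (the exact combination in equation (2) is assumed, as is the default α), the loss can be sketched as:

```python
import numpy as np

def loss_ei(pseudo_pet, real_pet, pred_v, real_v, alpha=0.7):
    """Loss Ei blending the image-element divergence MSEi with the
    index-value error |Vi - predVi|; alpha = 1 evaluates only the MSE,
    alpha = 0 only the index difference (alpha=0.7 is an assumed
    default, not from the text)."""
    mse = float(np.mean((pseudo_pet - real_pet) ** 2))
    return alpha * mse + (1.0 - alpha) * abs(real_v - pred_v)

pseudo = np.array([1.0, 2.0])
real = np.array([1.0, 1.0])
e_mse = loss_ei(pseudo, real, pred_v=1.0, real_v=1.2, alpha=1.0)
e_idx = loss_ei(pseudo, real, pred_v=1.0, real_v=1.2, alpha=0.0)
print(e_mse)  # equals the MSE alone: 0.5
print(e_idx)  # equals |Vi - predVi| alone (~0.2)
```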
- the image processing model 60 includes a function as an image output model for generating a second type of image from the first type of image, and may evaluate the correlation between image elements with priority.
- A data set of a plurality of subjects included in the training data 70 is input to the trained image processing model 60, and the pseudo index value predVi (predSUVR) obtained from the pseudo PET 75 is compared with the real index value Vi (SUVR).
- A data set of a plurality of subjects not included in the training data 70 is input to the trained image processing model 60 as test data 79, and the pseudo index value predVi (predSUVR) obtained from the pseudo PET 75 is compared with the real index value Vi (SUVR).
- The pseudo index value predSUVR obtained from the pseudo PET 75 generated from the test data 79 by the image processing model 60 tends to be smaller than the real index value SUVR of the test data 79, but it can be seen that there is a positive correlation between the real SUVR and the pseudo index value predSUVR.
- This shows that, with the trained image processing model 60, the PET image 75 can be estimated with good accuracy from the MR image 71 of an unlearned subject, and that the SUVR value can likewise be estimated from the MR image 71 of an unlearned subject.
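The comparison of predSUVR against real SUVR amounts to checking for a positive correlation over paired per-subject values; the numbers below are hypothetical, chosen so that predSUVR runs slightly low, as described in the text:

```python
import numpy as np

# hypothetical paired index values for a handful of test subjects
real_suvr = np.array([1.05, 1.20, 1.35, 0.95, 1.50])
pred_suvr = np.array([0.98, 1.10, 1.26, 0.90, 1.38])  # tends to run low

# Pearson correlation coefficient between real and pseudo SUVR
r = np.corrcoef(real_suvr, pred_suvr)[0, 1]
print(round(r, 3))  # strongly positive for these values
```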
- FIG. 6 shows the correlation between the pseudo SUVR of the pseudo PET 75 generated by the image processing model 60, which was trained with the MR image 71 as input to generate a pseudo PET 75 limited to the evaluation target area 82 using a loss function that includes the SUVR as an index value, and the SUVR value obtained from the real PET image 72 (74).
- The upper graph (a) shows the correlation of all the data: subjects diagnosed with AD (Alzheimer's disease), subjects diagnosed with MCI (Mild Cognitive Impairment), and CN (cognitively normal) subjects.
- the next graph (b) shows the correlation of the data of the AD subjects
- the graph (c) shows the correlation of the data of the MCI subjects
- the graph (d) shows the correlation of the data of the CN subjects.
- the correlation coefficient r (a) of the graph (a) is 0.21502
- the correlation coefficient r (b) of the graph (b) is 0.66484
- the correlation coefficient r (c) of the graph (c) is 0.10547
- the correlation coefficient r (d) of the graph (d) was 0.34458.
- FIG. 7 shows the result of estimating an amyloid PET image by the method disclosed by Sikka et al. and calculating the SUVR from the estimated image.
- Graphs (a) to (d) are the same as above.
- the correlation coefficient r (a) of the graph (a) is 0.12964
- the correlation coefficient r (b) of the graph (b) is 0.25603
- the correlation coefficient r (c) of the graph (c) is 0.07694
- the correlation coefficient r (d) in the graph (d) was 0.01905.
- The image processing model 60 generates a pseudo PET image 75 of the partial evaluation target region 82 based on the MR image 71, with the entire brain as the reference region 81.
- Because the information amount of the MR image 71 of the entire brain is sufficiently large, the pseudo PET image 75 of the evaluation target area 82 can be generated accurately based on more information. Therefore, by using the image processing model 60 trained in this way, the diagnosis support information 110 including an index value for judging the progression of a disease with high accuracy can be provided from an MR image (MRI), without acquiring a PET image from the examinee 5.
- The learning unit 53 includes a second parameter learning function 55 for learning the parameters 62 of the image processing model 60 including the attribute information 90 of a plurality of subjects. That is, the training data 70 includes, in addition to the first- and second-type image information 71 to 74 of the plurality of subjects, attribute information 90 including information 91 related to the subjects' biomarkers.
- The image processing model 60 is thus provided from the model providing module 50 as a model machine-learned including the attribute information 90 in addition to the image information.
- The attribute information 90 may include, in addition to the biomarker-related information 91, that is, genetic information (ApoE, etc.) 91a and/or blood test results 91b, the subject's age, gender, educational history, occupational history, cognitive test scores (MMSE, CDR, ADAS-Cog, etc.), or interview results (ADL interview, etc.).
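One way such attribute information could be encoded alongside image-derived input for a model; the encoding scheme, genotype list, and feature values are all hypothetical, not from the embodiment:

```python
import numpy as np

APOE_GENOTYPES = ["e2/e3", "e3/e3", "e3/e4", "e4/e4"]  # illustrative subset

def attribute_vector(genotype, age, mmse):
    """Encode attribute information as a numeric vector: one-hot ApoE
    genotype plus scaled age and MMSE score (an assumed encoding)."""
    onehot = [1.0 if g == genotype else 0.0 for g in APOE_GENOTYPES]
    return np.array(onehot + [age / 100.0, mmse / 30.0])

image_features = np.array([0.42, 0.17, 0.88])  # hypothetical MR-derived features
x = np.concatenate([image_features, attribute_vector("e3/e4", 72, 26)])
print(x.shape)  # 3 image features + 4-way one-hot + 2 scalars -> (9,)
```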
- FIG. 8 shows the relationship between the ApoE genotype and the distribution of SUVR values by AV45 amyloid PET imaging based on the data in the ADNI database described above.
- the ApoE genotype is considered to be very effective information for judging amyloid positivity/negativity.
- the blood test result 91b may be effective in determining amyloid positive / negative.
- a method has been proposed that extracts only amyloid β-related peptides from blood and measures them by MALDI (matrix-assisted laser desorption/ionization) TOF (time-of-flight) mass spectrometry, and it has been proposed that the amount of amyloid β accumulated in the brain can thereby be estimated with high accuracy ("High performance plasma amyloid-β biomarkers for Alzheimer's disease", Nakamura et al., Nature).
- the MR image 71 and the gene information (ApoE gene information) 91a are input, and learning is performed so as to generate a pseudo PET 75 limited to the evaluation target area 82, using the SUVR index value in the loss function at the time of learning.
- the correlation between the pseudo SUVR of the pseudo PET 75 generated by the image processing model 60 and the SUVR value obtained from the real PET image 72 (74) is shown.
- Graphs (a) to (d) are the same as those in FIG.
- the correlation coefficient r (a) of graph (a) is 0.27995, the correlation coefficient r (b) of graph (b) is 0.50173, the correlation coefficient r (c) of graph (c) is 0.21462, and the correlation coefficient r (d) of graph (d) is 0.06563.
- the correlation of the SUVR value is improved for the subjects as a whole, and for MCI subjects in particular the effect of learning with the genetic information is evident. Therefore, it is understood that by using the model 60 trained with the attribute information 90 including a biomarker 91 such as a gene, an index value (SUVR value) for judging the degree of disease progression can be estimated with higher accuracy.
- the improvement of the index value estimation accuracy is useful in providing the diagnosis support information 110.
- although the SUVR value based on the cerebellum is used here, it is also possible to learn a PET image limited to another evaluation target area 82, or to increase the learning accuracy by using an index value based on another evaluation area.
- the learned image processing model 60 provided from the model providing module 50 can accurately estimate a PET image 75, which includes information on the distribution of amyloid or the like indicating the cause or progression of a disease, based on image data such as the MR image 71 that does not use a drug (although a contrast agent or the like may be used to make the form clearer) and from which mainly morphological information inside a living body is acquired. Therefore, the information providing unit 12 of the diagnosis support module 10 uses the image processing model 60 to generate (estimate) a pseudo PET image (pseudo image data of the target region) 115 limited to the evaluation target region 82, based on the MR image (MRI, individual actual image data) 15 included in the patient information 105 obtained from the patient 5 through the obtaining unit 11, and can provide the diagnostic support information 110 including the pseudo PET image 115 and/or the SUVR value (pseudo index value) 116 of the evaluation target area 82 obtained from the pseudo PET image 115.
- the evaluation target area 82 includes a plurality of parts of the brain; specifically, as shown in FIG. 2 and the like, it includes five regions: the prefrontal cortex, the anterior-posterior cingulate cortex, the parietal lobe, the lateral temporal lobe, and the cerebellum. The SUVR value is calculated from the distribution information of each of these regions as shown in Expression (1).
- the pseudo PET image 115 generated by the image processing model 60 based on the MR image 15 of the examinee 5 includes these five regions as the evaluation target regions 82. Therefore, the information providing unit 12 can obtain the pseudo SUVR 116 from the distribution information of the five regions included in the evaluation target region 82 of the pseudo PET image 115 and can provide the pseudo SUVR 116 as the diagnosis support information 110.
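Expression (1) itself is not reproduced in the text above, but based on the surrounding description (an SUVR computed from the distribution information of cortical target regions relative to the cerebellum as reference), a minimal sketch can be written as follows; the function and region names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def pseudo_suvr(pet: np.ndarray, region_masks: dict) -> float:
    """SUVR-style index from a (pseudo) PET volume.

    `pet` is a 3D array of SUV values; `region_masks` maps region names
    to boolean masks of the same shape.  The cerebellum mask is the
    reference region; all other masks are cortical target regions.
    """
    cerebellum_mean = pet[region_masks["cerebellum"]].mean()
    targets = [name for name in region_masks if name != "cerebellum"]
    target_mean = np.mean([pet[region_masks[name]].mean() for name in targets])
    return float(target_mean / cerebellum_mean)

# Toy example: 4x4x4 volume with one target region and the reference region.
pet = np.ones((4, 4, 4))
pet[0] = 2.0  # higher uptake in the "target" slab
masks = {
    "cerebellum": np.zeros((4, 4, 4), dtype=bool),
    "prefrontal": np.zeros((4, 4, 4), dtype=bool),
}
masks["cerebellum"][3] = True
masks["prefrontal"][0] = True
print(pseudo_suvr(pet, masks))  # 2.0 / 1.0 = 2.0
```

The same computation would apply both to the real PET image 72 (yielding the real index value) and to the pseudo PET image 115 (yielding the pseudo SUVR 116).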
- the obtaining unit 11 of the diagnosis support module 10 may be configured to obtain the patient information 105 including attribute information 14 that contains information related to biomarkers. In that case, based on the image data (MR image) 15 and the attribute information 14, pseudo image data (pseudo PET) 115 of the evaluation target area 82 is estimated by the image processing model 60 trained with the attribute information 90, and the diagnosis support information 110 based on them may be provided.
- the attribute information 14 included in the examinee information 105 may include all of the information included in the above-described attribute information 90, or may include only part of the information.
- the attribute information 14 may include information 91 relating to biomarkers such as genetic information 91a and / or blood test results 91b.
- the image processing model 60 may also be trained so as to derive a diagnosis result of an abnormality of a diagnosis target, that is, a diagnosis result of a disease, from the MR image 71, and the information providing unit 12 may provide the diagnosis support information 110 including the estimated diagnosis result (pseudo diagnosis result) 117.
- the training data 70 includes, in addition to the image data 71 to 74 of a plurality of subjects, a diagnosis result 94 of an abnormality of the subject to be diagnosed, for example, a diagnosis result of dementia.
- the information providing unit 12 provides the diagnosis support information 110 including the pseudo-diagnosis result 117, covering the abnormality of the diagnosis target of the patient 5, derived by the learned image processing model 60 from the information included in the patient information 105.
- the information providing function 12 of the diagnosis support module 10 may be included in the terminal 20 of the medical institution 200, with the learned image processing model 60 installed in advance so that the diagnosis support information 110 can be provided in a standalone state. Further, although the image processing model 60 is described above as already learned, it may perform self-learning based on new cases at any time, and the image processing model 60 updated by the model providing module 50 may be provided automatically.
- the image processing model 60 can also perform the same learning as described above based on training data (learning data, teacher data) 70 including different image types (modalities), and thereby generate pseudo images other than the amyloid PET image indicating the distribution of amyloid β.
- the accumulation of tau protein (tau) is considered to be directly linked to neuronal death in dementia.
- PET images (tau PET) using a tracer such as PBB3 for visualizing the accumulation of tau protein can also be obtained.
- the distribution of tau protein is often evaluated with a cerebellum-based SUVR value in the same manner as described above, although attention may be paid to another evaluation target area 82.
- the estimation of the FDG (sugar metabolism) PET image can be handled by the image processing model 60 learned by including the FDG-PET image in the training data 70.
- the image processing model 60 may be trained by using the CT image data as the first type image data 71 mainly including the morphological information.
- One of the other modalities that the image processing model 60 can support is a SPECT image.
- An example of a SPECT image is DatSCAN (Dopamine transporter SCAN), an imaging method that visualizes the distribution of the dopamine transporter (DAT) in a SPECT examination after administration of the radiopharmaceutical 123I-Ioflupane.
- the purposes of this imaging are the early diagnosis of Parkinson's syndrome (PS), including Parkinson's disease (PD), assisting in the diagnosis of Dementia with Lewy Bodies (DLB), and evaluating dopamine nerve loss in the striatum, for example in connection with medication such as levodopa.
- DatSCAN imaging requires a waiting time of 3-6 hours for the test agent to reach the brain and, although it has few side effects, it is invasive and relatively expensive even with insurance. Simple estimation is therefore useful for reducing the burden on the patient.
- the evaluation (index value) of DatSCAN uses the BR (Binding Ratio, also called SBR, Specific Binding Ratio), expressed by the following equation (4), where C is the average DAT value in each region of interest: BR = (Cspecific - Cnonspecific) / Cnonspecific ... (4). Here, Cspecific is the average value in the putamen and caudate nucleus in the brain, and Cnonspecific is the average value in the occipital cortex in the brain. Therefore, in the model providing module 50, the first type real image data 71 of the reference region 81 of the plurality of subjects is an MR image of the entire brain, and the second type real image data 72 (74) includes the evaluation target region 82.
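As a rough sketch of the index computation in equation (4), assuming the standard specific-binding-ratio form (Cspecific - Cnonspecific) / Cnonspecific implied by the region definitions above:

```python
def binding_ratio(c_specific: float, c_nonspecific: float) -> float:
    """BR (specific binding ratio) for DatSCAN evaluation.

    c_specific:    average DAT signal in the putamen and caudate nucleus
    c_nonspecific: average DAT signal in the occipital cortex
    """
    return (c_specific - c_nonspecific) / c_nonspecific

print(binding_ratio(3.0, 1.0))  # (3.0 - 1.0) / 1.0 = 2.0
```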
- based on the training data 70 including SPECT images that contain the putamen, caudate nucleus, and occipital cortex in the brain, the image processing model 60 can be machine-learned so as to generate, from the MR image 71, a pseudo SPECT image 75 including the putamen, caudate nucleus, and occipital cortex, which are the evaluation target areas 82.
- a pseudo SPECT image 115 which is pseudo image data of the evaluation target area 82 is generated, and the diagnosis support information 110 including a pseudo BR value (pseudo SBR value) as a pseudo index value based on the pseudo SPECT image 115 can be provided.
- for the loss function (loss) Ei at the time of learning of the image processing model 60, the difference between image elements and the index value BR are used, and by narrowing the regions used to calculate them down to the putamen, the caudate nucleus, and the occipital cortex, the parameters 62 can be optimized.
- FIG. 10 shows a model providing module (image output model generation device) 50.
- the generation device 50 includes a first control unit 101, a first storage unit 102, and a first communication unit (communication interface) 52.
- the first control unit 101 is configured to transmit and receive data by transmitting and receiving signals to and from the first storage unit 102 and the first communication unit 52.
- the first communication unit 52 is configured to be able to communicate with each of the external devices such as the hospital terminal 210 via the diagnosis support system 100 or the Internet 9 via a network by wire communication or wireless communication.
- the first control unit 101 includes, for example, an arithmetic processing unit such as a CPU (Central Processing Unit), a cache memory, and an I / O device.
- the first control unit 101 reads and executes the image output model generation program MGP (50p) stored in the first storage unit 102, thereby functioning as the learning unit 53 including the preprocessing unit 53a and the model generation unit 53b.
- the first storage unit 102 can be configured by, for example, a main storage device such as a memory and an auxiliary storage device such as an HDD.
- the first storage unit 102 is configured to store the image output model generation program MGP (50p), the training data set GDS (70), the test data set TDS (79), the model algorithm MA (61), and the model parameter set MP (62).
- the subject related to the subject data SDi is referred to as "subject i" as appropriate.
- N is, for example, 300.
- the subject data SDi of the training data set GDS (70) includes whole morphological image data KDi (71), real substance whole image data BDi (72), morphological partial image data KPDi (73), real substance partial image data BPDi (74), and the actual index value Vi (SUVR).
- the training data set 70 may include part position data PDi that is a template (mask) for extracting the morphological partial image data 73 and the real substance partial image data 74.
- the part position data PDi, the morphological partial image data KPDi, the real substance partial image data BPDi, and the real index value Vi may be recognized (computed) by the first control unit 101 during execution of the image output model generation program MGP, and need not be stored before its execution.
- an example of the whole morphological image data KDi (71) is data obtained by imaging one or a plurality of parts of the body of the subject i, specifically the form inside the brain, by a predetermined method such as CT or MRI.
- the morphological whole image data KDi is an MRI whole image of the brain.
- the morphological whole image data KDi (71), which is the first type of real image data of the reference area, is 3D data of a predetermined size (for example, 64 ⁇ 64 ⁇ 64).
- the whole morphological image data KDi can be represented by a combination of image elements (voxels in this embodiment) holding a number of values (luminance values) corresponding to the predetermined size (for example, 64 x 64 x 64). The whole morphological image data KDi may instead be represented by a combination of image elements holding colors (RGB values) according to the predetermined size. Since the positions of these image elements can be represented by, for example, three-dimensional coordinates, each image element of the whole morphological image data KDi is denoted as KDi (x, y, z) as appropriate.
- real substance whole image data BDi (72), which is real image data including the second type of evaluation target area, is one or more parts of the body of the subject i, specifically, the brain. This is data obtained by imaging the distribution of the target substance in the inside by a predetermined method such as PET.
- the real substance whole image data BDi is 3D data having the same size as the morphological whole image data KDi.
- the real substance whole image data BDi is aligned so that the position on the image of each of the one or more parts shown in the real substance whole image data BDi matches the position on the image of the corresponding part in the morphological whole image data KDi.
- FIG. 2 illustrates a cross-sectional view of the actual substance whole image data BDi (72) at a location corresponding to each of the cross-sectional views of the morphological whole image data KDi (71).
- the value (luminance value) indicating the luminance of the image element (voxel) in which the target substance exists is equal to or more than a predetermined value. Since the positions of these image elements can be represented by, for example, three-dimensional coordinates, each image element of the real-substance whole image data BDi is appropriately represented as BDi (x, y, z).
- the target substance imaged by the PET image may be a substance that reacts with the tracer, or may be the tracer itself. Further, in addition to the PET image showing the distribution of amyloid β protein, a PET image showing the distribution of tau protein may be used.
- Part position data PDi is data indicating the position of one or a plurality of parts (evaluation target areas, for example, cerebral gray matter and cerebellum) 82 of the subject i shown in the morphological whole image data KDi on the image.
- the part position data PDi may be represented by a parameter indicating which part of the subject i is each image element KDi (x, y, z) constituting the whole morphological image data KDi.
- the part position data PDi for one part may be represented by, for example, a parameter indicating a region in three-dimensional coordinates, or by a set of three-dimensional coordinates.
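The two representations of the part position data mentioned above (a per-voxel label parameter versus a set of three-dimensional coordinates) can be sketched as follows; the label values are illustrative assumptions, not the patent's encoding:

```python
import numpy as np

# Part position data as per-voxel labels on a toy 4x4x4 volume.
# Illustrative labels: 0 = blank, 1 = cerebral gray matter, 2 = cerebellum.
labels = np.zeros((4, 4, 4), dtype=np.int8)
labels[0:2, :, :] = 1  # "cerebral gray matter" slabs
labels[3, :, :] = 2    # "cerebellum" slab

# Representation 1: a parameter (label) per image element KDi(x, y, z).
# Representation 2: the set of 3D coordinates belonging to one part.
cerebellum_coords = set(map(tuple, np.argwhere(labels == 2)))
print(len(cerebellum_coords))  # 1 slab of 4 x 4 = 16 voxels
```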
- the morphological partial image data KPDi (73) is partial image data obtained by extracting a predetermined part (evaluation target area) 82 from the whole morphological image data KDi (71) indicating the entire reference area 81.
- FIG. 2 illustrates a cross-sectional view of the morphological partial image data KPDi (73) at a location corresponding to each of the cross-sectional views of the morphological whole image data KDi (71).
- the real substance partial image data BPDi (74) is, as shown in FIG. 2, partial image data obtained by extracting a portion corresponding to the morphological partial image data KPDi (73) from the real substance whole image data BDi (72).
- FIG. 2 illustrates a cross-sectional view of the actual substance partial image data BPDi at a location corresponding to each of the cross-sectional views of the actual substance entire image data BDi.
- as in the real substance whole image data BDi, the value (luminance value) indicating the luminance of an image element (voxel) in which the target substance is present is equal to or greater than a predetermined value. Alternatively, the value indicating the color of an image element in which the target substance is present may be an RGB value indicating a predetermined color (for example, yellow). Since the positions of these image elements can be represented by, for example, three-dimensional coordinates, each image element of the real substance partial image data BPDi is denoted as BPDi (x, y, z), similarly to the real substance whole image data BDi.
- the actual index value Vi is an index value for determining whether or not the subject i is likely to develop a certain disease, for example, Alzheimer's dementia.
- the actual index value Vi is, for example, an SUVR value that is recognized based on the actual substance partial image data BPDi and indicates a ratio between the degree of accumulation of amyloid β protein in cerebral gray matter and the degree of accumulation of amyloid β protein in the cerebellum.
- the SUVR value may indicate a ratio between the degree of accumulation of tau protein in cerebral gray matter and the degree of accumulation of tau protein in the cerebellum.
- the blood analysis value BLi is, for example, a value such as HbA1c (blood sugar level) measured by a blood test of the subject i, or a value obtained based on the result of mass analysis of the blood of the subject i by IP-MS (mass spectrometry by immunoprecipitation). The blood analysis value BLi may also be a composite biomarker that mathematically combines the ratio of 1) APP699-711 to 2) Aβ1-40 with the ratio of 2) Aβ1-40 to 3) Aβ1-42, derived from the respective amounts of 1) APP699-711, 2) Aβ1-40, and 3) Aβ1-42 measured by mass spectrometry.
- the blood analysis value BLi may be, for example, a value such as HbA1c (blood sugar level) measured by a blood test in a general normal examination or a value determined based on mass spectrometry.
- the genotype GEi (91a) is a value based on a genotype composed of a pair of alleles of the subject i.
- the genotype GEi is, for example, a value based on the genotype of the APOE gene, which has three alleles ε2, ε3, and ε4. More specifically, for example, discrete values of 0 to 2 are set such that a genotype having two ε4 alleles is 2, one having one ε4 is 1, and one having no ε4 is 0.
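The discrete 0-2 encoding of the APOE genotype described above can be sketched as follows; representing alleles as strings is an assumption made for illustration:

```python
def apoe_e4_score(allele_1: str, allele_2: str) -> int:
    """Encode an APOE genotype as the number of e4 alleles (0-2),
    matching the discrete GEi values described in the text."""
    return (allele_1 == "e4") + (allele_2 == "e4")

print(apoe_e4_score("e3", "e4"))  # 1
print(apoe_e4_score("e4", "e4"))  # 2
print(apoe_e4_score("e2", "e3"))  # 0
```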
- the subject related to the subject data SDj is referred to as "subject j" as appropriate.
- M is, for example, 75. Since the data of the subject data SDj of the test data set TDS is the same as the subject data SDi of the training data set GDS, the description and illustration thereof are omitted.
- Each item of data included in the subject data SDj of the test data set TDS is denoted by adding j to the corresponding code, such as data KDj.
- Model algorithm MA (61) is information for specifying the algorithm used for the image output model.
- Information for specifying the algorithm used for the image output model includes, for example, the machine learning library used, such as TensorFlow, the number of layers of the multilayer convolutional neural network realized by the library, the functions used in the input layer, hidden layers, and output layer, and information such as the number of functions in each layer.
- the model parameter set MP (62) is a set of parameters that defines the operation of the algorithm used for the image output model.
- An example of the parameter is, for example, a coefficient by which an input value to each function is multiplied.
- FIG. 11 is a flowchart showing an outline of processing in the model providing module (image processing model generation device) 50.
- training data 70 for learning the image processing model 60 is prepared (preprocessing).
- the image processing model 60 is generated and output.
- the learning process 510 trains the image processing model 60 that generates, from the real image data (real MR image data) 71 of the first type reference region 81, the pseudo image data (pseudo PET image) 75 of the second type evaluation target region 82.
- the learning process 510 includes a process (model parameter optimization process) 520 of learning an optimal parameter (model parameter) 62 of the image processing model 60.
- the model parameter optimization processing 520 may include, in the loss function Ei, a real index value (SUVR) obtained from the distribution information (SUV) of a first substance (for example, amyloid) in the evaluation target area 82 of the second type real image data (PET image) 72, and a degree of deviation (for example, the square error MSE) between the image element values of the pseudo image data (pseudo PET image) 75 of the second type evaluation target area 82 and the image element values of the real image data (PET image) 74 of the same area.
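A minimal sketch of a loss function Ei that combines the voxel-wise square error with an SUVR-style index term, as described above; the additive weighting and exact index form are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def loss_Ei(pred_pet, real_pet, mask_target, mask_ref, weight=1.0):
    """Loss combining voxel-wise MSE with an SUVR-style index term.

    pred_pet / real_pet: 3D arrays restricted to the evaluation target area.
    mask_target / mask_ref: boolean masks for the target region and the
    reference (cerebellum) region used to compute the index value.
    """
    mse = np.mean((pred_pet - real_pet) ** 2)

    def suvr(vol):
        return vol[mask_target].mean() / vol[mask_ref].mean()

    index_term = abs(suvr(pred_pet) - suvr(real_pet))
    return float(mse + weight * index_term)

# Toy 2x2x2 example: identical prediction and ground truth give zero loss.
vol = np.ones((2, 2, 2))
vol[0] = 2.0
m_t = np.zeros((2, 2, 2), dtype=bool)
m_t[0] = True
m_r = ~m_t
print(loss_Ei(vol, vol, m_t, m_r))  # 0.0
```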
- the learning process 510 may include a step 524 of learning the parameters 62 of the image processing model 60 using the attribute information 90 including the information 91 related to the biomarkers of the plurality of test subjects included in the training data 70.
- the biomarker 91 may include at least one of the information 91b obtained by analyzing the blood of each of the plurality of test subjects and the information 91a based on the type of the gene.
- the model algorithm 61 optimized in the learning process 510 may include a convolutional neural network structure, and the step 524 of optimizing using the attribute information may include learning the parameters 62 of the image processing model 60 with the attribute information 90 included in the feature amounts of the convolutional neural network structure.
- in step 530, the image processing model 60 whose parameters have been optimized by the learning process 510 is evaluated based on the test data 79.
- in step 540, the learned image processing model 60 is output and becomes usable in the diagnosis support module 10.
- FIG. 12 shows an outline of the processing included in the preprocessing 300.
- the pre-processing unit 53a executes the processing of STEP104 to STEP118 for each of the subject data SDi of the training data set GDS (70) and the subject data SDj of the test data set TDS (STEP102).
- data to be processed is represented by adding k to a code like subject data SDk.
- a subject related to the data is represented as a subject k.
- the preprocessing unit 53a recognizes the position of each part in the body of the subject k on the image by analyzing the morphological whole image data KDk (71), generates the part position data PDk, and stores it in the first storage unit 102. (Step 104).
- for example, the preprocessing unit 53a compares template whole image data, which indicates the shape, position, and luminance of typical human body parts, with the morphological whole image data KDk, and generates the part position data PDk by labeling which part of the subject k each portion of the morphological whole image data KDk corresponds to.
- the preprocessing unit 53a indicates, for each of the image elements KDk (x, y, z) constituting the whole morphological image data KDk, one of the cerebral gray matter, the cerebellum, the white matter with many nonspecific bindings, and a blank area. By attaching a label, site position data PDk is generated.
- the pre-processing unit 53a recognizes the morphological partial image data KPDk (73) based on the morphological whole image data KDk and the part position data PDk, and stores it in the first storage unit 102 (STEP 106). For example, the preprocessing unit 53a blanks out the partial image data of the site labeled with the white matter with a large amount of non-specific binding shown in the site position data PDk from the entire morphological image data KDk (71), Recognize the partial image data KPDk (73).
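The extraction of the morphological partial image data KPDk by blanking out labeled voxels (here, the nonspecific white-matter label) can be sketched as follows; the label values and blank value are illustrative assumptions:

```python
import numpy as np

def extract_partial(whole: np.ndarray, labels: np.ndarray, drop_label: int,
                    blank: float = 0.0) -> np.ndarray:
    """Blank out voxels carrying a given label (e.g. white matter with
    many nonspecific bindings) to obtain partial image data such as
    KPDk from whole image data KDk."""
    partial = whole.copy()
    partial[labels == drop_label] = blank
    return partial

# Toy 2x2x2 example with label 3 standing in for nonspecific white matter.
whole = np.full((2, 2, 2), 5.0)
labels = np.zeros((2, 2, 2), dtype=np.int8)
labels[0] = 3
partial = extract_partial(whole, labels, drop_label=3)
print(partial[0, 0, 0], partial[1, 0, 0])  # 0.0 5.0
```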
- the preprocessing unit 53a recognizes the real material partial image data BPDk (74) by extracting the partial image data of the part indicated by the morphological partial image data KPDk (73) from the whole real material image data BDk (72). Are stored in the first storage unit 102 (STEP 108).
- the preprocessing unit 53a recognizes first real substance partial image data indicating a predetermined first part (for example, cerebral gray matter) in the real substance partial image data BPDk (74) based on the part position data PDk (STEP 110).
- the preprocessing unit 53a recognizes, based on the first real substance partial image data, a first real index value that serves as a basis for the diagnosis of a disease of the subject k (STEP 112). For example, the preprocessing unit 53a counts the number of image elements whose luminance value in the first real substance partial image data is equal to or greater than a predetermined value, and recognizes that number as the first real index value.
- An image element whose luminance value is equal to or greater than the predetermined value indicates that a target substance (for example, amyloid β protein or tau protein, which are highly relevant to Alzheimer's dementia) is present at the corresponding site of the subject k.
- the first actual index value indicating the number of such image elements indicates the degree of accumulation of the target substance in the predetermined first portion.
- the preprocessing unit 53a recognizes second real substance partial image data indicating a predetermined second part (for example, cerebellum) of the real substance partial image data BPDk (74) based on the part position data PDk (STEP 114).
- the preprocessing unit 53a recognizes, based on the second real substance partial image data, a second real index value that serves as a basis for the diagnosis of a disease of the subject k (STEP 116). For example, the preprocessing unit 53a counts the number of image elements whose luminance value in the second real substance partial image data is equal to or greater than a predetermined value, and recognizes that number as the second real index value.
- the second actual index value indicates the degree of accumulation of the target substance in the predetermined second portion, similarly to the first actual index value.
- the preprocessing unit 53a recognizes the real index value Vk based on the first real index value and the second real index value, and stores it in the first storage unit 102 (STEP 118). For example, the preprocessing unit 53a recognizes the ratio between the first real index value and the second real index value as the real index value Vk.
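The real index value Vk as the ratio of above-threshold voxel counts in the first part (e.g. cerebral gray matter) and the second part (e.g. cerebellum) can be sketched as follows; the function name and threshold are illustrative:

```python
import numpy as np

def real_index_value(first_part: np.ndarray, second_part: np.ndarray,
                     threshold: float) -> float:
    """Vk = (count of above-threshold elements in the first part)
          / (count of above-threshold elements in the second part)."""
    n1 = int(np.count_nonzero(first_part >= threshold))
    n2 = int(np.count_nonzero(second_part >= threshold))
    return n1 / n2

gray = np.array([0.2, 0.9, 0.8, 0.7])   # first part luminance values
cereb = np.array([0.1, 0.9, 0.2, 0.3])  # second part luminance values
print(real_index_value(gray, cereb, threshold=0.5))  # 3 / 1 = 3.0
```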
- FIG. 13 shows an outline of the image processing model generation processing 500.
- the model generation unit 53b sets the model algorithm MA (61) (STEP 202).
- the model generation unit 53b reads a library used for machine learning according to the image processing model generation program MGP (50p), and as a model algorithm 61, the number of layers of the multilayer convolutional neural network realized by the library, the input layer , The functions used in the hidden layer and the output layer, the number of functions in each layer, and the like are set.
- the model generation unit 53b initializes the model parameter set MP (62) (STEP 204). For example, the model generation unit 53b sets a random value as each parameter.
- the model generation unit 53b loops a predetermined number of times (for example, 20,000 times) to execute the learning process 510 of STEP 300 (STEP 206).
- the model generation unit 53b executes an evaluation process 530 (STEP 400).
- the model generation unit 53b stores the model algorithm MA (61) and the model parameter set MP (62) in the first storage unit 102 (STEP 208). Thus, the image output model generation processing ends.
- FIG. 14 shows an outline of the learning process 510.
- the processing shown in FIG. 14 is a model learning process in which the model algorithm MA is defined so as to take the whole morphological image data KDi (71), the blood analysis value BLi (91b), and the genotype GEi (91a) as inputs, and to output pseudo substance whole image data predBDi corresponding to the real substance whole image data BDi (72).
- the pseudo substance whole image data predBDi can be regarded as image data imitating the distribution in the brain of a substance related to a specific disease, for example amyloid β protein or tau protein.
- the model generation unit 53b loops for each subject data SDi of the training data set 70 and executes the processing of STEP 304 to STEP 320 (STEP 302).
- the model generation unit 53b inputs the whole morphological image data KDi to the model and recognizes the outputted pseudo-substance whole image data predBDi (STEP 304).
- FIG. 15 illustrates a specific example of recognition of the pseudo substance entire image data predBDi.
- a recognition model MOD (700) having a U-net structure, which is an example of a convolutional neural network structure, is used to recognize the pseudo substance whole image data predBDi.
- a convolutional neural network having a U-Net structure (or an AutoEncoder structure in which the skip connections of the U-Net structure are removed) optimizes its internal parameters in the learning process 510 using the input MR image and the actually captured PET image paired with it.
- the model 700 having the U-net structure is effective as the model algorithm 61 of the image processing model 60 for generating the pseudo substance partial image data predBPDi (75) from the whole shape image data KDi (71).
- the three-dimensional morphological MR image data which is the morphological whole image data KDi is set as the input image data IPI.
- the input image data IPI is input to the hidden layer 710, and pooled and convolved in each layer until the hidden layer 710 reaches the feature layer 720. That is, in the recognition model 700, convolution and pooling are performed in the downward path.
- each layer is convolved and upsampled until the output image OPI, which is the result of the hidden layer 710, is obtained from the feature layer 720.
- in the recognition model 700, convolution and upsampling are performed in the upward path, and in this example, the pseudo substance whole image data predBDi is restored (estimated) and recognized as the output image OPI.
- the output of each layer of the downward path to the feature layer 720 of the hidden layer 710 to which the input image IPI has been input is also input to the layer of the same depth of the upward path. That is, in the recognition model 700, the output of each layer in the downward path is merged with the input to the layer of the same depth in the upward path.
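- the merging of downward-path outputs into same-depth upward-path inputs (the skip connections of the U-net) can be sketched as follows; this is a minimal NumPy illustration with assumed 2-D toy shapes and simple average-pooling/nearest-neighbour operators, not the actual 3-D layers of the recognition model 700:

```python
import numpy as np

def downsample(x):
    """Average-pool by 2 along each spatial axis (toy stand-in for pooling)."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour upsampling by 2 (toy stand-in for up-convolution)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Downward path: keep each level's output so it can be merged later.
x0 = np.random.rand(8, 8)            # input image IPI (toy 2-D example)
x1 = downsample(x0)                  # level-1 feature map
x2 = downsample(x1)                  # feature layer (bottom of the "U")

# Upward path: upsample and merge with the same-depth downward output.
u1 = upsample(x2)                    # back to level-1 resolution
merged1 = np.stack([u1, x1])         # skip connection: merge with downward x1
u0 = upsample(merged1.mean(axis=0))  # back to input resolution
print(u0.shape)                      # → (8, 8)
```

the merge is shown here as a simple stack-and-average for brevity; in an actual U-net it is a channel-wise concatenation followed by convolution.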
- in the recognition model 700, it is possible to add the attribute information 90 including the biomarker to the output data from the feature layer 720.
- characteristic data 730 (three-dimensional format: L × M × (N + r) × 1), obtained by combining the image feature data 731 (L × M × N × 1), which is the output of the layer immediately before the feature layer 720 in the downward path, with data 732 (data having r features: L × M × r × 1) including values determined by the biomarker information, for example, the blood analysis value BLi (91b) and the genotype GEi (91a), is input in the feature layer 720.
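- the combination of the image feature data 731 with the biomarker-derived data 732 along the feature axis can be sketched as follows; the dimensions L, M, N, r and the biomarker values here are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Illustrative dimensions (assumed): L x M spatial grid, N image features,
# r biomarker-derived features per spatial location.
L, M, N, r = 4, 4, 8, 2

image_features = np.random.rand(L, M, N, 1)  # data 731 (L x M x N x 1)
biomarkers = np.array([1.2, 0.0])            # e.g. blood value, genotype code

# Broadcast the r biomarker values over the grid to form data 732 (L x M x r x 1).
biomarker_features = np.broadcast_to(
    biomarkers.reshape(1, 1, r, 1), (L, M, r, 1)
)

# Combine along the feature axis to obtain data 730 (L x M x (N + r) x 1).
combined = np.concatenate([image_features, biomarker_features], axis=2)
print(combined.shape)  # → (4, 4, 10, 1)
```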
- in the output data 730 from the feature layer 720, the image feature data 731 is combined with the biomarker-based data 732, which includes the blood analysis value BLi and the value determined by the genotype GEi; these biomarkers are highly relevant to the amyloid β protein or tau protein in the brain associated with Alzheimer's dementia as the specific disease.
- the biomarker-based data 732 may be combined with the input to any layer, specifically with the input to each layer of the hidden layer 710. Further, data 732 based on the same biomarker may be combined a plurality of times. Also, when data 732 based on a plurality of biomarkers is used, the data based on each biomarker may be combined with inputs to different layers.
- the model generation unit 53b recognizes the pseudo substance partial image data predBPDi by extracting the partial image data corresponding to the morphological partial image data KPDi from the pseudo substance whole image data predBDi (STEP 306).
- the model generation unit 53b calculates, for each value of the image element predBDi(x, y, z) of the pseudo substance partial image data, the square error with the value of the image element BDi(x, y, z) of the real substance partial image data BDi at the same coordinates (x, y, z), and obtains the average value MSEi of the square errors in the pseudo-substance partial image data (STEP 308).
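- the computation of the average square error MSEi between image elements at identical coordinates can be sketched as follows, using toy 3-D volumes in place of predBDi and BDi (the values are chosen for illustration only):

```python
import numpy as np

# Toy stand-ins for predBDi (model output) and BDi (real substance image data)
# on the same (x, y, z) coordinate grid.
pred = np.full((2, 2, 2), 2.0)
real = np.zeros((2, 2, 2))

# Square error per image element at identical coordinates, then the mean MSEi.
mse_i = ((pred - real) ** 2).mean()
print(mse_i)  # → 4.0
```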
- the model generation unit 53b recognizes first pseudo substance partial image data indicating a predetermined first part (for example, cerebral gray matter) in the pseudo substance partial image data predBPDi (75) based on the part position data PDi (STEP 310).
- the model generation unit 53b recognizes a first pseudo index value serving as basic information for diagnosing a disease of the subject i based on the first pseudo substance partial image data (STEP 312). For example, the model generation unit 53b counts the number of image elements in which the value indicating the luminance of each image element of the first pseudo-substance partial image data is a value indicating a predetermined luminance, and recognizes the number as the first pseudo index value. Alternatively, or in addition, the preprocessing unit 53a may count the number of image elements in which the luminance of each image element of the first pseudo substance partial image data is equal to or greater than a predetermined value, and recognize the number as the first pseudo index value.
- the model generation unit 53b recognizes second pseudo substance partial image data indicating a predetermined second part (for example, cerebellum) among the pseudo substance partial image data predBPDi based on the part position data PDi (STEP 314).
- the model generation unit 53b recognizes a second pseudo index value serving as a disease index serving as basic information for diagnosing the disease of the subject i based on the second pseudo substance partial image data (STEP 316). For example, the model generation unit 53b counts the number of image elements in which the luminance values of the image elements are equal to or greater than a predetermined value in the second pseudo substance partial image data, and recognizes the number as a second pseudo index value.
- the model generation unit 53b recognizes the pseudo index value predVi based on the first pseudo index value and the second pseudo index value, and stores the pseudo index value predVi in the first storage unit 120 (STEP 318). For example, the model generation unit 53b recognizes the ratio between the first pseudo index value and the second pseudo index value as the pseudo index value predVi.
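- the counting and ratio steps (STEP 312, STEP 316, STEP 318) can be sketched as follows; the luminance values, the threshold, and the flattened region arrays are illustrative assumptions:

```python
import numpy as np

# Toy luminance values for the two recognized regions (illustrative only).
gray_matter = np.array([0.9, 0.8, 0.2, 0.7])   # first pseudo substance partial image data
cerebellum  = np.array([0.9, 0.1, 0.1, 0.1])   # second pseudo substance partial image data
threshold = 0.5

# Count image elements whose luminance is at or above the threshold.
first_index = int((gray_matter >= threshold).sum())   # first pseudo index value
second_index = int((cerebellum >= threshold).sum())   # second pseudo index value

# The pseudo index value predVi as the ratio of the two counts (a pseudo SUVR).
pred_v = first_index / second_index
print(pred_v)  # → 3.0
```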
- the pseudo index value predVi corresponds to the actual index value Vi related to a specific disease.
- the actual index value Vi is an index value for determining whether or not the subject has a possibility of developing a certain disease.
- the pseudo index value predVi is an index value for determining whether or not the subject i may develop a certain disease, for example, Alzheimer's dementia.
- when the actual index value Vi is an actual SUVR, it can be said that the pseudo index value predVi is a pseudo SUVR.
- the model generation unit 53b evaluates the loss Ei of the current model with respect to the subject data SDi based on an index measuring the difference between image elements, for example, the square error MSEi of the image elements, the actual index value Vi, and the pseudo index value predVi (STEP 320).
- the model generation unit 53b evaluates the loss Ei of the current model with respect to the subject data SDi using the above-described equation (2).
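- since equation (2) itself is not reproduced in this excerpt, the loss Ei can only be sketched under an assumption; the weighted sum below of the image-element error MSEi and the squared gap between the real index value Vi and the pseudo index value predVi is one plausible form, not the patent's actual formula:

```python
# Assumed loss form (equation (2) is not given here): image-element MSE plus
# a weighted penalty on the gap between real and pseudo index values.
def loss(mse_i: float, v_i: float, pred_v_i: float, lam: float = 1.0) -> float:
    return mse_i + lam * (v_i - pred_v_i) ** 2

print(round(loss(0.25, 1.4, 1.2), 2))  # → 0.29
```

in an actual training loop, this scalar would be minimized per subject by back-propagation over the model parameter set MP, as described in STEP 322.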
- the model generation unit 53b modifies the model parameter set MP by the error back propagation method or the like so that each loss Ei is minimized (STEP 322). Thus, the present process ends.
- FIG. 16 shows the outline of the evaluation process 530.
- the model generation unit 53b inputs each subject data SDi of the training data set GDS (70) to the trained model 60, and recognizes each output pseudo substance whole image data predBDi (STEP 402).
- the model generating unit 53b recognizes each pseudo index value predVi for each pseudo substance whole image data predBDi by performing the same processing as in STEP 306 and STEP 310 to STEP 318 of the learning processing 510 (STEP 404).
- the model generating unit 53b creates a two-dimensional coordinate graph having the real index value Vi as the horizontal axis and the pseudo index value predVi as the vertical axis, for example, the graph shown in FIG. 4 (STEP 406).
- the model generation unit 53b inputs the subject data SDj of the test data set TDS (79) to the trained model 60, and recognizes the output pseudo-substance whole image data predBDj (STEP 408).
- the model generation unit 53b recognizes each pseudo index value predVj for each pseudo substance whole image data predBDj by performing the same processing as in STEP 306 and STEP 310 to STEP 318 of the learning processing 510 (STEP 410).
- the model generating unit 53b creates a graph of two-dimensional coordinates with the real index value Vj on the horizontal axis and the pseudo index value predVj on the vertical axis, for example, FIG. 5 (STEP 412).
- FIG. 17 shows a different example of the learning process 510.
- the learning process 510 in FIG. 14 was a learning process for the model in which the model algorithm MA is defined such that the whole morphological image data KDi (71) is input and the pseudo substance whole image data predBDi corresponding to the real substance whole image data BDi is output.
- the learning process 510 shown in FIG. 17, by contrast, trains the model 60 in which the model algorithm MA (61) is defined such that the whole morphological image data KDi (71) is input and the pseudo substance partial image data predBPDi (75) corresponding to the real substance partial image data BPDi (74) is output.
- the model generation unit 53b differs from the learning process shown in FIG. 14 in that it executes STEP 350 instead of STEP 304 to STEP 306, and the other processes match.
- the model generation unit 53b inputs the whole morphological image data KDi to the model, and recognizes the output pseudo substance partial image data predBPDi (STEP 350).
- in the subsequent steps, the model generation unit 53b performs processing using the pseudo-substance partial image data predBPDi recognized at STEP 350.
- FIG. 18 further shows the configuration of a terminal (diagnosis support terminal, image display device) 210 of the medical institution 200.
- the support terminal 210 can be configured by a desktop computer, a laptop computer, a tablet terminal, a smartphone, or the like.
- the support terminal 210 includes a second control unit 20, a second storage unit 29, a second communication unit 22, and a display unit 25.
- the second control unit 20 includes, for example, an arithmetic processing unit such as a CPU (Central Processing Unit), a cache memory, and an I / O device.
- the second control unit 20 reads out and executes the image output program 29p stored in the second storage unit 29, and thereby functions as the diagnosis support module 10 including a diagnosis support information providing unit (information providing module, display data recognition unit) 12, a display control unit (output unit) 23, and a patient information acquisition unit (input unit) 21.
- the second storage unit 29 can be configured by, for example, a main storage device such as a memory and an auxiliary storage device such as an HDD.
- the second storage unit 29 is configured to store an image output program 29p and a learned image processing model 60 including a model algorithm 61 and a model parameter set 62.
- the learned image processing model 60 may be downloaded from the model providing module 50 via communication.
- the image processing model 60 may be read from a portable storage medium such as a CD, a DVD, or a USB memory.
- the second communication unit 22 is configured to be able to communicate with the model providing module 50, the morphological image capturing device 221, and other external devices via a network by wire communication or wireless communication.
- the display unit 25 is configured by, for example, a liquid crystal panel.
- the display unit 25 is configured to display an image according to a signal from the display control unit 23.
- FIG. 19 shows an outline of the processing in the diagnosis support module 10.
- the examinee information acquisition unit 11 of the diagnosis support module 10 or the input interface 21 functioning as an acquisition unit acquires the examinee information 105 including the MR image 15 of the brain of the examinee 5.
- the MR image 15 is an example of first type real image data mainly including information on the morphology acquired for a reference region 81, which is a brain including the evaluation target region 82; the MR image 15 included in the examinee information 105 is an example of individual real image data of the examination target including at least the reference region 81.
- in this example, diagnosis support using a pseudo PET image generated from an MR image is performed for Alzheimer's disease of the examinee 5, but, as described above, a diagnosis support service using the image processing model 60 can similarly be provided for other symptoms and other types of images.
- the information providing unit 12 uses the image processing model 60, which has been trained to convert the real image data of the first type reference region 81 (MR image 71) into the pseudo image data of the second type evaluation target region 82 (pseudo PET image 75), to generate a pseudo PET image 115 of the evaluation target region 82 from the MR image 15 of the whole brain (reference region) 81 included in the examinee information 105.
- the second type of real image data included in the training data 70 used for learning the image processing model 60 is a PET image 72 of real substance image data including distribution information obtained by visualizing the distribution of amyloid β, which is the first substance associated with the abnormality to be diagnosed, in this example, Alzheimer's disease.
- the image processing model 60 is learned so that the loss function Ei is calculated based on the real index value Vi (SUVR) obtained from the distribution information of the first substance in the evaluation target region 82 of the second type real image data, the pseudo index value of the second type evaluation target region 82, and the divergence, for example MSEi, of the values of the image elements of the pseudo image data (pseudo PET image) 75 of the second type evaluation target region 82 from the values of the image elements of the real image data (PET image) 74 of the second type evaluation target region 82.
- when the attribute information 14 including the biomarker 91 of the examinee 5 is included in the examinee information 105 acquired in step 601 (step 603), in step 604 the pseudo PET image 115 is generated by employing the image processing model 60 including the parameter set 62 learned with the training data 70 including the attribute information 90 related to the biomarkers of a plurality of subjects.
- when diagnosis support information 110 including a diagnosis result (pseudo-diagnosis result, estimated diagnosis result) 117 is requested in step 605, in step 606 the image processing model 60 including the parameter set 62 learned with the training data 70 including the diagnosis results 94 of the abnormality of a plurality of diagnosis targets is used.
- diagnosis support information 110 based on the pseudo PET image 115 of the evaluation target area 82 generated by the image processing model 60 based on the data included in the patient information 105 is provided.
- the diagnosis support information 110 may include a pseudo index value 116 and a pseudo diagnosis result 117 in addition to the pseudo PET image 115. Further, instead of the pseudo PET image 115, the pseudo index value 116 and / or the pseudo diagnosis result 117 may be included in the diagnosis support information 110.
- the terminal 210 of the medical institution 200 supports the diagnosis of medical personnel such as the doctor 8 by outputting the diagnosis support information 110 provided from the diagnosis support module 10 or the diagnosis support information providing unit 12 to the image display terminal 25.
- the pseudo PET image 115 included in the diagnosis support information 110 may be displayed independently, or may be displayed in parallel with or superimposed on the MR image 15 of the examinee 5; various output methods of the diagnosis support information 110 can be set.
- the doctor 8 can be provided with useful information for diagnosing the presence or absence of the disease or the risk of the patient 5.
- the doctor 8 can thereby recognize the state of the patient 5 more deeply.
- in the diagnosis support system and method using the image processing model (image output model) 60, information on the morphology, such as an MR image or a CT image, which imposes a relatively small load on the examinee or the patient, is mainly used.
- although the acquired three-dimensional images are limited to the various regions of interest (evaluation target regions) 82, it is possible to accurately generate three-dimensional images of substances related to various abnormalities (diseases), and diagnosis support information 110 can be provided based on the three-dimensional images. Therefore, information comparable to a PET image or the like can be acquired frequently without imposing a burden on the examinee or patient, and information on the prevention, progress, or diagnosis of an abnormality (disease) can be provided in a timely manner.
- the diagnosis support system can be applied not only to the diagnosis of diseases but also to applications such as selection of a subject in drug discovery and clinical monitoring, and furthermore, not only diagnosis of a living body including a human body and livestock, but also It is also useful in an application in which the internal state of a structure for which image diagnosis is effective is non-destructively diagnosed.
- in the above description, the whole morphological image data is input to the model, but the configuration may be such that the morphological partial image data is input to the model.
- in the above learning process, the morphological partial image data KPDi and the real substance partial image data BPDi are used as image data, but only the whole morphological image data KDi and the whole real substance image data BDi may be used as the image data in the learning process.
- in the above description, the morphological partial image data and the pseudo substance partial image data are output as image data, but the pseudo substance whole image data may be output instead of the pseudo substance partial image data.
- the case where a model having a U-net structure is used as the recognition model 700 has been described, but a model having another neural network structure, such as an Auto Encoder, may be used.
- the display method includes an examinee morphological image recognition step of recognizing whole morphological image data obtained by imaging the morphology inside the body of the examinee, and uses an image output model generated by the method of generating an image output model.
- the display method may include a pseudo-index value recognition step of recognizing a pseudo-index value related to the disease of the examinee based on the pseudo-substance image data, and the display step may include outputting the pseudo substance image data recognized in the examinee pseudo-substance image recognition step and the pseudo index value recognized in the pseudo index value recognition step to the display unit.
- that one device "recognizes" information may mean that the one device receives the information from another device, that the one device reads information stored in a storage medium connected to the one device, that the one device acquires information based on a signal output from a sensor connected to the one device, that the one device derives the information by executing predetermined arithmetic processing (calculation processing, search processing, or the like) based on received information, information stored in a storage medium, or information acquired from a sensor, that the one device receives, from another device, information obtained as a result of arithmetic processing by the other device, or that the one device reads the information from an internal storage device or an external storage device in accordance with a received signal.
- the image display method includes an examinee morphological image recognition step of recognizing morphological image data obtained by imaging the morphology of the brain of the examinee, an examinee biomarker recognition step of recognizing a value based on a biomarker related to a specific disease of the examinee, a pseudo substance image recognition step of recognizing pseudo substance image data imitating the distribution of the target substance, and a display step of outputting, to the display unit, the pseudo substance image data recognized in the pseudo substance image recognition step, or a pseudo index value recognized based on the pseudo substance image data, which is a value relating to the occurrence of the specific disease, or both.
- in the pseudo-substance image recognition step for obtaining pseudo-substance image data used for diagnosis of a specific disease, by inputting, in addition to the morphological image data of the examinee's brain, a value based on the biomarker related to the specific disease, pseudo substance image data that imitates the distribution of the target substance in the brain of the examinee is recognized.
- since the pseudo-substance image data is generated in consideration of the value based on the biomarker related to the specific disease of the examinee, the estimation accuracy of the distribution of the target substance related to the specific disease in the brain can be improved compared with estimation based only on the morphological image data, and the reproduction accuracy of the pseudo substance image data and the pseudo index value can be improved.
- the image output model may have a U-net structure.
- the pseudo-substance image recognition step may include adding a value based on the biomarker in an input layer of the U-net structure.
- the pseudo-substance image recognition step may include adding a value based on the biomarker in the feature layer of the U-net structure.
- the value based on the biomarker may include a value obtained by analyzing the blood of the examinee, and may include a value based on the genotype of the examinee.
- the morphological image data may be CT image data or MR image data indicating the entire morphology of an imaged cross section of the brain of the examinee.
- by inputting the morphological image data of the brain in the input layer of the U-net structure and inputting the value based on the biomarker related to the specific disease in the input layer or the feature layer, the estimation accuracy of the distribution of the target substance related to the specific disease in the brain can be further improved, and the reproduction accuracy of the pseudo substance image data and the pseudo index value can be further improved.
- according to the image display method, for example, in determining whether there is a risk of developing Alzheimer's dementia, by using values obtained by analyzing blood and genotypes, which are biomarkers highly relevant to the distribution of amyloid β protein or tau protein in the brain, it becomes possible to diagnose Alzheimer's dementia as the specific disease with high accuracy.
- One of the other forms included in the above is an image display device comprising: a display unit; a display data recognition unit that recognizes whole morphological image data obtained by imaging the morphology inside the body of the examinee, inputs morphological image data of the examinee, including an image portion of the whole morphological image data, to the image output model generated by the image output model generation method, and recognizes pseudo-substance image data, output from the image output model, pseudo-showing the distribution of the target substance in a part of the body of the examinee; and a display control unit that outputs the pseudo substance image data recognized by the display data recognition unit to the display unit.
- the display data recognition unit may recognize the pseudo substance image data and the pseudo index value relating to the disease of the examinee.
- the display control unit may output the pseudo substance image data and the pseudo index value to the display unit.
- the image display device includes a display unit, a display data recognition unit that recognizes morphological image data obtained by imaging the morphology of the examinee's brain, recognizes a value based on a biomarker related to a specific disease of the examinee, and inputs them to the image output model, and a display control unit that outputs the pseudo substance image data, or the pseudo index value, or both of them to the display unit.
- the generation method is a computer-implemented method using a storage unit that stores morphological image data that is a captured image of the morphology of the brain of the subject, real substance image data indicating the distribution, in the brain of the subject, of the target substance related to the specific disease of the subject, and a value based on a biomarker related to the specific disease of the subject; the method includes a step of correcting, based on the real substance image data, a model that receives the morphological image data and the value based on the biomarker as inputs and outputs pseudo substance image data, which is an image relating to the distribution of the target substance in the brain of the subject.
- since the morphological image data of the subject's brain stored in the storage unit and the value based on the biomarker related to the specific disease of the subject are input, the image output model can improve the estimation accuracy of the distribution of the target substance related to the specific disease in the subject's brain, compared with a case where only the morphological data of the subject's brain is input.
- usefulness of information provided in diagnosing a disease or the like can be improved.
- the storage unit may store morphological partial image data including an image part of a part of the morphological image of the subject, and real substance partial image data obtained by extracting, from the real substance image of the subject, an image part related to the distribution of the target substance in the partial part.
- the model generating step may input the morphological partial image data and the value based on the biomarker, and correct, based on the real substance partial image data, a model that outputs pseudo substance image data including an image part relating to the distribution of the target substance in the partial site in the brain of the subject.
- the generation method provides the computer with a storage unit that stores morphological image data including an image portion related to a part of the whole morphological image data, which is a captured image of one or a plurality of parts of the body of the subject, and real substance partial image data obtained by extracting, from the whole real substance image data indicating the distribution of the target substance in the body of the subject, an image part relating to the distribution of the target substance in the one or more parts of the subject.
- the morphological image data stored in the storage unit is input, and a model for outputting pseudo-substance image data relating to the distribution of the target substance related to a specific disease in a part of the subject, for example a part of the brain, is corrected based on the real substance partial image data relating to the distribution of the target substance in that part of the subject (the part of the brain); an image output model is thereby generated.
- in some cases, what matters is not the distribution of the target substance related to the disease in the body as a whole, for example in the whole brain, but the distribution in some regions.
- the accuracy of estimating the distribution of the target substance in the part of the subject by the image output model can be improved.
- the accuracy of estimating the distribution of the target substance in a part can be improved.
- the method for generating an image output model includes a real index value recognition step of recognizing a real index value related to the specific disease of the subject based on the real substance partial image data, and the model generation step may include: a divergence evaluation step of evaluating a divergence indicating the degree of divergence of the values of the image elements of the pseudo substance image data from the values of the image elements of the real substance partial image data; a pseudo index value recognition step of recognizing a pseudo index value; and a correction step of correcting the model based on the divergence, the real index value, and the pseudo index value.
- the degree of deviation indicating the degree of deviation of the image element of the pseudo substance image data from the image element of the actual substance partial image data is evaluated.
- a pseudo index value relating to the disease of the subject is recognized based on the pseudo substance image data.
- the model is modified based on the divergence, the actual index value, and the pseudo index value.
- the model is modified in consideration of both the image element and the index value relating to the disease.
- the real index value recognition step may include recognizing a first real index value based on the distribution of the target substance in a first part of the subject included in the real substance partial image data, and recognizing a second real index value based on the distribution of the target substance in a second part of the subject included in the real substance partial image data.
- the pseudo index value recognition step includes recognizing a first pseudo index value based on a distribution of the target substance in a first portion of the subject included in the pseudo substance image data, and recognizing the first pseudo index value in the pseudo substance image data. And recognizing a second pseudo index value based on the distribution of the target substance in the second region.
- the correcting step may include a step of correcting the model based on the degree of divergence, the first real index value, the second real index value, the first pseudo index value, and the second pseudo index value.
- Diagnosis of some brain diseases can change depending not only on the total amount of the target substance in the brain but also on how and where the target substance related to the specific disease is distributed in the brain.
- the model is modified by taking into account the index values of each of the plurality of parts of the brain of the subject. Thereby, the usefulness of the information provided in diagnosing a disease or the like can be improved.
- the method of generating an image output model may include a position recognition step of recognizing a position of the first part and a position of the second part in the whole morphological image data or the whole real substance image data.
- the pseudo index value recognition step may include: recognizing the position of the first part and the position of the second part in the pseudo substance image data based on the position of the first part and the position of the second part in the whole morphological image data or the whole real substance image data; recognizing the first pseudo index value based on the distribution of the target substance at the position of the first part in the pseudo substance image data; and recognizing the second pseudo index value based on the distribution of the target substance at the position of the second part in the pseudo substance image data.
- since the position or size of a part in the human brain varies from person to person, it is not known, at the time the image is generated, which part is located where on the generated image.
- according to the above configuration, the position of the first part and the position of the second part in the pseudo substance image data are recognized. This eliminates the need to analyze what kind of data is included in the pseudo-substance image data, and thus can reduce the calculation cost in generating the image output model.
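- the recognition of each part's position in the pseudo substance image data, and the reading of the distribution at that position, can be sketched as follows; the label volume standing in for the part position data PDi and all values are illustrative assumptions:

```python
import numpy as np

# Toy pseudo substance image and a label volume standing in for the part
# position data PDi: for illustration, 1 marks the first part (e.g. cerebral
# gray matter) and 2 marks the second part (e.g. cerebellum).
pseudo = np.array([[0.5, 0.25],
                   [0.5, 0.25]])
labels = np.array([[1, 2],
                   [1, 2]])

# Recognize each part's position from the labels, then read the distribution
# of the target substance at that position.
first_part_values = pseudo[labels == 1]
second_part_values = pseudo[labels == 2]
print(first_part_values.sum(), second_part_values.sum())  # → 1.0 0.5
```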
- the morphological image data may be CT image data or MR image data indicating the entire morphology of a cross section of the brain of the imaged subject, and the real substance partial image data may be image data of a PET image relating to the cerebral gray matter and cerebellum in the brain of the subject.
- in this case, an image output model may be generated by correcting, based on the image data of the PET image, a model that takes as input CT partial image data or MRI partial image data including image parts relating to the cerebral gray matter and cerebellum in the CT image data or MR image data stored in the storage unit, and that outputs pseudo PET image data including an image portion related to the distribution of amyloid β protein or tau protein in the cerebral gray matter and cerebellum of the subject.
- that is, a model that takes as input CT partial image data or MRI partial image data including image parts relating to the cerebral gray matter and cerebellum in the CT image data or MR image data stored in the storage unit, and that outputs pseudo PET image data including an image portion related to the distribution of amyloid β protein or tau protein in the cerebral gray matter and cerebellum of the subject, is corrected based on the partial image data of the PET image to generate an image output model.
- because the distribution of amyloid β protein or tau protein in the cerebral gray matter and cerebellum can be used as basic information for determining the risk of dementia, an image output model useful for this purpose is generated.
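The end-to-end correction described above (CT/MRI partial input, pseudo PET output, correction against stored real PET data) can be reduced to a toy sketch. Everything here is a deliberate simplification: the publication's model would be a learned image-to-image network, whereas this stand-in uses a per-voxel linear map corrected by plain gradient descent on mean squared error:

```python
import numpy as np

def generate_pseudo_pet(mri, w, b):
    """Hypothetical stand-in for the image output model: a per-voxel linear
    map from MRI partial-image intensity to pseudo PET intensity."""
    return w * mri + b

def correct_model(mri_batch, pet_batch, w=0.0, b=0.0, lr=0.1, steps=2000):
    """Correct the model parameters so its pseudo PET output approaches the
    stored real PET data (gradient descent on mean squared error)."""
    for _ in range(steps):
        err = generate_pseudo_pet(mri_batch, w, b) - pet_batch
        w -= lr * 2.0 * float(np.mean(err * mri_batch))  # d(MSE)/dw
        b -= lr * 2.0 * float(np.mean(err))              # d(MSE)/db
    return w, b
```

After correction, the model can produce pseudo PET image data for subjects who have only undergone CT or MRI, which is what makes the generated output usable as basic information for dementia risk determination.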
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Pathology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- Epidemiology (AREA)
- Optics & Photonics (AREA)
- Veterinary Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Surgery (AREA)
- Primary Health Care (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- High Energy & Nuclear Physics (AREA)
- Heart & Thoracic Surgery (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Quality & Reliability (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Nuclear Medicine (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980060132.4A CN112702947A (zh) | 2018-09-12 | 2019-09-12 | 诊断辅助系统以及方法 |
EP19859252.9A EP3851036A4 (en) | 2018-09-12 | 2019-09-12 | Diagnosis assistance system and method |
US16/973,708 US11978557B2 (en) | 2018-09-12 | 2019-09-12 | Diagnosis support system and method |
JP2020532078A JP6746160B1 (ja) | 2018-09-12 | 2019-09-12 | 診断支援システムおよび方法 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018170708 | 2018-09-12 | ||
JP2018-170708 | 2018-09-12 | ||
JP2019067854 | 2019-03-29 | ||
JP2019-067854 | 2019-03-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020054803A1 true WO2020054803A1 (ja) | 2020-03-19 |
Family
ID=69777069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/035914 WO2020054803A1 (ja) | 2018-09-12 | 2019-09-12 | 診断支援システムおよび方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11978557B2 (enrdf_load_stackoverflow) |
EP (1) | EP3851036A4 (enrdf_load_stackoverflow) |
JP (3) | JP6746160B1 (enrdf_load_stackoverflow) |
CN (1) | CN112702947A (enrdf_load_stackoverflow) |
WO (1) | WO2020054803A1 (enrdf_load_stackoverflow) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2021221008A1 * | 2020-04-28 | 2021-11-04 ||
JP2022040486A (ja) * | 2020-08-31 | 2022-03-11 | Brain Linkage合同会社 | 脳画像を扱う装置、方法及びプログラム |
WO2022054711A1 (ja) * | 2020-09-10 | 2022-03-17 | 株式会社Splink | コンピュータプログラム、情報処理装置、端末装置、情報処理方法、学習済みモデル生成方法及び画像出力装置 |
JPWO2022065061A1 * | 2020-09-28 | 2022-03-31 ||
JPWO2022168969A1 * | 2021-02-05 | 2022-08-11 ||
JP2023122529A (ja) * | 2022-02-22 | 2023-09-01 | ニューロフェット インコーポレイテッド | 認知症診断支援情報の提供装置及びその方法 |
JP2023122528A (ja) * | 2022-02-22 | 2023-09-01 | ニューロフェット インコーポレイテッド | 認知症診断に必要な情報の提供方法及び装置 |
WO2023167157A1 (ja) * | 2022-03-01 | 2023-09-07 | 株式会社Splink | コンピュータプログラム、情報処理装置及び情報処理方法 |
JP2024010034A (ja) * | 2020-12-30 | 2024-01-23 | ニューロフェット インコーポレイテッド | 診断補助情報の提供方法およびそれの実行する装置 |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11166764B2 (en) | 2017-07-27 | 2021-11-09 | Carlsmed, Inc. | Systems and methods for assisting and augmenting surgical procedures |
US20210059822A1 (en) * | 2018-09-12 | 2021-03-04 | Carlsmed, Inc. | Systems and methods for designing orthopedic implants based on tissue characteristics |
JP6746160B1 (ja) * | 2018-09-12 | 2020-08-26 | 株式会社Splink | 診断支援システムおよび方法 |
WO2021060462A1 (ja) * | 2019-09-27 | 2021-04-01 | 富士フイルム株式会社 | 画像処理装置、方法およびプログラム、学習装置、方法およびプログラム、並びに導出モデル |
JP7335157B2 (ja) * | 2019-12-25 | 2023-08-29 | 富士フイルム株式会社 | 学習データ作成装置、学習データ作成装置の作動方法及び学習データ作成プログラム並びに医療画像認識装置 |
US11376076B2 (en) | 2020-01-06 | 2022-07-05 | Carlsmed, Inc. | Patient-specific medical systems, devices, and methods |
JPWO2022138960A1 * | 2020-12-25 | 2022-06-30 ||
JP2024510080A (ja) * | 2021-03-11 | 2024-03-06 | テラン バイオサイエンシズ インコーポレイテッド | バイオマーカーを含む撮像データセットのハーモナイゼーションのためのシステム、デバイス及び方法 |
KR102609153B1 (ko) * | 2021-05-25 | 2023-12-05 | 삼성전기주식회사 | 딥러닝 기반 제조 영상의 이상 탐지 장치 및 방법 |
WO2022250253A1 (en) * | 2021-05-25 | 2022-12-01 | Samsung Electro-Mechanics Co., Ltd. | Apparatus and method with manufacturing anomaly detection |
CN113633306B (zh) * | 2021-08-31 | 2024-10-29 | 上海商汤智能科技有限公司 | 图像处理方法及相关装置、电子设备和存储介质 |
WO2024166932A1 (ja) * | 2023-02-07 | 2024-08-15 | 公立大学法人大阪 | 医療用画像生成方法及び装置、人工知能モデル学習方法及び装置、並びにプログラム |
CN117036305B (zh) * | 2023-08-16 | 2024-07-19 | 郑州大学 | 一种用于咽喉检查的图像处理方法、系统及存储介质 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006320387A (ja) * | 2005-05-17 | 2006-11-30 | Univ Of Tsukuba | 計算機支援診断装置および方法 |
JP2018505705A (ja) * | 2014-12-10 | 2018-03-01 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | 機械学習を用いた医用イメージングの変換のためのシステムおよび方法 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004071080A (ja) * | 2002-08-08 | 2004-03-04 | Pioneer Electronic Corp | 情報再生出力装置、方法、プログラム及び記録媒体 |
US7782998B2 (en) * | 2004-12-21 | 2010-08-24 | General Electric Company | Method and apparatus for correcting motion in image reconstruction |
JP4685078B2 (ja) * | 2007-10-25 | 2011-05-18 | 富士フイルムRiファーマ株式会社 | 画像診断支援システム |
JP5243865B2 (ja) * | 2008-07-07 | 2013-07-24 | 浜松ホトニクス株式会社 | 脳疾患診断システム |
JP2010029284A (ja) * | 2008-07-25 | 2010-02-12 | Konica Minolta Medical & Graphic Inc | プログラム、可搬型記憶媒体及び情報処理装置 |
JP6703323B2 (ja) | 2015-09-17 | 2020-06-03 | 公益財団法人神戸医療産業都市推進機構 | 生体の画像検査のためのroiの設定技術 |
US10102451B2 (en) * | 2015-10-13 | 2018-10-16 | Elekta, Inc. | Pseudo-CT generation from MR data using tissue parameter estimation |
CN110234400B (zh) | 2016-09-06 | 2021-09-07 | 医科达有限公司 | 用于生成合成医学图像的神经网络 |
KR101917947B1 (ko) * | 2017-02-23 | 2018-11-12 | 고려대학교 산학협력단 | 딥러닝을 사용하여 재발 부위를 예측하고, pet 이미지를 생성하는 방법과 장치 |
CN107123095B (zh) * | 2017-04-01 | 2020-03-31 | 上海联影医疗科技有限公司 | 一种pet图像重建方法、成像系统 |
CN107644419A (zh) * | 2017-09-30 | 2018-01-30 | 百度在线网络技术(北京)有限公司 | 用于分析医学影像的方法和装置 |
JP6746160B1 (ja) * | 2018-09-12 | 2020-08-26 | 株式会社Splink | 診断支援システムおよび方法 |
-
2019
- 2019-09-12 JP JP2020532078A patent/JP6746160B1/ja active Active
- 2019-09-12 US US16/973,708 patent/US11978557B2/en active Active
- 2019-09-12 EP EP19859252.9A patent/EP3851036A4/en not_active Withdrawn
- 2019-09-12 WO PCT/JP2019/035914 patent/WO2020054803A1/ja not_active Application Discontinuation
- 2019-09-12 CN CN201980060132.4A patent/CN112702947A/zh active Pending
-
2020
- 2020-07-30 JP JP2020129194A patent/JP7357927B2/ja active Active
-
2023
- 2023-09-20 JP JP2023151875A patent/JP2023169313A/ja active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006320387A (ja) * | 2005-05-17 | 2006-11-30 | Univ Of Tsukuba | 計算機支援診断装置および方法 |
JP2018505705A (ja) * | 2014-12-10 | 2018-03-01 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | 機械学習を用いた医用イメージングの変換のためのシステムおよび方法 |
Non-Patent Citations (5)
Title |
---|
Apoorva Sikka et al.: "MRI to FDG-PET: Cross-Modal Synthesis Using 3D U-Net for Multi-Modal Alzheimer's Classification", 30 July 2018 (2018-07-30), retrieved from the Internet <URL:https://arxiv.org/abs/1807.10111v1> |
Gregory Klein, Mehul Sampat, Davis Staewen, David Scott, Joyce Suhy: "Comparison of SUVR Methods and Reference Regions in Amyloid PET", SNMMI 2015 Annual Meeting, 6 June 2015 (2015-06-06) |
Nakamura et al.: "High performance plasma amyloid-β biomarkers for Alzheimer's disease", Nature, 8 February 2018 (2018-02-08) |
Susan Landau, William Jagust: "Florbetapir processing methods", ADNI (Alzheimer's Disease Neuroimaging Initiative), 25 June 2015 (2015-06-25) |
Tokui, Seiya: "Deep Learning Concepts Understood from Optimization", Operations Research as a Management Science Research, vol. 60, no. 4, April 2015 (2015-04-01), pages 191-197, XP009524766, ISSN: 0030-3674 * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2021221008A1 * | 2020-04-28 | 2021-11-04 ||
JP2022040486A (ja) * | 2020-08-31 | 2022-03-11 | Brain Linkage合同会社 | 脳画像を扱う装置、方法及びプログラム |
WO2022054711A1 (ja) * | 2020-09-10 | 2022-03-17 | 株式会社Splink | コンピュータプログラム、情報処理装置、端末装置、情報処理方法、学習済みモデル生成方法及び画像出力装置 |
JPWO2022054711A1 * | 2020-09-10 | 2022-03-17 ||
JP7440655B2 (ja) | 2020-09-28 | 2024-02-28 | 富士フイルム株式会社 | 画像処理装置、画像処理装置の作動方法、画像処理装置の作動プログラム |
WO2022065061A1 (ja) * | 2020-09-28 | 2022-03-31 | 富士フイルム株式会社 | 画像処理装置、画像処理装置の作動方法、画像処理装置の作動プログラム |
JPWO2022065061A1 * | 2020-09-28 | 2022-03-31 ||
JP2024010034A (ja) * | 2020-12-30 | 2024-01-23 | ニューロフェット インコーポレイテッド | 診断補助情報の提供方法およびそれの実行する装置 |
JP7650530B2 (ja) | 2020-12-30 | 2025-03-25 | ニューロフェット インコーポレイテッド | 診断補助情報の提供方法およびそれの実行する装置 |
JPWO2022168969A1 * | 2021-02-05 | 2022-08-11 ||
WO2022168969A1 (ja) * | 2021-02-05 | 2022-08-11 | 株式会社Medicolab | 学習装置、学習済みモデルの生成方法、診断処理装置、コンピュータプログラム及び診断処理方法 |
JP2023122529A (ja) * | 2022-02-22 | 2023-09-01 | ニューロフェット インコーポレイテッド | 認知症診断支援情報の提供装置及びその方法 |
JP2023122528A (ja) * | 2022-02-22 | 2023-09-01 | ニューロフェット インコーポレイテッド | 認知症診断に必要な情報の提供方法及び装置 |
JP7477906B2 (ja) | 2022-02-22 | 2024-05-02 | ニューロフェット インコーポレイテッド | 認知症診断支援情報の提供装置及びその方法 |
JP7492769B2 (ja) | 2022-02-22 | 2024-05-30 | ニューロフェット インコーポレイテッド | 認知症診断に必要な情報の提供方法及び装置 |
WO2023167157A1 (ja) * | 2022-03-01 | 2023-09-07 | 株式会社Splink | コンピュータプログラム、情報処理装置及び情報処理方法 |
Also Published As
Publication number | Publication date |
---|---|
JP6746160B1 (ja) | 2020-08-26 |
JP2020171841A (ja) | 2020-10-22 |
EP3851036A4 (en) | 2022-06-15 |
CN112702947A (zh) | 2021-04-23 |
US20210257094A1 (en) | 2021-08-19 |
JP7357927B2 (ja) | 2023-10-10 |
US11978557B2 (en) | 2024-05-07 |
JPWO2020054803A1 (ja) | 2020-10-22 |
EP3851036A1 (en) | 2021-07-21 |
JP2023169313A (ja) | 2023-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6746160B1 (ja) | 診断支援システムおよび方法 | |
CN111563523B (zh) | 利用机器训练的异常检测的copd分类 | |
US11996172B2 (en) | Diagnosis support system, information processing method, and program | |
JP5468905B2 (ja) | 神経変性疾患の診断を支援するためのツール | |
Willette et al. | Prognostic classification of mild cognitive impairment and Alzheimer׳ s disease: MRI independent component analysis | |
JP7619662B2 (ja) | 医療映像分析方法、医療映像分析装置及び医療映像分析システム | |
Zhou et al. | MR-less surface-based amyloid assessment based on 11C PiB PET | |
JP6036009B2 (ja) | 医用画像処理装置、およびプログラム | |
JP7492769B2 (ja) | 認知症診断に必要な情報の提供方法及び装置 | |
US11403755B2 (en) | Medical information display apparatus, medical information display method, and medical information display program | |
KR101919847B1 (ko) | 동일 피사체에 대하여 시간 간격을 두고 촬영된 영상 간에 동일 관심구역을 자동으로 검출하는 방법 및 이를 이용한 장치 | |
KR102067412B1 (ko) | 치매 평가 방법 및 이를 이용한 장치 | |
WO2019044081A1 (ja) | 医用画像表示装置、方法及びプログラム | |
Hassanaly et al. | Evaluation of pseudo-healthy image reconstruction for anomaly detection with deep generative models: Application to brain FDG PET | |
Schell et al. | Automated hippocampal segmentation algorithms evaluated in stroke patients | |
Cheng et al. | Alzheimer’s disease prediction algorithm based on de-correlation constraint and multi-modal feature interaction | |
US20250069745A1 (en) | Diagnosis support device, recording medium, and diagnosis support method | |
US20160166192A1 (en) | Magnetic resonance imaging tool to detect clinical difference in brain anatomy | |
WO2023119866A1 (ja) | 情報処理装置、情報処理装置の作動方法、情報処理装置の作動プログラム、予測モデル、学習装置、および学習方法 | |
WO2019003749A1 (ja) | 医用画像処理装置、方法およびプログラム | |
JP7477906B2 (ja) | 認知症診断支援情報の提供装置及びその方法 | |
KR101948701B1 (ko) | 피검체의 뇌 구조를 기술하는 잠재 변수에 기반하여 상기 피검체의 뇌질환을 판정하는 방법 및 이를 이용한 장치 | |
KR102708335B1 (ko) | 치매 진단을 위한 의료 영상 회귀분석 장치 및 방법 | |
HK40045264A (en) | Diagnosis assistance system and method | |
JP6998760B2 (ja) | 脳画像解析装置、脳画像解析方法、及び脳画像解析プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19859252 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020532078 Country of ref document: JP Kind code of ref document: A |
|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019859252 Country of ref document: EP Effective date: 20210412 |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 2019859252 Country of ref document: EP |