US11494586B2 - Tomographic image machine learning device and method - Google Patents
Tomographic image machine learning device and method
- Publication number
- US11494586B2 (application US17/000,372; US202017000372A)
- Authority
- US
- United States
- Prior art keywords
- learning
- region
- data
- learning data
- divided
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G06K9/6256—
-
- G06T12/30—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/2163—Partitioning the feature space
-
- G06K9/6261—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
-
- G06T12/00—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7747—Organisation of the process, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5217—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2211/00—Image generation
- G06T2211/40—Computed tomography
- G06T2211/441—AI-based methods, deep learning or artificial neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/031—Recognition of patterns in medical or anatomical images of internal organs
Definitions
- the present invention relates to a machine learning device and a machine learning method, and particularly to a machine learning device and method for constructing a machine-learned model that performs classification (segmentation) of structures in an image.
- volume data: three-dimensional medical image data
- CT: computed tomography
- in JP2017-202321A, a supervised machine learning algorithm extracts anatomical feature points included in volume data, the extracted feature points are compared with a model indicating the three-dimensional positional relationship of anatomical feature points in the body so as to optimize the extracted feature points, and a body part is detected. Further, in a case where a plurality of organs appear at one absolute position, positional information of a plurality of parts is assigned to one CT image. In this manner, for example, when a CT image is divided into series, the CT image can be divided into a series for each part on the basis of the positional information.
- a discriminator that simultaneously extracts a plurality of organs from volume data has been developed using machine learning or the like. That is, when volume data is input to the discriminator, data in which a label such as “stomach”, “lung”, “bronchi”, “liver”, or “hepatic portal vein” is added to each of the voxels constituting the volume data is output.
- for a discriminator that simultaneously labels two or more organs, a number of sets of original volume data and the voxels of each organ included in that volume data, manually labeled by a doctor or the like, are prepared as ground truth data, and a learned model is completed by performing machine learning on the data.
- the volume data for learning has a very large size, and thus, in some cases, it is divided before being input to the machine learning process to fit within the memory size used in the machine learning.
- for example, the data is divided in the axial direction by a certain number of slices within the memory size limit, and the divided data is used as the data for learning.
- however, in a case where only a part of an organ is included in a piece of divided data, that data may adversely affect machine learning.
- the invention is made in view of such problems, and an object of the invention is to provide a machine learning device and method which can prepare divided data suitable for machine learning from volume data for learning.
- a machine learning device comprises a learning data input unit that receives an input of learning data including volume data of a tomographic image and labeling of a region in the volume data; a division unit that divides the learning data of which the input is received by the learning data input unit to create divided learning data; a learning exclusion target region discrimination unit that discriminates a learning exclusion target region which is a region to be excluded from a target of learning, from the divided learning data created by the division unit and the learning data; and a machine learning unit that performs machine learning of labeling of a region other than the learning exclusion target region discriminated by the learning exclusion target region discrimination unit, on the basis of the divided learning data created by the division unit.
- the learning exclusion target region discrimination unit compares a volume of the region labeled in the divided learning data created by the division unit and a volume of the region labeled in the learning data and discriminates the learning exclusion target region according to whether the volume is equal to or less than a threshold value.
- the machine learning device further comprises a detection accuracy calculation unit that calculates detection accuracy of a region other than the learning exclusion target region discriminated by the learning exclusion target region discrimination unit, in which the machine learning unit performs machine learning of the labeling of the region other than the learning exclusion target region on the basis of the divided learning data created by the division unit and the detection accuracy calculated by the detection accuracy calculation unit.
- the detection accuracy calculation unit calculates the detection accuracy on the basis of an average of Intersection over Union (IoU) between a predicted label and a ground truth label of each region.
- IoU: Intersection over Union
- the division unit re-divides the learning data such that the entire learning exclusion target region is included.
- the division unit creates pieces of divided learning data having an overlapping portion.
- the tomographic image is a three-dimensional medical tomographic image, and the region includes an organ.
- a machine learning method which is executed by a computer, comprises a step of receiving an input of learning data including volume data of a tomographic image and labeling of a region in the volume data; a step of dividing the learning data to create divided learning data; a step of discriminating a learning exclusion target region which is a region to be excluded from a target of learning, from the divided learning data and the learning data; and a step of performing machine learning of labeling of a region other than the learning exclusion target region on the basis of the divided learning data.
- a machine learning program for causing a computer to execute the machine learning method and a machine-learned model obtained by machine learning by the machine learning program are also included in the invention.
- since the learning exclusion target region is discriminated from the divided learning data and machine learning of the labeling of regions other than the learning exclusion target region is performed, it is possible to perform machine learning with high accuracy even from divided data in which only a part of an organ is included.
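- as an illustration only, the following NumPy sketch walks through the claimed steps on synthetic data: receive learning data, divide it axially, discriminate labels to exclude from learning, and (in a real system) train only on the remaining labels. The helper names (divide_axial, find_excluded_labels), the (z, y, x) axis order, and the 0.5 threshold are assumptions made for this sketch, not details taken from the patent.

```python
# Minimal sketch of the claimed pipeline on synthetic data (assumptions noted above).
import numpy as np

def divide_axial(volume, mask, slices_per_block):
    """Step 2: split the volume and its ground truth mask into axial blocks."""
    blocks = []
    for z in range(0, volume.shape[0], slices_per_block):
        blocks.append((volume[z:z + slices_per_block], mask[z:z + slices_per_block]))
    return blocks

def find_excluded_labels(block_mask, full_mask, threshold=0.5):
    """Step 3: a label whose volume in this block is only a small fraction of the
    label's total volume is excluded from learning for this block."""
    excluded = set()
    for label in np.unique(full_mask):
        if label == 0:  # background
            continue
        full_vol = np.count_nonzero(full_mask == label)
        block_vol = np.count_nonzero(block_mask == label)
        if 0 < block_vol and block_vol / full_vol <= threshold:
            excluded.add(int(label))
    return excluded

# Step 1: receive learning data (synthetic 60-slice volume, one labeled "organ").
volume = np.random.rand(60, 64, 64).astype(np.float32)
mask = np.zeros((60, 64, 64), dtype=np.int32)
mask[20:40, 16:48, 16:48] = 1  # label 1 spans slices 20..39

# Steps 2-4: divide, discriminate exclusion targets, then learn only the rest.
for block_volume, block_mask in divide_axial(volume, mask, slices_per_block=25):
    excluded = find_excluded_labels(block_mask, mask)
    print("block", block_volume.shape, "labels excluded from learning:", excluded)
    # Step 4 (not shown): backpropagate labeling only for labels not in `excluded`.
```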
- FIG. 1 is a schematic configuration diagram of a machine learning device.
- FIG. 2 is a conceptual explanatory diagram of divided learning data.
- FIG. 3 is a conceptual explanatory diagram of divided ground truth data in which a labeling region of an organ is cut.
- FIG. 4 is a conceptual explanatory diagram of backpropagation performed for each piece of divided learning data Dj.
- FIG. 5 is a flowchart of a machine learning process.
- FIG. 6 is a conceptual explanatory diagram of re-division of learning data.
- FIG. 1 is a schematic configuration diagram of a machine learning device 1 according to a preferred embodiment of the invention.
- the machine learning device 1 comprises an original learning data input unit 11, an original learning data division unit 12, a learning exclusion target discrimination unit 13, a divided learning data output unit 14, and a machine learning unit 15.
- the machine learning device 1 is constituted by a computer comprising a processor such as a graphics processing unit (GPU), and each unit described above is realized by a program executed by a processor.
- the machine learning device 1 may or may not include the neural network 16.
- the original learning data input unit 11 receives an input of sets (original learning data) of volume data V consisting of a number of axial tomographic images (multi-slice images) and a ground truth mask G in which each pixel in an image is classified into the type (class) of an anatomical structure by a doctor or the like manually assigning (labeling) a ground truth label such as “lung”, “bronchi”, “blood vessel”, “air filling pattern”, and “others (background)” to each voxel included in the volume data.
- the original learning data division unit 12 divides (crops) the original learning data, of which the input is received by the original learning data input unit 11, in the axial direction by a predetermined unit to create N pieces of divided learning data D1, D2, D3, . . . , and DN consisting of divided volume data V1, V2, V3, . . . , and VN and divided ground truth masks G1, G2, G3, . . . , and GN (refer to FIG. 2).
- the unit for division of the divided learning data D1, D2, D3, and so on depends on hardware limits such as the computing devices or the memory of the neural network 16. That is, the unit for division depends on the amount of data that the neural network 16 can accept at one time.
- two different pieces of divided learning data may include an overlapping portion.
- the original learning data may be divided not only in the axial direction but also in a sagittal direction or a coronal direction.
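- as a hedged illustration of this division step, the sketch below crops a (z, y, x) volume and its ground truth mask into axial blocks that may share a few overlapping slices; the function name, the axis ordering, and the block/overlap sizes are assumptions, and the same idea applies to sagittal or coronal cropping by slicing a different axis.

```python
# Overlapping axial division of volume data and its ground truth mask (sketch).
import numpy as np

def divide_with_overlap(volume, mask, slices_per_block, overlap=0):
    """Crop along the axial (z) axis; neighbouring blocks share `overlap` slices."""
    step = slices_per_block - overlap
    blocks, z = [], 0
    while z < volume.shape[0]:
        blocks.append((volume[z:z + slices_per_block], mask[z:z + slices_per_block]))
        if z + slices_per_block >= volume.shape[0]:
            break
        z += step
    return blocks

volume = np.zeros((60, 64, 64), dtype=np.float32)
mask = np.zeros((60, 64, 64), dtype=np.int32)
blocks = divide_with_overlap(volume, mask, slices_per_block=25, overlap=5)
print([v.shape[0] for v, _ in blocks])  # [25, 25, 20]
```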
- here, O(j,i) denotes the region of the organ Oi that is included in the divided learning data Dj; O(j,i) is assigned the same organ label as Oi. However, O(j,i) and Oi do not exactly match, depending on the position of the division.
- for example, an organ O(1,1) with a label of “liver” in the divided ground truth mask G1 has a shape in which a part of the organ O1 with a label of “liver” in the ground truth mask G is cut off.
- in a case where A(k,i), the ratio of the volume of the portion of the organ Oi included in the divided learning data Dk to the volume of the organ Oi in the original learning data, is equal to or less than a threshold value Th, the learning exclusion target discrimination unit 13 discriminates the divided learning data Dk having the subscript k with A(k,i) ≤ Th as a learning exclusion target for the organ Oi.
- the organ Oi with A(k,i) ≤ Th in the divided learning data Dk is expressed as O(k,i).
- an area ratio of the organ Oi included in the divided learning data Dj may be calculated from the divided learning data Dj in the sagittal direction or coronal direction, and whether only a part of the organ Oi is included in the divided learning data may be discriminated on the basis of the area ratio.
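- a minimal sketch of this area-ratio variant, assuming (z, y, x)-ordered masks: the organ's silhouette area in a coronal projection of the divided block is compared with the same projection of the full ground truth mask, and the block is treated as a learning exclusion target for that organ when the ratio is at or below a threshold. The helper name and the axis choice are illustrative assumptions, not the patented algorithm.

```python
# Area-ratio check on a coronal projection (sketch; see assumptions above).
import numpy as np

def coronal_area_ratio(block_mask, full_mask, label):
    # Project along the anterior-posterior axis (axis=1) to get a z-x silhouette.
    block_area = np.count_nonzero(np.any(block_mask == label, axis=1))
    full_area = np.count_nonzero(np.any(full_mask == label, axis=1))
    return block_area / full_area if full_area else 0.0

full = np.zeros((60, 64, 64), dtype=np.int32)
full[20:40, 16:48, 16:48] = 1          # organ label 1 spans slices 20..39
block = full[0:25]                     # a divided block that cuts the organ
print(coronal_area_ratio(block, full, label=1))  # 0.25 -> exclusion target if Th >= 0.25
```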
- the divided learning data output unit 14 outputs the divided learning data Dj, subjected to the discrimination of the learning exclusion target discrimination unit 13, to the machine learning unit 15.
- the machine learning unit 15 causes the neural network 16 to perform machine learning on the basis of the divided learning data Dj output from the divided learning data output unit 14.
- the neural network 16 is a multi-layer classifier configured by a convolutional neural network (CNN) or the like.
- CNN: convolutional neural network
- the machine learning of the neural network 16 by the machine learning unit 15 uses backpropagation (error propagation method).
- backpropagation is a method of comparing the teacher data for the input data with the actual output data obtained from the neural network 16 and changing each connection weight from the output layer side to the input layer side on the basis of the error.
- the neural network 16 classifies structures in the divided volume data Vj by assigning a label such as “lung”, “bronchi”, “blood vessel”, “air filling pattern”, and “others (background)” to each voxel (pixel in the case of two-dimensional data) of the divided volume data Vj included in the divided learning data Dj, according to a learned model obtained by some machine learning. In this manner, a predicted mask Pj, which is a set of voxels subjected to the labeling of each organ, is obtained for each piece of divided learning data Dj.
- the machine learning unit 15 compares the predicted mask Pj with the divided ground truth mask Gj as the teacher data and performs backpropagation of the neural network 16 on the basis of the error. That is, the backpropagation of the neural network 16 is performed for each piece of divided learning data Dj.
- for the organ O(k,i) discriminated as the learning exclusion target, however, the machine learning unit 15 does not perform backpropagation of the labeling of the organ O(k,i). The details will be described below.
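- the patent does not name a software framework; as an illustration only, the idea of skipping backpropagation for an excluded organ can be sketched in PyTorch by remapping the excluded organ's voxels to the loss function's ignore index so that no gradient flows from them. The tiny network and the cross-entropy loss below are stand-ins (the IoU-based loss is described later), and all names are assumptions rather than the patented implementation of the neural network 16.

```python
# Per-block training step that skips backpropagation for excluded organs (sketch).
import torch
import torch.nn as nn

IGNORE = -100        # default ignore_index of CrossEntropyLoss
NUM_CLASSES = 6      # e.g. lung, bronchi, blood vessel, air filling pattern, liver, background

model = nn.Sequential(             # tiny stand-in for the neural network 16
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, NUM_CLASSES, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss(ignore_index=IGNORE)

def train_on_block(block_volume, block_mask, excluded_labels):
    """One backpropagation step on one piece of divided learning data Dj."""
    target = block_mask.clone()
    for label in excluded_labels:          # organs O(k, i): their voxels give no gradient
        target[target == label] = IGNORE
    logits = model(block_volume.unsqueeze(0).unsqueeze(0))   # (1, C, Z, Y, X)
    loss = criterion(logits, target.unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: a 25-slice block in which organ label 5 was cut by the division.
block_volume = torch.rand(25, 64, 64)
block_mask = torch.randint(0, NUM_CLASSES, (25, 64, 64))
print(train_on_block(block_volume, block_mask, excluded_labels={5}))
```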
- FIG. 5 is a flowchart of a machine learning process using the divided learning data Dj.
- a program for causing a processor of the machine learning device 1 to execute the machine learning process is stored in a computer-readable tangible storage medium such as a random access memory (RAM) of the machine learning device 1 .
- the medium in which the program is stored may be a non-transitory computer-readable recording medium such as a hard disk, a compact disk (CD), a Digital Versatile Disk (DVD), and various semiconductor memories.
- the original learning data division unit 12 creates N pieces of divided learning data D1, D2, . . . , and DN from the original learning data that is received by the original learning data input unit 11.
- N is an integer of 2 or more.
- the unit for division of the learning data depends on the memory capacity and the processing performance of the GPU, and any amount up to the maximum amount of divided learning data that can be processed at one time is set as the unit for division.
- the neural network 16 receives the divided volume data Vj of the divided learning data Dj as an input and creates the predicted mask Pj for each of the n(j) organs O(j,i).
- here, j = 1, 2, . . . , and N.
- the detection accuracy acc(j,i) is calculated for each of n(j) types of organs O(j,i) except for the organ O(k,i) as the learning exclusion target, and the average value thereof is regarded as the loss function Loss(j) corresponding to the divided learning data Dj.
- acc(j,i) is the Intersection over Union (IoU) of each organ O(j,i) in the predicted mask Pj. That is, the IoU is a value obtained by dividing the number of voxels in the intersection of a set Pr(i) of the organ O(j,i) in the predicted mask Pj and a set Ht of the organ O(j,i) in the divided ground truth mask Gj, by the number of voxels in the union of the set Pr(i) and the set Ht.
- acc(i) = f1(Pr(i) ∩ Ht) / f2(Pr(i) ∪ Ht)  (1)
- a value obtained by multiplying the IoU by a constant (such as 100), or a Dice coefficient, may be used as acc(i).
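- a small NumPy sketch of equation (1) and of the loss described above: per-organ IoU (or, optionally, Dice) between a predicted mask and a divided ground truth mask, averaged over all organs except the learning exclusion targets to give Loss(j) for the divided learning data Dj. The function names are illustrative assumptions.

```python
# IoU / Dice per organ and the per-block average excluding exclusion targets (sketch).
import numpy as np

def iou(pred_mask, gt_mask, label):
    pred, gt = pred_mask == label, gt_mask == label
    union = np.count_nonzero(pred | gt)                                # f2(Pr(i) ∪ Ht)
    return np.count_nonzero(pred & gt) / union if union else 0.0      # f1(Pr(i) ∩ Ht) / f2(...)

def dice(pred_mask, gt_mask, label):
    pred, gt = pred_mask == label, gt_mask == label
    denom = np.count_nonzero(pred) + np.count_nonzero(gt)
    return 2 * np.count_nonzero(pred & gt) / denom if denom else 0.0

def block_loss(pred_mask, gt_mask, labels, excluded):
    """Average acc(j, i) over organs not excluded from learning in this block."""
    accs = [iou(pred_mask, gt_mask, l) for l in labels if l not in excluded]
    return float(np.mean(accs)) if accs else 0.0

pred = np.zeros((4, 4), dtype=int); pred[:, :2] = 1   # predicted label 1: left half
gt = np.zeros((4, 4), dtype=int); gt[:2, :] = 1       # ground truth label 1: top half
print(iou(pred, gt, 1), dice(pred, gt, 1))            # 0.333..., 0.5
```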
- the machine learning unit 15 changes each connection weight of the neural network 16 from the output layer side to the input layer side according to the loss function Loss(j).
- the original learning data division unit 12 re-creates the divided learning data Dk.
- the original learning data division unit 12 re-divides the original learning data such that the entire organ O(k,i) is included in the divided learning data Dk.
- the unit for re-division is also constrained by hardware resources.
- the process returns to S2, and for the divided learning data Dk, the predicted mask Pk of each organ, including the organ O(k,i), is created.
- FIG. 6 is an example of re-creation of the divided learning data Dk.
- the divided learning data D2 created once is shifted toward the head along the axial (body axis) direction.
- machine learning with high accuracy can be performed on the basis of the shifted divided learning data D2.
- S2 to S7 described above can be repeated any number of times. Accordingly, division is performed again such that any organ Oi is included in some piece of divided learning data, and further, backpropagation based on the loss function may be performed each time division is performed.
- in this manner, an organ cut by the division of the learning data can be subjected to the calculation of detection accuracy and to backpropagation through re-division of the learning data.
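- a hedged sketch of this re-division: shift the axial crop window so that the previously cut organ fits entirely inside one block, while still respecting the same slices-per-block hardware limit. The helper name, the centring strategy, and the (z, y, x) axis order are assumptions for illustration.

```python
# Re-division so that a previously cut organ is fully contained in one block (sketch).
import numpy as np

def redivide_for_organ(volume, mask, organ, slices_per_block):
    z_indices = np.where(np.any(mask == organ, axis=(1, 2)))[0]
    if z_indices.size == 0:
        raise ValueError("organ not present in the ground truth mask")
    z_min, z_max = int(z_indices.min()), int(z_indices.max())
    extent = z_max - z_min + 1
    if extent > slices_per_block:
        raise ValueError("organ does not fit in one block under the memory limit")
    # Centre the crop window on the organ, clamped to the volume bounds.
    start = max(0, min(z_min - (slices_per_block - extent) // 2,
                       volume.shape[0] - slices_per_block))
    return volume[start:start + slices_per_block], mask[start:start + slices_per_block]

volume = np.random.rand(60, 64, 64).astype(np.float32)
mask = np.zeros((60, 64, 64), dtype=np.int32)
mask[20:40, 16:48, 16:48] = 1          # organ label 1, cut by the original division
v_blk, m_blk = redivide_for_organ(volume, mask, organ=1, slices_per_block=25)
print(np.count_nonzero(m_blk == 1) == np.count_nonzero(mask == 1))  # True: organ whole
```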
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Public Health (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Pathology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biodiversity & Conservation Biology (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Description
acc(i)=f1(Pr(i)∩Ht)/f2(Pr(i)∪Ht) (1)
Pr(i)∩Ht (2)
Pr(i)∪Ht (3)
- 1: machine learning device
- 11: original learning data input unit
- 12: original learning data division unit
- 13: learning exclusion target discrimination unit
- 14: divided learning data output unit
- 15: machine learning unit
- 16: neural network
Claims (10)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JPJP2018-033451 | 2018-02-27 | ||
| JP2018-033451 | 2018-02-27 | ||
| JP2018033451 | 2018-02-27 | ||
| PCT/JP2019/007048 WO2019167882A1 (en) | 2018-02-27 | 2019-02-25 | Machine learning device and method |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2019/007048 Continuation WO2019167882A1 (en) | 2018-02-27 | 2019-02-25 | Machine learning device and method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200387751A1 (en) | 2020-12-10 |
| US11494586B2 true US11494586B2 (en) | 2022-11-08 |
Family
ID=67805382
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/000,372 Active 2039-05-02 US11494586B2 (en) | 2018-02-27 | 2020-08-24 | Tomographic image machine learning device and method |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11494586B2 (en) |
| JP (1) | JP6952185B2 (en) |
| WO (1) | WO2019167882A1 (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPWO2019176806A1 (en) * | 2018-03-16 | 2021-04-08 | 富士フイルム株式会社 | Machine learning equipment and methods |
| JP7252158B2 (en) * | 2020-03-13 | 2023-04-04 | 富士フイルム株式会社 | LEARNING METHOD, LEARNING DEVICE, IMAGE ANALYSIS DEVICE, AND PROGRAM |
| JP7723479B2 (en) * | 2021-02-01 | 2025-08-14 | 株式会社デンソーテン | Training data set generation device and training data set generation method |
| WO2024024055A1 (en) * | 2022-07-28 | 2024-02-01 | 富士通株式会社 | Information processing method, device, and program |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004097535A (en) | 2002-09-10 | 2004-04-02 | Toshiba Corp | Method for segmenting medical three-dimensional image data |
| JP2006325629A (en) | 2005-05-23 | 2006-12-07 | Ge Medical Systems Global Technology Co Llc | Three-dimensional interest region setting method, image acquisition apparatus, and program |
| US20100128946A1 (en) | 2008-11-22 | 2010-05-27 | General Electric Company | Systems, apparatus and processes for automated medical image segmentation using a statistical model |
| JP2013506478A (en) | 2009-09-30 | 2013-02-28 | インペリアル イノベ−ションズ リミテッド | Medical image processing method and apparatus |
| US20140086465A1 (en) | 2012-09-27 | 2014-03-27 | Siemens Product Lifecycle Management Software Inc. | Multi-bone segmentation for 3d computed tomography |
| US20160110632A1 (en) * | 2014-10-20 | 2016-04-21 | Siemens Aktiengesellschaft | Voxel-level machine learning with or without cloud-based support in medical imaging |
| JP2017202321A (en) | 2016-05-09 | 2017-11-16 | 東芝メディカルシステムズ株式会社 | Medical diagnostic imaging equipment |
| US20180025255A1 (en) | 2016-07-21 | 2018-01-25 | Toshiba Medical Systems Corporation | Classification method and apparatus |
- 2019-02-25 JP JP2020503491A patent/JP6952185B2/en active Active
- 2019-02-25 WO PCT/JP2019/007048 patent/WO2019167882A1/en not_active Ceased
- 2020-08-24 US US17/000,372 patent/US11494586B2/en active Active
Patent Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004097535A (en) | 2002-09-10 | 2004-04-02 | Toshiba Corp | Method for segmenting medical three-dimensional image data |
| JP2006325629A (en) | 2005-05-23 | 2006-12-07 | Ge Medical Systems Global Technology Co Llc | Three-dimensional interest region setting method, image acquisition apparatus, and program |
| US20100128946A1 (en) | 2008-11-22 | 2010-05-27 | General Electric Company | Systems, apparatus and processes for automated medical image segmentation using a statistical model |
| JP2010119850A (en) | 2008-11-22 | 2010-06-03 | General Electric Co <Ge> | System, apparatus, and process for automated medical image segmentation using statistical model |
| JP2013506478A (en) | 2009-09-30 | 2013-02-28 | インペリアル イノベ−ションズ リミテッド | Medical image processing method and apparatus |
| US9251596B2 (en) | 2009-09-30 | 2016-02-02 | Imperial Innovations Limited | Method and apparatus for processing medical images |
| JP2015530193A (en) | 2012-09-27 | 2015-10-15 | シーメンス プロダクト ライフサイクル マネージメント ソフトウェアー インコーポレイテッドSiemens Product Lifecycle Management Software Inc. | Multiple bone segmentation for 3D computed tomography |
| US20140086465A1 (en) | 2012-09-27 | 2014-03-27 | Siemens Product Lifecycle Management Software Inc. | Multi-bone segmentation for 3d computed tomography |
| US20160110632A1 (en) * | 2014-10-20 | 2016-04-21 | Siemens Aktiengesellschaft | Voxel-level machine learning with or without cloud-based support in medical imaging |
| JP2017202321A (en) | 2016-05-09 | 2017-11-16 | 東芝メディカルシステムズ株式会社 | Medical diagnostic imaging equipment |
| US20180184997A1 (en) | 2016-05-09 | 2018-07-05 | Canon Medical Systems Corporation | Medical image diagnosis apparatus |
| US20180025255A1 (en) | 2016-07-21 | 2018-01-25 | Toshiba Medical Systems Corporation | Classification method and apparatus |
| JP2018011958A (en) | 2016-07-21 | 2018-01-25 | 東芝メディカルシステムズ株式会社 | Medical image processing apparatus and medical image processing program |
Non-Patent Citations (4)
| Title |
|---|
| "International Search Report (Form PCT/ISA/210)" of PCT/JP2019/007048, dated May 21, 2019, with English translation thereof, pp. 1-5. |
| "Office Action of Japan Counterpart Application" with English translation thereof, dated Apr. 6, 2021, p. 1-p. 4. |
| "Written Opinion of the International Searching Authority (Form PCT/ISA/237)" of PCT/JP2019/007048, dated May 21, 2019, with English translation thereof, pp. 1-7. |
| Bottger et al, Measuring the Accuracy of Object Detectors and Trackers (Year: 2017). * |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2019167882A1 (en) | 2021-03-04 |
| JP6952185B2 (en) | 2021-10-20 |
| WO2019167882A1 (en) | 2019-09-06 |
| US20200387751A1 (en) | 2020-12-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11494586B2 (en) | Tomographic image machine learning device and method | |
| CN111784700B (en) | Lung lobe segmentation, model training, model construction and segmentation method, system and device | |
| Shin et al. | Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning | |
| Shen et al. | An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy | |
| El-Regaily et al. | Survey of computer aided detection systems for lung cancer in computed tomography | |
| US12086992B2 (en) | Image processing apparatus, image processing system, image processing method, and storage medium for classifying a plurality of pixels in two-dimensional and three-dimensional image data | |
| Li et al. | Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images | |
| CN110517262B (en) | Target detection method, device, equipment and storage medium | |
| Alilou et al. | A comprehensive framework for automatic detection of pulmonary nodules in lung CT images | |
| US20110293157A1 (en) | Medical Image Segmentation | |
| EP3929936A1 (en) | Automatic detection of covid-19 in chest ct images | |
| Sangeetha et al. | Diagnosis of pneumonia using image recognition techniques | |
| US20250173874A1 (en) | Method for detecting white matter lesions based on medical image | |
| CN111798424A (en) | Medical image-based nodule detection method and device and electronic equipment | |
| US12471798B2 (en) | Sacroiliitis discrimination method using sacroiliac joint MR image | |
| Modak et al. | Gpd-nodule: A lightweight lung nodule detection and segmentation framework on computed tomography images using uniform superpixel generation | |
| Shi et al. | MAST-UNet: More adaptive semantic texture for segmenting pulmonary nodules | |
| GB2457022A (en) | Creating a fuzzy inference model for medical image analysis | |
| Fonseca et al. | Tuberculosis detection in chest radiography: A combined approach of local binary pattern features and monarch butterfly optimization algorithm | |
| Rasi et al. | YOLO based deep learning model for segmenting the color images | |
| Heeneman et al. | Lung nodule detection by using Deep Learning | |
| Li et al. | SIFT-GVF-based lung edge correction method for correcting the lung region in CT images | |
| Tasnádi | Active contour and deep learning methods for single-cell segmentation in microscopy images | |
| KR102680365B1 (en) | Bone metastasis detecting method using ct image and analysis apparatus | |
| Shaffie et al. | A Comprehensive Framework for Accurate Classification of Pulmonary Nodules |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | AS | Assignment | Owner name: FUJIFILM CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: KESHWANI, DEEPAK; REEL/FRAME: 053608/0764. Effective date: 20200601 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |