WO2020003990A1 - Medical image processing apparatus and method, machine learning system, program, and storage medium - Google Patents
- Publication number: WO2020003990A1 (application PCT/JP2019/022908)
- Authority: WIPO (PCT)
- Prior art keywords
- learning
- computer
- medical image
- information
- difference information
- Prior art date
Classifications
- G16H50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for mining of medical data, e.g. analysing previous cases of other patients
- A61B1/000096: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, using artificial intelligence
- A61B5/055: Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
- G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/217: Pattern recognition; validation; performance evaluation; active pattern learning techniques
- G06N20/00: Machine learning
- G06N20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06V10/774: Image or video recognition; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/776: Image or video recognition; validation; performance evaluation
- G16H30/40: ICT specially adapted for processing medical images, e.g. editing
- G16H50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- A61B6/03: Computed tomography [CT]
- G06N3/045: Neural network architectures; combinations of networks
- G06V2201/03: Recognition of patterns in medical or anatomical images
Definitions
- The present invention relates to a medical image processing apparatus and method, a machine learning system, and a program, and more particularly to a machine learning technique applied to computer-aided diagnosis (CAD).
- In medical practice, image-based diagnosis such as endoscopic diagnosis, ultrasonic diagnosis, X-ray image diagnosis, and CT (computed tomography) image diagnosis is very important.
- A CAD system, which analyzes medical images by computer to support diagnosis, relies on image recognition techniques based on machine learning.
- Patent Document 1 describes a medical image system including a classifier that performs class classification to determine the imaged body part of a medical image; the classifier is built using a machine learning technique.
- The medical image system of Patent Document 1 compares the performance of an existing classifier with that of one or more new classifiers, and switches to the classifier with the highest discrimination accuracy. Here, "classifier" is synonymous with "recognizer".
- Patent Document 2 describes a CAD system that automatically detects a problem site in a medical image and displays the detected site together with a mark.
- The CAD system of Patent Document 2 includes a learning engine and performs machine learning to improve the knowledge base, such as the classification model used in the process of detecting a problem site.
- Patent Document 3 describes a diagnosis support apparatus having a function of updating diagnostic knowledge using case data including medical images, and a function of identifying case data using the diagnostic knowledge.
- The "diagnosis support apparatus" of Patent Document 3 can be understood as a term corresponding to a CAD apparatus.
- Machine learning for images, such as deep learning, requires a large amount of high-quality learning data.
- One or more edge devices, each having an image storage unit that stores medical images newly produced in the hospital as data for use in learning, are installed in each hospital.
- A system is then constructed that collects the learning data distributed across the edge devices of a plurality of hospitals. That is, learning data is created from medical images newly acquired at each edge device, and the learning data created by each edge device is sent to a facility outside the hospital and aggregated.
- A CAD module may be a program module used in the recognition processing unit of a CAD system.
- The term "CAD module" may be replaced with terms such as "CAD device", "recognition model", or "recognizer".
- The present invention has been made in view of such circumstances, and its purpose is to provide a medical image processing apparatus and method, a machine learning system, and a program that can suppress the amount of communication and reduce the load of the re-learning or additional-learning processing performed by a machine learning device.
- A medical image processing device according to a first aspect includes: a learning device that performs additional learning of a first computer-aided diagnosis device based on an input medical image; an evaluation unit that compares a second computer-aided diagnosis device obtained by the additional learning with the first computer-aided diagnosis device and evaluates whether the learning difference information of the additional learning contributes to improving the performance of the first computer-aided diagnosis device; a communication determination unit that determines whether communication of the learning difference information is necessary based on the evaluation result; and a communication unit that outputs the learning difference information according to the determination result of the communication determination unit.
- According to this aspect, the learning device is mounted on the medical image processing apparatus, additional learning of the first computer-aided diagnosis apparatus is performed inside the apparatus using the input medical image, and the effect of the additional learning is then evaluated.
- This reduces the load of the learning data creation process in the external information processing device that receives and collects the learning difference information output from the medical image processing device of aspect 1, and also lightens the re-learning or additional-learning processing performed by that external device.
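The edge-side flow of aspect 1 (additional learning, evaluation against the existing CAD module, and a communication decision that transmits only useful difference information) can be sketched as follows. This is an illustrative sketch, not the disclosed implementation: the toy threshold model, the function names, and the midpoint learning rule are all assumptions made for the example.

```python
def evaluate(params, dataset):
    # Stand-in for CAD performance evaluation: the fraction of samples a
    # toy threshold "model" classifies correctly (i.e. accuracy).
    return sum((x > params["threshold"]) == y for x, y in dataset) / len(dataset)

def additional_learning(params, dataset):
    # Stand-in for additional learning: move the threshold to the midpoint
    # between the two classes seen in the new data.
    pos = [x for x, y in dataset if y]
    neg = [x for x, y in dataset if not y]
    return {"threshold": (min(pos) + max(neg)) / 2}

def edge_step(existing_params, dataset):
    # Aspect-1 flow: learn, evaluate old vs. new, and output the learning
    # difference information only when performance actually improves.
    new_params = additional_learning(existing_params, dataset)
    if evaluate(new_params, dataset) > evaluate(existing_params, dataset):
        return {k: new_params[k] - existing_params[k] for k in new_params}
    return None  # communication judged unnecessary

dataset = [(0.2, False), (0.4, False), (0.7, True), (0.9, True)]
diff = edge_step({"threshold": 0.1}, dataset)
```

When the existing model already classifies the new data perfectly, `edge_step` returns `None` and nothing is transmitted, which is the communication suppression the aspect describes.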
- "Additional learning" means learning that is additionally performed to update the performance of a computer-aided diagnosis device obtained by learning that has already been carried out.
- the “additional learning” may be performed by batch learning or online learning. Online learning is synonymous with sequential learning.
- the medical image processing device may be configured as a single device, or may be configured by combining a plurality of devices.
- the medical image processing apparatus can be realized using one or a plurality of computers.
- Apparatus includes the concepts of “system” and “module”.
- Data includes the concepts of "information” and "signal”.
- Medical images include endoscope images, CT images, X-ray images, ultrasonic diagnostic images, MRI (magnetic resonance imaging) images, PET (positron emission tomography) images, SPECT (single photon emission computed tomography) images, fundus images, and other various types of images.
- In a second aspect, the medical image processing apparatus further includes a feature amount extraction unit that outputs feature amount information of the medical image, and a recognition processing unit that performs recognition processing based on the feature amount information.
- The additional learning may be performed based on the feature amount information and the recognition result of the recognition processing unit.
- the term "recognition” includes concepts such as identification, discrimination, inference, estimation, detection, and region extraction.
- The "recognition processing unit" includes concepts such as a recognizer, a classifier, a discriminator, a detector, and a recognition model.
- the “recognition model” is a learned model in which a certain level of recognition performance has been obtained by machine learning.
- the recognition model may be understood as a program module that performs a recognition process.
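The division of labor in aspect 2 (a feature amount extraction unit feeding a recognition processing unit) can be illustrated with a minimal sketch: a tiny hand-written convolution stands in for the lower layers of a network, and global pooling plus a linear score stands in for the upper layers. The image, kernel, and function names are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def feature_extraction_unit(image, kernel):
    # Stand-in for the lower layers: a single valid 2-D convolution
    # (strictly, cross-correlation) producing one feature map.
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def recognition_processing_unit(feature_map, weight, bias):
    # Stand-in for the upper layers: global average pooling followed by a
    # linear score; a positive score is read as "finding present".
    return float(feature_map.mean()) * weight + bias > 0

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]])
kernel = np.array([[1, -1],
                   [1, -1]])  # responds to vertical edges
feature_map = feature_extraction_unit(image, kernel)
result = recognition_processing_unit(feature_map, -1.0, 0.0)
```

The feature map, not the raw image, is what downstream units (and the learning device) consume, which matters later when deciding what to communicate.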
- Aspect 3 may be a configuration in which the medical image processing apparatus according to aspect 1 or aspect 2 further includes a first information storage unit that stores parameter information of the first computer-aided diagnosis device.
- In a fourth aspect, the learning difference information may include parameter difference information indicating the difference between the parameter information of the first computer-aided diagnosis apparatus and the parameter information of the second computer-aided diagnosis apparatus.
- the communication amount can be further reduced.
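A minimal sketch of the parameter difference information of aspect 4, assuming for illustration that parameters are held as a flat name-to-value mapping: transmitting only the entries that changed, rather than the full parameter set or the medical image itself, is what keeps the communication amount small.

```python
def parameter_difference(old_params, new_params):
    # Keep only the entries that changed between the existing CAD
    # parameters and the additionally learned ones; this small payload is
    # what would be communicated instead of the full parameter set.
    return {k: new_params[k] - old_params[k]
            for k in old_params if new_params[k] != old_params[k]}

old = {"w1": 0.5, "w2": -0.2, "b": 0.0}   # existing CAD parameters
new = {"w1": 0.5, "w2": -0.1, "b": 0.05}  # after additional learning
diff = parameter_difference(old, new)      # {'w2': 0.1, 'b': 0.05}
```

In a real model the mapping would hold weight tensors rather than scalars, but the principle of sending deltas only is the same.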
- In a fifth aspect, the learning difference information may include the parameter information of the second computer-aided diagnosis apparatus.
- Aspect 6 may be configured such that in the medical image processing apparatus of Aspect 2, the learning difference information includes feature amount information.
- the communication amount can be suppressed as compared with the case where the medical image itself is communicated.
- the learning difference information may include a medical image provided for additional learning.
- An eighth aspect is the medical image processing apparatus according to any one of the first to seventh aspects, wherein the first computer-aided diagnosis apparatus is a first recognizer created by performing machine learning in advance using a first learning data set, and the second computer-aided diagnosis apparatus is a second recognizer in which the parameters of the first recognizer have been changed.
- A machine learning system according to a ninth aspect includes the medical image processing apparatus according to any one of aspects 1 to 8, and an information processing device that receives the learning difference information output from the communication unit and collects learning data including the learning difference information.
- connection here includes the concept of connection via a communication line.
- Connection includes the concept of both wired and wireless connections.
- the machine learning system may include a plurality of medical image processing devices and at least one information processing device. According to this aspect, it is possible to efficiently collect high-quality learning data from a plurality of medical image processing apparatuses.
- the information processing device may include a learning processing unit that performs a learning process of the first computer-aided diagnosis device using the received learning difference information.
- the information processing device functions as a machine learning device. According to the tenth aspect, the load of the process of the re-learning or the additional learning in the machine learning device is reduced.
- In an eleventh aspect, the information processing apparatus may further include a second information storage unit that stores the received learning difference information.
- According to the eleventh aspect, the collected learning difference information can be accumulated and used for the re-learning or additional learning performed by the machine learning device, reducing the load of that processing.
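On the receiving side, the information processing device of aspects 9 to 11 might archive each received learning difference and fold it into the master CAD parameters. The sketch below assumes, purely for illustration, that learning difference information arrives as name-to-delta mappings; none of these names come from the disclosure.

```python
def apply_learning_difference(master_params, diff):
    # Fold one received parameter difference into the master CAD
    # parameters held by the out-of-hospital information processing device.
    updated = dict(master_params)
    for name, delta in diff.items():
        updated[name] = updated[name] + delta
    return updated

master = {"w": 1.0, "b": 0.0}
received = [{"w": 0.1}, {"b": -0.05}, {"w": 0.2}]  # from several edge devices

storage = []  # stand-in for the second information storage unit
for diff in received:
    storage.append(diff)  # archive for later re-learning
    master = apply_learning_difference(master, diff)
```

Because each payload carries only deltas, the server never needs the original medical images, which is consistent with the communication-reduction goal.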
- A medical image processing method according to a twelfth aspect is a method performed by a medical image processing apparatus, including: a first step of acquiring a medical image; a step of performing additional learning of a first computer-aided diagnosis apparatus using a learning device based on the acquired medical image; and a step of comparing a second computer-aided diagnosis apparatus obtained by the additional learning with the first computer-aided diagnosis apparatus to evaluate whether the learning difference information of the additional learning contributes to improving performance.
- the same matters as those specified in the second to tenth aspects can be appropriately combined.
- The processing units and functional units that carry out the processes and operations specified in the apparatus configuration can be understood as elements of the corresponding steps (processes) of the method.
- the medical image processing method according to aspect 12 may be understood as a method of operating a medical image processing apparatus.
- A program according to a thirteenth aspect causes a computer to realize: a function of a learning device that performs additional learning of a first computer-aided diagnosis device based on an input medical image; a function of comparing a second computer-aided diagnosis device obtained by the additional learning with the first computer-aided diagnosis device to evaluate whether the learning difference information of the additional learning contributes to improving the performance of the first computer-aided diagnosis device; a function of determining whether communication of the learning difference information is necessary based on the obtained evaluation result; and a function of outputting the learning difference information from a communication unit according to the determination result.
- A medical image processing device according to a fourteenth aspect includes at least one processor, and the processor performs: processing of a learning device that performs additional learning of a first computer-aided diagnosis device based on an input medical image; evaluation processing that compares a second computer-aided diagnosis device obtained by the additional learning with the first computer-aided diagnosis device and evaluates whether the learning difference information of the additional learning contributes to improving the performance of the first computer-aided diagnosis device; communication determination processing that determines whether communication of the learning difference information is necessary based on the result of the evaluation processing; and communication processing that outputs the learning difference information according to the result of the communication determination processing.
- According to the present invention, the amount of communication can be reduced and high-quality learning data can be collected efficiently. Further, the load of the re-learning or additional-learning processing performed by the machine learning device using the learning data collected from the medical image processing device can be reduced.
- FIG. 1 is a block diagram schematically illustrating functions of a machine learning system including the medical image processing apparatus according to the first embodiment.
- FIG. 2 is a flowchart illustrating an operation example of the medical image processing apparatus.
- FIG. 3 is a flowchart illustrating an operation example of the information processing apparatus of the out-of-hospital system.
- FIG. 4 is a block diagram schematically illustrating functions of a machine learning system including a medical image processing device according to the second embodiment.
- FIG. 5 is a block diagram schematically showing functions of a machine learning system including a medical image processing device according to the third embodiment.
- FIG. 6 is a block diagram illustrating another example of the machine learning system.
- FIG. 7 is a block diagram illustrating an outline of a machine learning system configured by combining a plurality of in-hospital systems and one out-of-hospital system.
- FIG. 8 is a block diagram illustrating an example of a hardware configuration of a computer.
- FIG. 1 is a block diagram schematically illustrating an overall configuration of a machine learning system including a medical image processing apparatus according to the first embodiment.
- The machine learning system 10 performs: a process of collecting new learning data that contributes to improving the performance of an existing CAD module; a learning process of re-learning or additional learning of the existing CAD module using the collected learning data; and a process of creating a new CAD module with improved performance compared to the existing CAD module.
- CAD module performance means the accuracy of CAD module recognition or diagnosis.
- the performance of the CAD module may be referred to as “CAD performance”.
- the existing CAD module is an example of a “first computer-aided diagnostic device” and a “first recognizer”.
- the existing CAD module is a trained model that has acquired recognition performance by machine learning that has already been performed.
- the learned model that performs the recognition process is called a “recognition model”.
- the learning performed to obtain the initial recognition performance as an existing CAD module is referred to as “first learning”.
- the learning data set used for the first learning of the existing CAD module is referred to as a “first learning data set”.
- the first learning data set may be a learning data set prepared in advance.
- the machine learning system 10 includes an in-hospital system 12 and an out-of-hospital system 14.
- the in-hospital system 12 includes a medical image processing device 13 which is an edge device installed in a hospital.
- the “hospital” includes concepts of a hospital, a clinic, a medical examination center, and other similar medical institutions.
- the out-of-hospital system 14 includes an information processing device 15 installed in a facility outside the hospital.
- the medical image processing device 13 and the information processing device 15 are connected via a communication line. In FIG. 1, illustration of a communication line is omitted.
- the medical image processing apparatus 13 can be configured using one or a plurality of computers. The function of the medical image processing apparatus 13 will be briefly described.
- The medical image processing apparatus 13 acquires data of a medical image captured using a medical imaging apparatus (not shown), and performs additional learning using feature amount information extracted from the acquired medical image. Further, the medical image processing apparatus 13 evaluates the result of the additional learning and, based on the evaluation result, selectively communicates to the out-of-hospital system 14 only the learning information that truly contributes to improving CAD performance.
- The medical imaging apparatus may be, for example, an endoscope, an X-ray imaging apparatus, a CT imaging apparatus, an MRI (magnetic resonance imaging) apparatus, a nuclear medicine diagnostic apparatus, or a fundus camera, or a combination of these.
- In the following, an endoscopic image captured using an endoscope is described as an example.
- The medical image processing device 13 may be a processor device connected to an endoscope, an image capturing terminal that collects medical image data from the processor device, or an image information management device.
- The medical image processing apparatus 13 includes an evaluation target CAD information holding unit 20, a feature amount extraction unit 22, a recognition processing unit 24, a learning device 26, a CAD evaluation unit 28, a learning difference information creation unit 30, a processing unit 32, and a communication unit 34.
- the evaluation target CAD information holding unit 20 is a storage device that holds the evaluation target CAD information.
- the CAD information to be evaluated is information on a CAD module to be compared and evaluated with a CAD module on which additional learning has been performed using the learning device 26.
- the CAD information to be evaluated includes parameter information of the existing CAD module.
- the parameters of the processing algorithm in the CAD module are an example of parameter information.
- Parameter information that specifies the contents of the CAD module is called a CAD parameter.
- the CAD parameters include, for example, connection weights between nodes in a model of a neural network, node biases, and the like.
- The evaluation target CAD information holding unit 20 is configured using, for example, a hard disk device, an optical disk, a magneto-optical disk, a semiconductor memory, or an appropriate combination of these.
- the evaluation target CAD information holding unit 20 is an example of a “first information storage unit”.
- The feature amount extraction unit 22 extracts feature amounts by processing the input medical image.
- The feature amount extraction unit 22 may be, for example, the lower-layer portion of a convolutional neural network (CNN) trained by machine learning.
- The lower layers of a CNN can be used as the feature amount extraction unit 22 to extract image feature amounts.
- the feature amount extraction unit 22 outputs feature amount information extracted from the medical image.
- the feature amount information may be, for example, a feature map of a plurality of channels.
- the feature amount information extracted by the feature amount extraction unit 22 is sent to each of the recognition processing unit 24 and the learning device 26.
- the same CAD module as the existing CAD module is applied to the recognition processing unit 24.
- the recognition processing unit 24 may be, for example, a higher layer part of the CNN that performs a task of class classification. Alternatively, the recognition processing unit 24 may be a higher layer part of a hierarchical network that performs a segmentation task.
- The recognition processing unit 24 is not limited to a neural network model; it may be another model whose performance can be improved by learning, such as a support vector machine (SVM).
- the recognition processing unit 24 performs a recognition process such as a class classification task or a segmentation task based on the feature amount information extracted by the feature amount extraction unit 22, and outputs a recognition result.
- the recognition result of the recognition processing unit 24 is input to the learning device 26.
- The learning device 26 contains a learning model and performs additional learning of that model using, as learning data, the feature amount information obtained from the feature amount extraction unit 22 and the recognition result obtained from the recognition processing unit 24.
- The learning model may have the same configuration as the recognition processing unit 24, and may also serve as the recognition processing unit 24.
- the learning model may be the same model as the existing CAD module in an initial state. The parameters of the learning model of the learning device 26 are updated by performing additional learning.
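A minimal illustration of how additional learning updates the learning model's parameters: one online (sequential) gradient step per incoming sample on a toy linear model. The model, learning rate, and data stream are assumptions made for the example, not the disclosed method.

```python
def additional_learning_step(params, x, y, lr=0.1):
    # One online update of a toy linear model y_hat = w*x + b under
    # squared error; each newly acquired sample nudges the parameters.
    residual = params["w"] * x + params["b"] - y
    return {"w": params["w"] - lr * residual * x,
            "b": params["b"] - lr * residual}

params = {"w": 0.0, "b": 0.0}            # initial learning-model parameters
stream = [(1.0, 2.0), (2.0, 4.0)] * 500  # stream of new samples (y = 2x)
for x, y in stream:
    params = additional_learning_step(params, x, y)
```

After the stream is consumed, the updated parameters fit the new data closely; the gap between the initial and final parameters is exactly what the learning difference information would capture.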
- The CAD parameters newly created by the learning device 26 are called "new CAD parameters".
- The new CAD parameters created by the learning device 26 are sent to the CAD evaluation unit 28.
- the CAD module specified by the new CAD parameters is an example of a “second computer-aided diagnosis device” and a “second recognizer”.
- the CAD evaluation unit 28 compares the performance of the CAD module specified by the new CAD parameters learned by the learning device 26 with the performance of the existing CAD module specified by the CAD parameters stored in the evaluation target CAD information holding unit 20, thereby evaluating whether the information used for the additional learning in the learning device 26 contributes to improvement of the CAD performance.
- as the index for evaluating model performance in the CAD evaluation unit 28, one of the known indexes represented by "sensitivity", "specificity", and "accuracy", or a combination thereof, can be used.
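As a minimal illustrative sketch (not part of the disclosure), the three indexes named above can be computed from a binary confusion matrix as follows; the label convention (1 = lesion present) is an assumption for illustration.

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives/negatives for binary labels (1 = lesion)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def evaluate(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return {
        "sensitivity": tp / (tp + fn),        # true positive rate
        "specificity": tn / (tn + fp),        # true negative rate
        "accuracy": (tp + tn) / len(y_true),  # overall fraction correct
    }

metrics = evaluate([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

The CAD evaluation unit 28 could compare such index values of the new CAD module against those of the existing CAD module on a common evaluation set.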
- the evaluation result of the CAD evaluation unit 28 is sent to the learning difference information creation unit 30.
- the CAD evaluation unit 28 is an example of an “evaluation unit”.
- the learning difference information creation unit 30 determines, according to the evaluation result output from the CAD evaluation unit 28, whether or not to transmit the learning difference information to the out-of-hospital system 14, that is, determines whether communication is necessary, and, when it determines that transmission is to be performed, creates learning difference information for communication.
- the learning difference information creation unit 30 is an example of a "communication determination unit" that determines whether communication is necessary. Note that the CAD evaluation unit 28 may include the communication determination function.
- learning difference information means information capable of reproducing the “difference” between the CAD module before learning and the CAD module after learning. That is, the learning difference information is information that causes a “difference” between the existing CAD module before learning and the new CAD module after learning.
- [Mode 1] the learning difference information may be parameter difference information indicating the change, that is, the difference between the existing CAD parameters before learning and the new CAD parameters after learning. Under the condition that the existing CAD parameters are known, the new CAD module after learning can be reproduced from the parameter difference information.
- [Mode 2] the learning difference information may be the new CAD parameters themselves after learning. If the new CAD parameters are available, the new CAD module after learning can be reproduced.
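A hedged sketch of [Mode 1], under the assumption (not specified by the disclosure) that CAD parameters can be treated as a flat name-to-value mapping: only the per-parameter change is transmitted, and the receiver reproduces the new parameters from the known existing ones.

```python
def make_parameter_diff(existing, new):
    """Learning difference information of [Mode 1]: the change per parameter."""
    return {name: new[name] - existing[name] for name in existing}

def reproduce_new_parameters(existing, diff):
    """Receiver side: existing parameters + diff -> new CAD parameters."""
    return {name: existing[name] + diff[name] for name in existing}

# Hypothetical parameter values for illustration only.
existing = {"w0": 0.5, "w1": -1.25, "bias": 0.0}
new      = {"w0": 0.75, "w1": -1.0, "bias": 0.125}

diff = make_parameter_diff(existing, new)          # only the change is sent
restored = reproduce_new_parameters(existing, diff)
```

Sending `diff` instead of the full parameter set (or raw images) is what keeps the communication amount of [Mode 1] the smallest.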
- [Mode 3] the learning difference information may be the feature amount information used as the input in the additional learning in the learning device 26.
- by performing the same learning process as that of the learning device 26 using this feature amount information as learning data, a new CAD module after learning, similar to the learning result of the learning device 26, can be reproduced.
- [Mode 4] the learning difference information may be a set of the original medical images from which the feature amount information used as the input in the additional learning in the learning device 26 was extracted.
- of these, [Mode 1] has the smallest communication amount.
- the communication amount tends to increase in the order [Mode 1] < [Mode 2] < [Mode 3] < [Mode 4].
- in the first embodiment, the learning difference information of [Mode 1] is used.
- the learning difference information creation unit 30 shown in FIG. 1 creates, as the learning difference information, parameter difference information representing the difference between the existing CAD parameters read from the evaluation target CAD information holding unit 20 and the new CAD parameters obtained from the learning device 26.
- the learning difference information created by the learning difference information creation unit 30 is sent to the processing unit 32.
- the combination of the learning difference information creating unit 30 and the processing unit 32 plays a role as a learning data creating unit.
- the processing unit 32 is a communication signal processing unit that performs signal conversion suitable for the communication format of the communication unit 34.
- the processing unit 32 can acquire the evaluation target CAD information from the evaluation target CAD information holding unit 20.
- the learning difference information and the CAD information to be evaluated undergo necessary signal conversion in the processing unit 32 and are transmitted from the communication unit 34.
- the communication unit 34 is a communication interface, and is connected to a communication line (not shown).
- the information processing device 15 of the out-of-hospital system 14 includes a communication unit 42 and a learning processing unit 46.
- the communication unit 42 is a communication interface similar to the communication unit 34, and is connected to a communication line (not shown).
- the learning processing unit 46 performs machine learning using learning data including learning difference information collected from the medical image processing apparatus 13.
- the learning processing unit 46 performs re-learning or additional learning processing on the existing CAD module, and creates a new CAD module with improved CAD performance.
- the new CAD module created by the learning processing unit 46 can be provided at an appropriate time as an updated CAD module that replaces the existing CAD module.
- the information processing device 15 functions as a learning data collection device that collects learning data.
- the information processing device 15 functions as a machine learning device that performs machine learning using the collected learning data. Note that the first learning for creating the existing CAD module may be performed using the learning processing unit 46, or may be performed using another machine learning device (not shown).
- FIG. 2 is a flowchart illustrating an operation example of the medical image processing apparatus 13. Each step shown in FIG. 2 is executed by a computer constituting the medical image processing apparatus 13 according to a program.
- in step S102, the medical image processing apparatus 13 acquires a medical image.
- in step S104, the medical image processing apparatus 13 extracts feature amounts from the acquired medical image to create feature amount information.
- the feature amount information output by the feature amount extraction unit 22 of the medical image processing apparatus 13 is used as an input for the additional learning.
- in step S106, the medical image processing apparatus 13 performs recognition processing using the feature amount information.
- the recognition processing unit 24 of the medical image processing apparatus 13 outputs a recognition result according to the input feature amount information.
- in step S108, the medical image processing apparatus 13 performs additional learning on the existing CAD module using the feature amount information and the recognition result. By performing the additional learning in the learning device 26 of the medical image processing apparatus 13, new CAD parameters are created.
- in step S110, the medical image processing apparatus 13 evaluates the CAD performance by comparing the new CAD module obtained by the additional learning with the existing CAD module.
- the CAD evaluation unit 28 of the medical image processing apparatus 13 evaluates whether the information used for the additional learning contributes to the improvement of the CAD performance.
- in step S112, the medical image processing apparatus 13 determines, based on the evaluation result of step S110, whether or not to transmit the learning difference information to the outside.
- if the result of the determination in step S112 is Yes, that is, if it is determined that transmission to the outside is to be performed, the process proceeds to step S114.
- in step S114, the medical image processing apparatus 13 creates the learning difference information to be output to the outside.
- in step S116, the medical image processing apparatus 13 transmits the created learning difference information.
- when the result of the determination in step S112 is No, that is, when it is determined that transmission to the outside is unnecessary, the medical image processing apparatus 13 skips the processing of steps S114 and S116, and the flowchart of FIG. 2 ends.
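The control flow of FIG. 2 can be sketched as follows. This is an illustrative assumption, not code from the disclosure: the callables in `ops` are stand-ins for the units of FIG. 1, and the key point is the gate at step S112 — learning difference information is created and transmitted only when the additional learning improved CAD performance.

```python
def run_edge_cycle(image, existing_cad, ops, threshold=0.0):
    """One pass of steps S102-S116 on the edge device."""
    features = ops["extract"](image)                        # S104
    result = ops["recognize"](existing_cad, features)       # S106
    new_cad = ops["learn"](existing_cad, features, result)  # S108
    gain = (ops["evaluate"](new_cad)
            - ops["evaluate"](existing_cad))                # S110
    if gain > threshold:                                    # S112: Yes
        diff = ops["make_diff"](existing_cad, new_cad)      # S114
        ops["transmit"](diff)                               # S116
        return True
    return False                                            # S112: No -> skip S114/S116

# Toy stand-ins for illustration: the "CAD module" is a dict of values.
sent = []
ops = {
    "extract": lambda img: sum(img),
    "recognize": lambda cad, f: f > cad["threshold"],
    "learn": lambda cad, f, r: {"threshold": cad["threshold"] * 0.5,
                                "score": cad["score"] + 0.125},
    "evaluate": lambda cad: cad["score"],
    "make_diff": lambda old, new: {k: new[k] - old[k] for k in old},
    "transmit": sent.append,
}
transmitted = run_edge_cycle([1, 2, 3], {"threshold": 2.0, "score": 0.75}, ops)
```

Because nothing is sent on the No branch, only information that actually helps the CAD module reaches the out-of-hospital system.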
- FIG. 3 is a flowchart showing an operation example of the information processing device 15 of the out-of-hospital system 14. Each step shown in FIG. 3 is executed, according to a program, by a computer constituting the information processing device 15.
- in step S202, the information processing device 15 receives the learning difference information via the communication unit 42.
- in step S204, the information processing device 15 stores the received learning difference information.
- the information processing device 15 includes a storage device, and the learning difference information received from the medical image processing device 13 is stored in the storage device.
- in step S206, the information processing device 15 performs a learning process using, as learning data, the learning difference information received from the medical image processing device 13.
- the learning processing unit 46 of the information processing device 15 performs re-learning or additional learning of the existing CAD module using the learning difference information. Note that the learning processing unit 46 may use the first learning data set used when creating the existing CAD module, in addition to the learning difference information.
- the learning process in step S206 may be batch learning or online learning. After the learning process in step S206, a new CAD module with improved CAD performance is created.
- in step S208, the information processing device 15 outputs the CAD information of the created new CAD module.
- the “output” of the CAD information referred to here includes, for example, a mode of transmitting data via the communication unit 42 or a mode of outputting data for storing the CAD information in an external storage device.
- the CAD information of the new CAD module created by the information processing device 15 may be sent to the medical image processing device 13 so that the existing CAD module is replaced with, and thereby updated to, the new CAD module.
- the CAD information of the new CAD module output from the information processing device 15 can be “evaluated CAD information” in the medical image processing device 13.
- CAD information of a new CAD module created by the information processing device 15 can be stored in an external storage device and distributed through the external storage device.
- after step S208, the flowchart of FIG. 3 ends.
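One possible way, stated here purely as an assumption since the disclosure does not specify the learning algorithm of step S206, is for the learning processing unit 46 to fold [Mode 1] parameter difference information collected from several edge devices into the existing CAD parameters by averaging the per-parameter changes:

```python
def apply_collected_diffs(existing, diffs):
    """Average the received per-parameter changes and add them to the
    existing CAD parameters (a hypothetical aggregation rule)."""
    updated = {}
    for name in existing:
        mean_change = sum(d[name] for d in diffs) / len(diffs)
        updated[name] = existing[name] + mean_change
    return updated

# Hypothetical values: diffs received from two edge devices.
existing = {"w": 1.0, "b": 0.0}
diffs = [{"w": 0.25, "b": 0.0}, {"w": 0.75, "b": 0.5}]
updated = apply_collected_diffs(existing, diffs)
# w: 1.0 + (0.25 + 0.75) / 2 = 1.5,  b: 0.0 + 0.25 = 0.25
```

Whatever aggregation is used, the resulting parameters would then be evaluated before being distributed back as updated CAD information in step S208.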
- the communication amount between the medical image processing apparatus 13 and the information processing apparatus 15 can be suppressed, and the load of the learning processing on the information processing apparatus 15 can be reduced.
- FIG. 4 is a block diagram schematically illustrating the overall configuration of a machine learning system including a medical image processing apparatus according to the second embodiment. In FIG. 4, elements that are the same as or similar to those shown in FIG. 1 are given the same reference numerals, and redundant description is omitted. The differences from FIG. 1 are described below.
- the machine learning system 60 shown in FIG. 4 includes a medical image processing device 63 instead of the medical image processing device 13 described in FIG.
- the medical image processing apparatus 63 according to the second embodiment uses the feature amount information described in [Mode 3] as learning difference information. That is, the medical image processing device 63 transmits the feature amount information as the learning difference information to the information processing device 15 of the out-of-hospital system 14.
- the medical image processing device 63 includes a processing unit 31 instead of the learning difference information creating unit 30 and the processing unit 32 in FIG.
- the processing unit 31 illustrated in FIG. 4 performs communication determination of the feature amount information based on the evaluation result of the CAD evaluation unit 28. As a result of the communication determination, when it is determined that the feature amount information is to be transmitted, the processing unit 31 sends the feature amount information output from the feature amount extraction unit 22 to the communication unit 34.
- the processing unit 31 is an example of a “communication determining unit”. Other configurations and operations are the same as those of the first embodiment.
- FIG. 5 is a block diagram schematically illustrating the overall configuration of a machine learning system including a medical image processing device according to the third embodiment. In FIG. 5, elements that are the same as or similar to those shown in FIG. 1 are denoted by the same reference numerals, and redundant description is omitted. The differences from FIG. 1 are described below.
- the machine learning system 70 shown in FIG. 5 includes a medical image processing device 73 and a plurality of devices 74A, 74B, and 74C instead of the medical image processing device 13 described in FIG. 1.
- the medical image processing device 73 is connected to the plurality of devices 74A, 74B, 74C.
- FIG. 5 shows three devices 74A, 74B, and 74C, but the number of devices is not limited.
- Each of the plurality of devices 74A, 74B, 74C may be, for example, an endoscope processor device.
- the medical image processing device 73 may be, for example, a data storage system or a data management system that collects data from a plurality of processor devices.
- the device 74A includes a feature amount extraction unit 22A and a recognition processing unit 24A.
- the device 74B includes a feature amount extraction unit 22B and a recognition processing unit 24B.
- the device 74C includes a feature amount extraction unit 22C and a recognition processing unit 24C.
- Each of the feature amount extraction units 22A, 22B, and 22C has the same configuration as the feature amount extraction unit 22 described with reference to FIG.
- Each of the recognition processing units 24A, 24B, and 24C has the same configuration as the recognition processing unit 24 described with reference to FIG.
- the medical image processing apparatus 73 includes the evaluation target CAD information holding unit 20, the learning device 26, the CAD evaluation unit 28, the learning difference information creation unit 30, the processing unit 32, and the communication unit 34.
- the learning device 26 acquires the feature amount information and the recognition result from each of the devices 74A, 74B, and 74C, and creates a new CAD parameter.
- the CAD evaluation unit 28 evaluates each of the plurality of new CAD parameter sets created using the information obtained from each of the plurality of devices 74A, 74B, and 74C, and selectively subjects those with a good evaluation to transmission. For example, among the plurality of new CAD parameter sets, only the one with the highest performance, or only the top two, may be selectively transmitted. Other configurations are the same as in the first embodiment.
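The selective transmission described above amounts to a top-k selection over evaluated candidates. A minimal sketch (device identifiers and scores are hypothetical; the disclosure does not fix the ranking criterion beyond "good evaluation"):

```python
def select_for_transmission(candidates, k=1):
    """candidates: list of (device_id, params, score); keep the k best by score."""
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    return ranked[:k]

# Hypothetical evaluation scores for the new CAD parameters from each device.
candidates = [
    ("74A", {"w": 0.1}, 0.91),
    ("74B", {"w": 0.2}, 0.87),
    ("74C", {"w": 0.3}, 0.93),
]
best = select_for_transmission(candidates, k=2)  # keeps 74C and 74A
```

Only the selected parameter sets would then be passed to the processing unit 32 and the communication unit 34, further reducing the communication amount.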
- <<Modification 1>> In each of the first to third embodiments, an example has been described in which the medical image processing apparatus, which is an edge device of the in-hospital system 12, and the information processing apparatus of the out-of-hospital system 14 are connected one-to-one; however, a plurality of medical image processing apparatuses (edge devices) may be connected to the information processing apparatus of the out-of-hospital system 14 in a many-to-one relationship.
- FIG. 6 shows another embodiment of the machine learning system.
- FIG. 6 is a block diagram illustrating an example in which the in-hospital system 12 includes a plurality of edge devices. In FIG. 6, elements that are the same as or similar to those in FIG. 1 are denoted by the same reference numerals, and duplicate description is omitted.
- the in-hospital system 12 includes a plurality of edge devices 16, and each edge device is connected to the out-of-hospital system 14 via a communication line 18.
- in FIG. 6, the edge devices 16 are denoted as "E1" and "E2", respectively.
- the configuration of each edge device 16 is the same as the configuration of the medical image processing apparatus 13 in FIG.
- the information processing device 15 of the out-of-hospital system 14 includes an information storage unit 44.
- the information storage unit 44 is configured using, for example, a hard disk device, an optical disk, a magneto-optical disk, or a semiconductor memory, or a storage device configured using an appropriate combination thereof.
- the information storage unit 44 is an example of a “second information storage unit”.
- the information storage unit 44 may hold the first learning data set used for the first learning.
- the information obtained via the communication unit 42 is stored in the information storage unit 44.
- the new CAD information created by the learning processing unit 46 is stored in the information storage unit 44.
- FIG. 7 is a block diagram illustrating an outline of a machine learning system configured by combining a plurality of in-hospital systems and one out-of-hospital system.
- in FIG. 7, elements that are the same as or similar to those in FIGS. 1 and 6 are denoted by the same reference numerals, and redundant description is omitted.
- each of the in-hospital systems 12 includes one or more edge devices.
- each of the plurality of in-hospital systems 12 is described as “IHS1,” “IHS2,”... “IHSn”.
- the edge devices 16 included in each in-hospital system 12 are denoted as "E11", "E12", "E21", "En1", and "Enm". Here, "n" is an index number specifying an in-hospital system 12, and "m" is an index number specifying an edge device 16 within one in-hospital system 12.
- high-quality learning data can be efficiently collected from many edge devices 16, and a large amount of learning data can be obtained.
- <<Hardware configuration of each processing unit and control unit>> The hardware structure of the processing units that execute the various processes described in each embodiment, such as the feature amount extraction unit 22, the recognition processing unit 24, the learning device 26, the CAD evaluation unit 28, the learning difference information creation unit 30, the processing units 31 and 32, the communication units 34 and 42, and the learning processing unit 46, is any of the following various processors.
- the various processors include a CPU (Central Processing Unit), which is a general-purpose processor that functions as various processing units by executing programs; a GPU (Graphics Processing Unit), which is a processor specialized in image processing; a programmable logic device (PLD), such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacturing; and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit).
- One processing unit may be configured by one of these various processors, or may be configured by two or more processors of the same type or different types.
- one processing unit may be configured by a plurality of FPGAs, a combination of a CPU and an FPGA, or a combination of a CPU and a GPU.
- a plurality of processing units may be configured by one processor.
- as an example of configuring a plurality of processing units with one processor, first, there is a form in which one processor is configured by a combination of one or more CPUs and software, as typified by a computer such as a client or a server, and this processor functions as the plurality of processing units.
- second, as typified by a system on chip (SoC), there is a form in which a processor that realizes the functions of an entire system including the plurality of processing units with a single integrated circuit (IC) chip is used.
- the various processing units are configured using one or more of the various processors described above as a hardware structure.
- more specifically, the hardware structure of these various processors is electric circuitry in which circuit elements such as semiconductor elements are combined.
- FIG. 8 is a block diagram illustrating an example of a hardware configuration of a computer that can be used as a device that realizes some or all of the functions of the medical image processing apparatus and the information processing apparatus.
- the computer includes various types of computers such as a desktop type, a notebook type, and a tablet type. Further, the computer may be a server computer or a microcomputer.
- the computer 500 includes a CPU 502, a memory 504, a GPU 506, a storage device 508, an input interface unit 510, a communication interface unit 512 for network connection, a display control unit 514, a peripheral device interface unit 516, and a bus 518.
- the notation “IF” indicates “interface”.
- the storage device 508 may be configured using, for example, a hard disk device.
- the storage device 508 stores various programs and data necessary for image processing such as learning processing and / or recognition processing.
- by loading a program stored in the storage device 508 into the memory 504 and executing it with the CPU 502, the computer functions as means for performing the various processes specified by the program.
- the input device 520 is connected to the input interface unit 510.
- the input device 520 may be, for example, an operation button, a keyboard, a mouse, a touch panel, a voice input device, or an appropriate combination thereof.
- the user can input various instructions by operating the input device 520.
- the display device 530 is connected to the display control unit 514.
- the display device 530 may be, for example, a liquid crystal display, an organic EL (organic electro-luminescence: OEL) display, a projector, or an appropriate combination of these.
- a program that causes a computer to implement at least one of the processing functions of the medical image processing apparatus described in each of the above embodiments and the learning data collection function and learning function of the information processing apparatus can be recorded on a computer-readable medium, that is, a tangible, non-transitory information storage medium such as an optical disk, a magnetic disk, or a semiconductor memory, and the program can be provided through this information storage medium.
- the program signal can be provided as a download service using an electric communication line such as the Internet.
- the medical image processing apparatus 13 may transmit both the parameter difference information and the feature amount information to the out-of-hospital system 14. Further, the medical image processing apparatus 13 may transmit both the new CAD parameters and the feature amount information to the out-of-hospital system 14.
Description
FIG. 1 is a block diagram schematically showing the overall configuration of a machine learning system including a medical image processing apparatus according to the first embodiment. The machine learning system 10 is an information processing system that collects new learning data that contributes to improving the performance of an existing CAD module, and that performs re-learning or additional learning on the existing CAD module using the collected learning data to create a new CAD module whose performance is improved over that of the existing CAD module.
FIG. 2 is a flowchart showing an operation example of the medical image processing apparatus 13. Each step shown in FIG. 2 is executed, according to a program, by a computer constituting the medical image processing apparatus 13.
FIG. 4 is a block diagram schematically showing the overall configuration of a machine learning system including a medical image processing apparatus according to the second embodiment. In FIG. 4, elements identical or similar to those shown in FIG. 1 are given the same reference numerals, and redundant description is omitted. Differences from FIG. 1 are described.
FIG. 5 is a block diagram schematically showing the overall configuration of a machine learning system including a medical image processing apparatus according to the third embodiment. In FIG. 5, elements identical or similar to those shown in FIG. 1 are given the same reference numerals, and redundant description is omitted. Differences from FIG. 1 are described.
In each of the first to third embodiments, an example has been described in which the medical image processing apparatus, which is an edge device of the in-hospital system 12, and the information processing apparatus of the out-of-hospital system 14 are connected one-to-one; however, a plurality of medical image processing apparatuses (edge devices) may be connected to the information processing apparatus of the out-of-hospital system 14 in a many-to-one relationship.
FIG. 7 is a block diagram showing an outline of a machine learning system configured by combining a plurality of in-hospital systems with one out-of-hospital system. In FIG. 7, elements identical or similar to those in FIGS. 1 and 6 are given the same reference numerals, and redundant description is omitted.
The hardware structure of the processing units that execute the various processes described in each embodiment, such as the feature amount extraction unit 22, the recognition processing unit 24, the learning device 26, the CAD evaluation unit 28, the learning difference information creation unit 30, the processing units 31 and 32, the communication units 34 and 42, and the learning processing unit 46, is any of the following various processors.
FIG. 8 is a block diagram showing an example of the hardware configuration of a computer that can be used as a device realizing some or all of the functions of the medical image processing apparatus and the information processing apparatus. The computer may take various forms, such as a desktop, notebook, or tablet computer, and may also be a server computer or a microcomputer.
A program that causes a computer to realize at least one of the processing functions of the medical image processing apparatus described in each of the above embodiments and the learning data collection function and learning function of the information processing apparatus can be recorded on a computer-readable medium, that is, a tangible, non-transitory information storage medium such as an optical disk, a magnetic disk, or a semiconductor memory, and the program can be provided through this information storage medium. Instead of providing the program stored in such a tangible, non-transitory information storage medium, the program signal can also be provided as a download service using a telecommunication line such as the Internet.
The constituent elements described in the above embodiments and those described in the modifications can be used in appropriate combination, and some constituent elements can be replaced.
In the embodiments of the present invention described above, constituent requirements can be changed, added, or deleted as appropriate without departing from the spirit of the present invention. The present invention is not limited to the embodiments described above, and many modifications are possible by a person having ordinary knowledge in the relevant field within the technical idea of the present invention.
12 in-hospital system
13 medical image processing apparatus
14 out-of-hospital system
15 information processing apparatus
16 edge device
18 communication line
20 evaluation target CAD information holding unit
22, 22A, 22B, 22C feature amount extraction unit
24, 24A, 24B, 24C recognition processing unit
26 learning device
28 CAD evaluation unit
30 learning difference information creation unit
31, 32 processing unit
34, 42 communication unit
44 information storage unit
46 learning processing unit
60 machine learning system
63 medical image processing apparatus
70 machine learning system
73 medical image processing apparatus
74A, 74B, 74C device
500 computer
502 CPU
504 memory
506 GPU
508 storage device
510 input interface unit
512 communication interface unit
514 display control unit
516 peripheral device interface unit
518 bus
520 input device
530 display device
S102 to S110 steps of processing performed by the medical image processing apparatus
S202 to S208 steps of processing performed by the information processing apparatus
Claims (14)
- 1. A medical image processing apparatus comprising: a learning device that performs additional learning of a first computer-aided diagnosis apparatus based on an input medical image; an evaluation unit that compares a second computer-aided diagnosis apparatus obtained by the additional learning using the learning device with the first computer-aided diagnosis apparatus to evaluate whether learning difference information of the additional learning contributes to performance improvement of the first computer-aided diagnosis apparatus; a communication determination unit that determines, based on an evaluation result of the evaluation unit, whether communication of the learning difference information is necessary; and a communication unit that outputs the learning difference information in accordance with a determination result of the communication determination unit.
- 2. The medical image processing apparatus according to claim 1, further comprising: a feature amount extraction unit that outputs feature amount information of the medical image; and a recognition processing unit that performs recognition processing based on the feature amount information, wherein the learning device performs the additional learning based on the feature amount information and a recognition result of the recognition processing unit.
- 3. The medical image processing apparatus according to claim 1 or 2, further comprising a first information storage unit that stores parameter information of the first computer-aided diagnosis apparatus.
- 4. The medical image processing apparatus according to any one of claims 1 to 3, wherein the learning difference information includes parameter difference information indicating a difference between the parameter information of the first computer-aided diagnosis apparatus and parameter information of the second computer-aided diagnosis apparatus.
- 5. The medical image processing apparatus according to any one of claims 1 to 3, wherein the learning difference information includes the parameter information of the second computer-aided diagnosis apparatus.
- 6. The medical image processing apparatus according to claim 2, wherein the learning difference information includes the feature amount information.
- 7. The medical image processing apparatus according to any one of claims 1 to 6, wherein the learning difference information includes the medical image subjected to the additional learning.
- 8. The medical image processing apparatus according to any one of claims 1 to 7, wherein the first computer-aided diagnosis apparatus is a first recognizer created in advance by performing machine learning using a first learning data set, and the second computer-aided diagnosis apparatus is a second recognizer in which parameters of the first recognizer have been changed.
- 9. A machine learning system comprising: the medical image processing apparatus according to any one of claims 1 to 8; and an information processing apparatus that receives the learning difference information output from the communication unit and collects learning data including the learning difference information.
- 10. The machine learning system according to claim 9, wherein the information processing apparatus includes a learning processing unit that performs a learning process of the first computer-aided diagnosis apparatus using the received learning difference information.
- 11. The machine learning system according to claim 9 or 10, wherein the information processing apparatus further comprises a second information storage unit that stores the received learning difference information.
- 12. A medical image processing method performed by a medical image processing apparatus, the method comprising: a step of acquiring a medical image; a step of performing additional learning of a first computer-aided diagnosis apparatus using a learning device based on the acquired medical image; a step of comparing a second computer-aided diagnosis apparatus obtained by the additional learning using the learning device with the first computer-aided diagnosis apparatus to evaluate whether learning difference information of the additional learning contributes to performance improvement of the first computer-aided diagnosis apparatus; a step of determining, based on an evaluation result obtained by the evaluation, whether communication of the learning difference information is necessary; and a step of outputting the learning difference information from a communication unit in accordance with a determination result obtained by the determination.
- 13. A program for causing a computer to realize: a function of a learning device that performs additional learning of a first computer-aided diagnosis apparatus based on an input medical image; a function of comparing a second computer-aided diagnosis apparatus obtained by the additional learning using the learning device with the first computer-aided diagnosis apparatus to evaluate whether learning difference information of the additional learning contributes to performance improvement of the first computer-aided diagnosis apparatus; a function of determining, based on an evaluation result obtained by the evaluation, whether communication of the learning difference information is necessary; and a function of outputting the learning difference information from a communication unit in accordance with a determination result obtained by the determination.
- 14. A non-transitory computer-readable storage medium storing instructions that, when read by a computer, cause the computer to execute: a function of a learning device that performs additional learning of a first computer-aided diagnosis apparatus based on an input medical image; a function of comparing a second computer-aided diagnosis apparatus obtained by the additional learning using the learning device with the first computer-aided diagnosis apparatus to evaluate whether learning difference information of the additional learning contributes to performance improvement of the first computer-aided diagnosis apparatus; a function of determining, based on an evaluation result obtained by the evaluation, whether communication of the learning difference information is necessary; and a function of outputting the learning difference information from the communication unit in accordance with a determination result obtained by the determination.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19826063.0A EP3815610A4 (en) | 2018-06-28 | 2019-06-10 | MEDICAL IMAGE PROCESSING DEVICE AND METHOD, AUTOMATIC LEARNING SYSTEM, PROGRAM AND STORAGE MEDIA |
CN201980042852.8A CN112334070B (zh) | 2018-06-28 | 2019-06-10 | 医用图像处理装置和方法、机器学习系统以及存储介质 |
JP2020527357A JP7033202B2 (ja) | 2018-06-28 | 2019-06-10 | 医用画像処理装置及び方法、機械学習システム、プログラム並びに記憶媒体 |
CA3104779A CA3104779A1 (en) | 2018-06-28 | 2019-06-10 | Medical image processing apparatus, medical image processing method, machine learning system, and program |
US17/117,408 US20210125724A1 (en) | 2018-06-28 | 2020-12-10 | Medical image processing apparatus, medical image processing method, machine learning system, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-123441 | 2018-06-28 | ||
JP2018123441 | 2018-06-28 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/117,408 Continuation US20210125724A1 (en) | 2018-06-28 | 2020-12-10 | Medical image processing apparatus, medical image processing method, machine learning system, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020003990A1 true WO2020003990A1 (ja) | 2020-01-02 |
Family
ID=68984823
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/022908 WO2020003990A1 (ja) | 2018-06-28 | 2019-06-10 | 医用画像処理装置及び方法、機械学習システム、プログラム並びに記憶媒体 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20210125724A1 (ja) |
EP (1) | EP3815610A4 (ja) |
JP (1) | JP7033202B2 (ja) |
CN (1) | CN112334070B (ja) |
CA (1) | CA3104779A1 (ja) |
WO (1) | WO2020003990A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022138876A1 (ja) * | 2020-12-24 | 2022-06-30 | 富士フイルム株式会社 | 画像診断支援装置、画像診断支援装置の作動方法、及びプログラム |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210065899A1 (en) * | 2019-08-30 | 2021-03-04 | GE Precision Healthcare LLC | Methods and systems for computer-aided diagnosis with deep learning models |
TWI795108B (zh) * | 2021-12-02 | 2023-03-01 | 財團法人工業技術研究院 | 用於判別醫療影像的電子裝置及方法 |
CN115274099B (zh) * | 2022-09-26 | 2022-12-30 | 之江实验室 | 一种人与智能交互的计算机辅助诊断系统与方法 |
CN116664966B (zh) * | 2023-03-27 | 2024-02-20 | 北京鹰之眼智能健康科技有限公司 | 一种红外图像处理系统 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002174603A (ja) * | 2000-12-08 | 2002-06-21 | Olympus Optical Co Ltd | 欠陥分類方法 |
US20060018524A1 (en) * | 2004-07-15 | 2006-01-26 | Uc Tech | Computerized scheme for distinction between benign and malignant nodules in thoracic low-dose CT |
JP2007528746A (ja) | 2003-06-27 | 2007-10-18 | シーメンス メディカル ソリューションズ ユーエスエー インコーポレイテッド | Cad(コンピュータ援用決定)プロセスを日常的なcadシステム利用から得られた知識に適合させるために機械学習を利用する医用画像診断用のcad支援 |
WO2010005034A1 (ja) | 2008-07-11 | 2010-01-14 | クラリオン株式会社 | 音響処理装置 |
JP2013192624A (ja) * | 2012-03-16 | 2013-09-30 | Hitachi Ltd | 医用画像診断支援装置、医用画像診断支援方法ならびにコンピュータプログラム |
JP2015116319A (ja) | 2013-12-18 | 2015-06-25 | パナソニックIpマネジメント株式会社 | 診断支援装置、診断支援方法、および診断支援プログラム |
JP2017187918A (ja) * | 2016-04-05 | 2017-10-12 | Nippon Telegraph and Telephone Corporation | Training image selection apparatus, method, and program |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100138240A1 (en) * | 2008-11-28 | 2010-06-03 | David Leib | Data for Use of Accessible Computer Assisted Detection |
US9349178B1 (en) * | 2014-11-24 | 2016-05-24 | Siemens Aktiengesellschaft | Synthetic data-driven hemodynamic determination in medical imaging |
US20160259899A1 (en) * | 2015-03-04 | 2016-09-08 | Expeda ehf | Clinical decision support system for diagnosing and monitoring of a disease of a patient |
US10878320B2 (en) * | 2015-07-22 | 2020-12-29 | Qualcomm Incorporated | Transfer learning in neural networks |
JP2018061771A (ja) * | 2016-10-14 | 2018-04-19 | Hitachi, Ltd. | Image processing apparatus and image processing method |
KR101879207B1 (ko) * | 2016-11-22 | 2018-07-17 | Lunit Inc. | Method and apparatus for object recognition based on weakly supervised learning |
CA3083035A1 (en) * | 2017-10-13 | 2019-04-18 | Ai Technologies Inc. | Deep learning-based diagnosis and referral of ophthalmic diseases and disorders |
CN108056789A (zh) * | 2017-12-19 | 2018-05-22 | Vinno Technology (Suzhou) Co., Ltd. | Method and apparatus for generating configuration parameter values of an ultrasound scanning device |
2019
- 2019-06-10 JP JP2020527357A patent/JP7033202B2/ja active Active
- 2019-06-10 EP EP19826063.0A patent/EP3815610A4/en active Pending
- 2019-06-10 CA CA3104779A patent/CA3104779A1/en active Pending
- 2019-06-10 CN CN201980042852.8A patent/CN112334070B/zh active Active
- 2019-06-10 WO PCT/JP2019/022908 patent/WO2020003990A1/ja active Application Filing
2020
- 2020-12-10 US US17/117,408 patent/US20210125724A1/en active Pending
Non-Patent Citations (2)
Title |
---|
NOMURA YUKIHIRO: "Development, clinical use, and continuous performance improvement of CAD software based on multi-institutional collaboration in a teleradiology environment", MEDICAL IMAGING TECHNOLOGY, vol. 32, no. 2, 1 March 2014 (2014-03-01), pages 98 - 108, XP055666943 * |
See also references of EP3815610A4 |
Also Published As
Publication number | Publication date |
---|---|
EP3815610A4 (en) | 2021-09-15 |
JPWO2020003990A1 (ja) | 2021-07-08 |
US20210125724A1 (en) | 2021-04-29 |
CN112334070B (zh) | 2024-03-26 |
EP3815610A1 (en) | 2021-05-05 |
CN112334070A (zh) | 2021-02-05 |
JP7033202B2 (ja) | 2022-03-09 |
CA3104779A1 (en) | 2020-01-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020003990A1 (ja) | Medical image processing apparatus and method, machine learning system, program, and storage medium | |
Maghdid et al. | Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms | |
KR101818074B1 (ko) | Artificial intelligence-based automatic medical diagnosis assistance method and system | |
Narin et al. | Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks | |
Wang et al. | A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis | |
Rajpurkar et al. | Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists | |
Ghaderzadeh et al. | Deep convolutional neural network–based computer-aided detection system for COVID-19 using multiple lung scans: design and implementation study | |
US9760990B2 (en) | Cloud-based infrastructure for feedback-driven training and image recognition | |
Do et al. | Bone tumor diagnosis using a naïve Bayesian model of demographic and radiographic features | |
Javor et al. | Deep learning analysis provides accurate COVID-19 diagnosis on chest computed tomography | |
Ghayvat et al. | AI-enabled radiologist in the loop: novel AI-based framework to augment radiologist performance for COVID-19 chest CT medical image annotation and classification from pneumonia | |
Roy et al. | Early prediction of COVID-19 using ensemble of transfer learning | |
Montazeri et al. | Machine learning models for image-based diagnosis and prognosis of COVID-19: Systematic review | |
JP2022036125A (ja) | Contextual filtering of test values | |
Khalsa et al. | Artificial intelligence and cardiac surgery during COVID‐19 era | |
Khelili et al. | IoMT-fog-cloud based architecture for Covid-19 detection | |
Fallahpoor et al. | Generalizability assessment of COVID-19 3D CT data for deep learning-based disease detection | |
Iyer et al. | Performance analysis of lightweight CNN models to segment infectious lung tissues of COVID-19 cases from tomographic images | |
Deb et al. | CoVSeverity-Net: an efficient deep learning model for COVID-19 severity estimation from Chest X-Ray images | |
CN113257412B (zh) | Information processing method and apparatus, computer device, and storage medium | |
Kang et al. | Current challenges in adopting machine learning to critical care and emergency medicine | |
Chu et al. | Artificial Intelligence and Infectious Disease Imaging | |
Shahi et al. | Application of Deep Learning in Healthcare | |
Beniwal et al. | COVID Detection Using Chest X-ray Images Using Ensembled Deep Learning | |
Jiji et al. | A new model to detect COVID-19 patients based on Convolution Neural Network via l1 regularization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19826063 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020527357 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3104779 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2019826063 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2019826063 Country of ref document: EP Effective date: 20210128 |