WO2022163071A1 - Information processing device, method, and program - Google Patents
Information processing device, method, and program
- Publication number
- WO2022163071A1 (PCT/JP2021/041617)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- medical
- patient
- information processing
- diagnostic
- Prior art date
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H15/00—ICT specially adapted for medical reports, e.g. generation or transmission thereof
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- the present disclosure relates to an information processing device, method and program.
- in recent years, medical images acquired with imaging apparatuses such as CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) apparatuses have been analyzed by CAD (Computer-Aided Diagnosis) using a learning model trained by deep learning or the like, whereby abnormal shadows such as lesions contained in the medical images are detected and their properties, such as shape, density, position, and size, are determined.
- the analysis results obtained in this manner are stored in a database in association with examination information such as the patient's name, sex, age, and imaging device that acquired the medical image.
- the medical image and the analysis result are transmitted to the terminal of the interpretation doctor who interprets the medical image.
- the interpreting doctor interprets the medical image by referring to the delivered medical image and the analysis result at his/her own interpretation terminal, and creates an interpretation report.
- image diagnosis uses not only one type of image, such as a CT image, but also a plurality of types of images, including MRI images. For this reason, if only the analysis result of the medical image and the findings input by the interpreting doctor are used, as in the method described in Japanese Patent Application Laid-Open No. 2019-153250, the sentence generated by the learning model is not necessarily a medical text that accurately describes the patient's situation.
- the present disclosure has been made in view of the above circumstances, and aims to enable the generation of medical texts that accurately represent the patient's situation.
- An information processing apparatus according to the present disclosure includes at least one processor. The processor acquires one or more analysis results regarding a medical image of a patient, acquires diagnostic information about the patient's diagnosis other than the analysis results, and generates a medical statement about the patient based on the analysis results and the diagnostic information.
- the processor may select analysis results based on the diagnostic information and generate a medical document containing the selected analysis results.
- the processor may generate medical texts that include the analysis results prioritized according to the diagnostic information.
- the processor may generate medical text including diagnostic information and analysis results.
- the diagnostic information may include first information that is established regarding lesions included in medical images.
- the first information may include at least one of a lesion measurement result, a definitive diagnosis result of the lesion, and the patient's medical history.
- the diagnosis information may include confirmed second information other than the information regarding the lesion included in the medical image.
- the second information may include at least one of the purpose of the examination for which the medical image was obtained and the image conditions regarding the medical image.
- the diagnostic information may include third information representing the judgment result of the medical image interpreting doctor.
- the third information includes at least one of an undetermined diagnosis result regarding the medical image, a relationship between a lesion included in the medical image and tissue other than the lesion, and a selection result of the analysis result by the radiologist.
- the diagnostic information may include fourth information representing the results of examinations performed on the patient.
- the fourth information may include at least one of the result of an examination by diagnostic equipment different from the imaging apparatus that acquired the medical image of the patient, the analysis result of a type of medical image different from the medical image, and the examination result of the patient's biological information.
- An information processing method according to the present disclosure uses at least one processor to acquire one or more analysis results for a medical image of a patient, acquire diagnostic information about the patient's diagnosis other than the analysis results, and generate a medical statement about the patient based on the analysis results and the diagnostic information.
- the information processing method of the present disclosure may be provided as a program for causing a computer to execute the method.
- FIG. 1 is a diagram showing an example of the schematic configuration of a medical information system to which the information processing apparatus according to the first embodiment is applied;
- FIG. 2 is a block diagram showing an example of the hardware configuration of the information processing apparatus according to the first embodiment;
- FIG. 3 is a block diagram showing an example of the functional configuration of the information processing apparatus according to the first embodiment;
- FIG. 4 is a diagram showing an example of analysis results;
- FIG. 5 is a schematic diagram of a recurrent neural network;
- FIG. 6 is a block diagram showing an example of the functional configuration of an information processing apparatus according to a second embodiment;
- FIG. 1 is a diagram showing the schematic configuration of the medical information system 1.
- the medical information system 1 shown in FIG. 1 is a system that images an examination target region of a patient as a subject, stores the medical images acquired by the imaging, allows an interpreting doctor to interpret the medical images and create an interpretation report, and allows a doctor in the requesting clinical department to view the interpretation report and observe the medical images in detail.
- the medical information system 1 includes a plurality of imaging devices 2, a plurality of interpretation WSs (WorkStations) 3 serving as interpretation terminals, a clinical WS 4, an image server 5, an image DB (DataBase) 6, a report server 7, and a report DB 8, which are connected via a wired or wireless network 10 so as to be able to communicate with each other.
- Each device is a computer installed with an application program for functioning as a component of the medical information system 1.
- Application programs are recorded on recording media such as DVDs (Digital Versatile Discs) and CD-ROMs (Compact Discs Read Only Memory) for distribution, and are installed in computers from the recording media.
- alternatively, the application programs are stored in a storage device of a server computer connected to the network 10 or in network storage in a state accessible from the outside, and are downloaded and installed on a computer upon request.
- the imaging device 2 is a device (modality) that generates a medical image representing the diagnostic target region by imaging the diagnostic target region of the patient. Specifically, they are plain X-ray equipment, CT equipment, MRI equipment, and PET (Positron Emission Tomography) equipment. A medical image generated by the imaging device 2 is transmitted to the image server 5 and stored in the image DB 6 .
- the interpretation WS3 is a computer used by, for example, a radiology interpreting doctor to interpret medical images and create interpretation reports, and includes the information processing apparatus 20 (details will be described later) according to the first embodiment.
- the interpretation WS 3 requests the image server 5 to view medical images, performs various image processing on the medical images received from the image server 5, displays the medical images, and accepts input of remarks on the medical images. Further, the interpretation WS 3 performs analysis processing on medical images, supports creation of interpretation reports based on the analysis results, requests registration and viewing of interpretation reports to the report server 7, and displays interpretation reports received from the report server 7. These processes are performed by the interpretation WS 3 executing a software program for each process.
- the clinical WS 4 is a computer used by, for example, doctors in clinical departments for detailed observation of images, viewing of interpretation reports, and preparation of electronic medical charts.
- the medical examination WS 4 requests the image server 5 to view images, displays the images received from the image server 5 , requests the report server 7 to view interpretation reports, and displays the interpretation reports received from the report server 7 . These processes are performed by the clinical WS 4 executing a software program for each process.
- the image server 5 is a general-purpose computer on which a software program providing the functions of a database management system (DBMS) is installed. The image server 5 also has a storage in which the image DB 6 is configured. This storage may be a hard disk device connected to the image server 5 by a data bus, or a disk device connected to a NAS (Network Attached Storage) or a SAN (Storage Area Network) connected to the network 10. When the image server 5 receives a registration request for a medical image from the imaging device 2, it arranges the medical image into a database format and registers it in the image DB 6.
- the image server 5 stores diagnostic information related to patient diagnosis. Diagnostic information will be described later.
- the incidental information includes, for example, an image ID for identifying each medical image, a patient ID for identifying the patient, an examination ID for identifying the examination, a unique ID (UID: unique identification) assigned to each medical image, the examination date and time when the medical image was generated, the type of imaging apparatus used in the examination, patient information such as the patient's name, age, and sex, the examination site (imaging part), imaging information (imaging protocol, imaging sequence, imaging method, imaging conditions, use of a contrast agent, and the like), and information such as a series number or collection number when a plurality of medical images are acquired in one examination.
- when the image server 5 receives a viewing request from the interpretation WS 3 or the clinical WS 4 via the network 10, it searches for the medical images registered in the image DB 6 and transmits the retrieved medical images to the requesting interpretation WS 3 or clinical WS 4.
- the report server 7 is a general-purpose computer on which a software program providing the functions of a database management system is installed.
- when the report server 7 receives a registration request for an interpretation report from the interpretation WS 3, it arranges the interpretation report into a database format and registers it in the report DB 8.
- the interpretation report may contain information such as the medical image to be interpreted, an image ID for identifying the medical image, an interpreting doctor ID for identifying the interpreting doctor who performed the interpretation, a lesion name, position information of the lesion, and properties of the lesion.
- when the report server 7 receives a viewing request for an interpretation report from the interpretation WS 3 or the clinical WS 4 via the network 10, it searches for the interpretation report registered in the report DB 8 and transmits the retrieved interpretation report to the requesting interpretation WS 3 or clinical WS 4.
- the network 10 is a wired or wireless local area network that connects various devices in the hospital. If the image interpretation WS3 is installed in another hospital or clinic, the network 10 may be configured by connecting the local area networks of each hospital via the Internet or a dedicated line.
- the information processing device 20 includes a CPU (Central Processing Unit) 11, a nonvolatile storage 13, and a memory 16 as a temporary storage area.
- the information processing apparatus 20 also includes a display 14 such as a liquid crystal display, an input device 15 such as a keyboard and a pointing device such as a mouse, and a network I/F (InterFace) 17 connected to the network 10 .
- CPU 11 , storage 13 , display 14 , input device 15 , memory 16 and network I/F 17 are connected to bus 18 .
- the CPU 11 is an example of a processor in the present disclosure.
- the storage 13 is realized by HDD (Hard Disk Drive), SSD (Solid State Drive), flash memory, and the like.
- the information processing program 12 is stored in the storage 13 as a storage medium.
- the CPU 11 reads the information processing program 12 from the storage 13 , expands it in the memory 16 , and executes the expanded information processing program 12 .
- FIG. 3 is a diagram showing a functional configuration of the information processing apparatus according to the first embodiment;
- the information processing device 20 includes an information acquisition unit 21, an analysis unit 22, a text generation unit 23, and a display control unit 24.
- by executing the information processing program 12, the CPU 11 functions as the information acquisition unit 21, the analysis unit 22, the sentence generation unit 23, and the display control unit 24.
- the information acquisition unit 21 acquires a medical image G0 as an example of an image from the image server 5 via the network I/F 17.
- a lung CT image is used as the medical image G0.
- the information acquisition unit 21 also acquires, from the image server 5 via the network I/F 17, diagnostic information related to the diagnosis of the patient from whom the medical image G0 was acquired. Diagnostic information will be described later.
- the analysis unit 22 derives the analysis result of the medical image G0 by analyzing the medical image G0.
- the analysis unit 22 has a trained learning model 22A that detects an abnormal shadow, such as a lesion, included in the medical image G0 and determines the property of the detected abnormal shadow for each of a plurality of predetermined property items.
- property items specified for an abnormal lung shadow include, for example, the location of the abnormal shadow, the type of absorption value (solid type or ground-glass type), the presence or absence of spicules, the presence or absence of calcification, the presence or absence of cavities, the presence or absence of pleural indentation, the presence or absence of pleural contact, and the presence or absence of pleural infiltration.
- the property items are not limited to these.
- the learning model 22A is a convolutional neural network that has been machine-learned by deep learning or the like using teacher data so as to discriminate the properties of abnormal shadows in medical images.
- the learning model 22A is constructed, for example, by machine learning using a combination of a medical image containing an abnormal shadow and a property item representing the property of the abnormal shadow as teacher data.
- the learning model 22A outputs a property score derived for each property item in an abnormal shadow included in the medical image.
- the property score is a score that indicates how prominent each property item is.
- the property score takes a value of, for example, 0 or more and 1 or less, and the larger the value of the property score, the more remarkable the property.
- for example, when the property score for “presence or absence of spicules”, which is one of the property items of an abnormal shadow, is equal to or greater than a predetermined threshold value (for example, 0.5), the property for “presence or absence of spicules” is specified as “with spicules (positive)”; when the property score is less than the threshold value, it is specified as “no spicules (negative)”.
- the threshold value of 0.5 used for the property determination is merely an example, and an appropriate value is set for each property item. It should be noted that when the property score is near the threshold value (for example, 0.4 or more and 0.6 or less), the determination may be a false positive.
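As a rough illustration, the thresholding described above can be sketched in Python (the function name and the borderline band are illustrative; the patent gives 0.5 and 0.4-0.6 only as example values):

```python
def judge_property(score: float, threshold: float = 0.5,
                   band: tuple = (0.4, 0.6)) -> str:
    """Map a property score in [0, 1] to a judgement.

    Scores inside `band` are flagged as borderline, since the text
    notes that determinations near the threshold may be false positives.
    """
    if band[0] <= score <= band[1]:
        return "borderline"
    return "positive" if score >= threshold else "negative"

print(judge_property(0.8))   # a spicule score of 0.8 is clearly positive
print(judge_property(0.55))  # near the threshold: borderline
print(judge_property(0.1))   # well below the threshold: negative
```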
- FIG. 4 is a diagram showing an example of analysis results derived by the analysis unit 22.
- the property information specified by the analysis unit 22 includes property items such as the location of the abnormal shadow, the type of absorption value, spicules, calcification, cavities, and pleural indentation; in the example of FIG. 4, the specified properties are “upper left segment”, “solid type”, “spicules present”, “calcification present”, “cavities present”, and “pleural indentation absent”.
- in FIG. 4, “present”, i.e., positive, is indicated by +, and “absent”, i.e., negative, is indicated by −.
- as the learning model 22A, any learning model, such as a support vector machine (SVM), can be used.
- a learning model for detecting an abnormal shadow from the medical image G0 and a learning model for determining the properties of the abnormal shadow may be constructed separately.
- the sentence generation unit 23 generates medical sentences regarding the patient based on the analysis result derived by the analysis unit 22 and the diagnostic information acquired by the information acquisition unit 21 .
- the text generation unit 23 is composed of a learning model 23A constructed by machine learning so as to generate a medical text from the input information as an observation text to be written in the interpretation report.
- a learning model 23A for example, a neural network such as a recurrent neural network described in US Pat. No. 10181098 or US Pat. No. 10268671 can be used.
- the learning model 23A is constructed by making a recurrent neural network learn by supervised learning.
- the training data used at this time is data in which combinations of various analysis results and various diagnostic information are associated with various training sentences to be generated from the analysis results and diagnostic information.
- FIG. 5 is a diagram schematically showing a recurrent neural network.
- the recurrent neural network 40 consists of an encoder 41 and a decoder 42.
- the analysis result derived by the analysis unit 22 and the label of the diagnostic information are input to the encoder 41 .
- the encoder 41 receives the 1-hot representation of the analysis result and diagnostic information label.
- the 1-hot representation represents each label by a vector in which one component is 1 and the remaining components are 0. For example, with three-element vectors, (1,0,0), (0,1,0), and (0,0,1) represent three different labels.
- the encoder 41 converts the 1-hot representation of each label using the embedding matrix to derive the vector representation of each label.
- Each element of the embedding matrix is a learning parameter. Learning parameters are determined by machine learning of the recurrent neural network 40 .
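The 1-hot-to-vector conversion amounts to a matrix lookup, which can be sketched as follows (the labels and embedding values here are arbitrary placeholders; in the patent the embedding matrix entries are learned parameters of the recurrent neural network 40):

```python
import numpy as np

labels = ["location", "size", "spicule"]   # three example labels
one_hot = np.eye(len(labels))              # each row is a 1-hot vector

embedding_dim = 4
rng = np.random.default_rng(0)
E = rng.standard_normal((len(labels), embedding_dim))  # embedding matrix

# Multiplying a 1-hot vector by E simply selects that label's row,
# yielding the vector representation of the label:
v = one_hot[1] @ E                         # vector representation of "size"
assert np.allclose(v, E[1])
```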
- the decoder 42 is configured by connecting multiple networks, each consisting of an input layer, an intermediate layer, and an output layer. Each network receives the vector representation xt output by the encoder 41 and the output ht−1 of the preceding network. In the intermediate layer, the calculation shown in the following formula (1) is performed: ht = tanh(Wh·ht−1 + Wx·xt + b) (1)
- Wh, Wx, and b are learning parameters determined by learning.
- tanh is the activation function. Note that the activation function is not limited to tanh, and a sigmoid function or the like may be used.
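Assuming the intermediate-layer computation is the standard recurrent update ht = tanh(Wh·ht−1 + Wx·xt + b), a single step can be sketched as follows (shapes and values are illustrative; in the patent Wh, Wx, and b are determined by learning):

```python
import numpy as np

def rnn_step(h_prev, x_t, Wh, Wx, b):
    """One recurrent step: h_t = tanh(Wh @ h_prev + Wx @ x_t + b)."""
    return np.tanh(Wh @ h_prev + Wx @ x_t + b)

hidden_dim, input_dim = 3, 4
rng = np.random.default_rng(1)
Wh = rng.standard_normal((hidden_dim, hidden_dim))  # learned parameter
Wx = rng.standard_normal((hidden_dim, input_dim))   # learned parameter
b = np.zeros(hidden_dim)                            # learned parameter

h = np.zeros(hidden_dim)
for x_t in rng.standard_normal((5, input_dim)):     # a sequence of 5 inputs
    h = rnn_step(h, x_t, Wh, Wx, b)

# tanh keeps every component of the hidden state in (-1, 1)
assert np.all(np.abs(h) < 1.0)
```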
- suppose the analysis results and diagnostic information of the training data are “left lung subpleural”, “4.2 cm”, “spicula+”, and “mass”. Since “left lung subpleural” is a term for a location in the lung, it is given a label representing location. Since “4.2 cm” is a diameter, it is given a label representing size. “Spicula+” is given a label indicating positive spicules, and “mass” is given a label indicating a mass. These labels are input to the encoder 41, and a vector representation of each label is output.
- the output of the preceding stage and the vector representation are input to the input layer of each network, and the finding text “A [mass] of [size] diameter having [spicule] at [location] is recognized.” is output.
- the text generation unit 23 embeds the analysis results and diagnostic information into the labels included in the finding text output by the learning model 23A, thereby generating the finding “A mass with a diameter of 4.2 cm having spicules is found under the left lung pleura.”
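The final label-embedding step amounts to filling a template. A minimal sketch (the placeholder names and values follow the example above; the exact mechanism used in the patent is not specified):

```python
# Hypothetical template with labels output by the learning model
finding_template = ("A [mass] of [size] diameter having [spicule] "
                    "at [location] is recognized.")

# Analysis results and diagnostic information to embed
values = {
    "location": "the left subpleural region",
    "size": "4.2 cm",
    "spicule": "spicules",
    "mass": "mass",
}

# Replace each [label] placeholder with its concrete value
finding = finding_template
for label, value in values.items():
    finding = finding.replace(f"[{label}]", value)

print(finding)
```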
- the diagnostic information is information related to the patient's diagnosis other than the analysis result derived by the analysis unit 22 .
- first information D1 information that has been determined regarding the lesion included in the medical image
- second information D2 determined information other than information regarding the lesion included in the medical image
- third information D3 information representing the results of judgments made by radiologists on medical images
- fourth information D4 information representing examination results performed on patients
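For concreteness, the four categories above might be grouped in a data structure like the following (a hypothetical sketch; the field names are not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class DiagnosticInfo:
    """Hypothetical container for the four diagnostic-information categories."""
    lesion_facts: list = field(default_factory=list)            # D1: confirmed info about the lesion
    non_lesion_facts: list = field(default_factory=list)        # D2: confirmed info other than the lesion
    radiologist_judgements: list = field(default_factory=list)  # D3: interpreting doctor's judgements
    exam_results: list = field(default_factory=list)            # D4: results of other examinations

info = DiagnosticInfo(
    lesion_facts=["size: 4.2 cm", "history: primary lung cancer"],
    exam_results=["blood test: tumor marker elevated"],
)
print(len(info.lesion_facts))
```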
- the first information D1 confirmed about the lesion included in the medical image G0 includes, for example, measurement information such as the size (vertical and horizontal length, or area) of the lesion included in the medical image G0, the definitive diagnosis result for the lesion, the medical history of the patient from whom the medical image G0 was acquired, and the content of the treatment performed on the patient.
- as for the size of a lesion, information representing the change over time from the size of the lesion included in a medical image previously acquired for the same patient can also be used as the first information D1.
- the information representing change is information representing whether the size of the lesion has increased, decreased, or has not changed.
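The derivation of this change-over-time information can be sketched as a comparison of the current and prior lesion sizes. The function name and the tolerance value used to decide "no change" are illustrative assumptions not specified in the source.

```python
# Sketch of deriving the first information D1 "change over time" from the
# current lesion size and the size measured in a prior study of the same
# patient. The 1 mm tolerance is an assumption for illustration.
def size_change(current_mm: float, previous_mm: float, tol_mm: float = 1.0) -> str:
    if current_mm - previous_mm > tol_mm:
        return "increased"
    if previous_mm - current_mm > tol_mm:
        return "decreased"
    return "unchanged"
```

For example, a mass that grew from 40 mm to 46 mm between studies would be labeled "increased".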
- a definitive diagnosis of a lesion is a diagnosis established by a physician, such as a diagnosis that the lesion is cancer or a benign tumor.
- the patient's medical history is the history of diseases the patient from whom the medical image G0 was acquired has suffered in the past.
- the details of the treatment performed on the patient include details of the operation performed on the patient, the type of drug used, the amount of the drug, and the administration period.
- the diagnostic information may be directly included in the finding text (for example, "the major axis is 10 mm" or "the patient has a history of primary lung cancer or colon cancer").
- for example, the diagnostic information includes labels for size, definitive diagnosis of malignancy, and medical history as the first information D1, and the learning model 23A is constructed by training a neural network using teacher data including teacher sentences in which those labels are described.
- for example, the diagnostic information includes a size label as the first information D1
- the analysis results include labels of "nodule", "air bronchogram-", and "cavity-"
- the learning model 23A may be constructed by learning the recurrent neural network 40 using teacher data including teacher sentences that do not include label descriptions.
- the diagnostic information includes, as the first information D1, a label for a definitive diagnosis of malignancy such as primary lung cancer and a label for "enlargement", and the analysis results include labels for positive property items.
- the confirmed second information D2 other than the information about the lesion included in the medical image G0 is information about the medical image G0 that is not related to the lesion.
- examples include the purpose of the examination in which the medical image G0 was acquired and the imaging conditions under which it was obtained. Acquisition of the medical image G0 is performed as part of an examination for determining the patient's condition, and there are various examination purposes such as detailed examination and follow-up observation.
- the imaging conditions are conditions for generating the medical image G0, such as window level, window width, and slice interval.
- the content to be written in the finding text differs depending on whether the purpose of the examination is a detailed examination or follow-up observation.
- in the case of a detailed examination, detailed contents may be described in the findings, and in the case of follow-up observation, findings may be generated so as to describe the change in size (increase or decrease) over time.
- when the diagnostic information includes, as the second information D2, a label indicating that the purpose of the examination is a detailed examination, the learning model 23A may be constructed by training the recurrent neural network 40 using teacher data including teacher sentences describing the labels of all property items included in the analysis results.
- when the diagnostic information includes, as the second information D2, a label indicating that the purpose of the examination is follow-up observation, the learning model 23A may be constructed by training the recurrent neural network 40 using teacher data including teacher sentences that describe only the size-change label among the property labels included in the analysis results.
- when the diagnostic information includes, as the second information D2, a label indicating that the slice thickness of the CT image is 5 mm, the learning model 23A may be constructed by training the recurrent neural network 40 using teacher data that includes the positive property item labels among the property item labels included in the analysis results, and whose teacher sentences end with "... is suspected" instead of "... is found" for the positive property items.
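The slice-thickness rule above can be sketched as a conditional choice of sentence ending: with a thick slice the positive findings are hedged. The threshold of 5 mm follows the example in the text; the function name and English phrasings are illustrative assumptions.

```python
# Sketch: with a thick (e.g. 5 mm) slice interval, positive property items
# are phrased as "... is suspected" rather than "... is found", reflecting
# the lower diagnostic confidence of thick-slice CT images.
def phrase_positive_item(item: str, slice_thickness_mm: float) -> str:
    verb = "is suspected" if slice_thickness_mm >= 5.0 else "is found"
    return f"{item} {verb}."
```

In the trained model this choice is learned from the teacher sentences rather than applied as an explicit rule.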
- the third information D3 representing the judgment result of the interpreting doctor is information representing the result of interpretation, by the interpreting doctor who read the medical image G0, of the lesion included in the medical image G0. Specifically, it includes unconfirmed diagnosis results for the medical image G0, relationships between the lesion and tissue other than the lesion, and results of selection by the interpreting doctor among a plurality of analysis results.
- a finding text may be generated so that the third information D3 is described directly in the findings.
- for example, when the diagnostic information includes the label "mediastinal invasion is suspected" as the third information D3, the learning model 23A may be constructed by training the recurrent neural network using teacher data including a teacher sentence describing the label "mediastinal invasion is suspected".
- the fourth information D4 representing the results of examinations performed on the patient includes results of interpreting medical images acquired by an imaging device different from the one that acquired the medical image G0, and results of examinations other than image-based examinations, such as blood tests.
- for example, the differentiation of tuberculoma, one of the lung diseases, is performed using not only image diagnosis but also blood tests in combination. For this reason, diagnostic information such as blood test results may be described directly in the findings. For example, if the blood test result is "QuantiFERON negative" and the suspected symptom information based on that result is "non-tuberculous mycobacterial disease", a finding such as "QuantiFERON is negative, and non-tuberculous mycobacterial disease is suspected." may be generated.
- the diagnostic information includes, for example, a "blood test" label and a suspected-symptom label as the fourth information D4, and the learning model 23A may be constructed by training the recurrent neural network 40 using teacher data including teacher sentences in which these labels are described. In this case, the learning model 23A may be constructed so as to generate a finding text that includes the property item labels included in the analysis results, or so as to generate a finding text that does not include them.
- the diagnostic information may similarly be described in the findings. For example, if the test result is "low FDG uptake on PET" and the suspected symptom information based on that result is "rounded atelectasis or organizing pneumonia", a finding such as "FDG uptake on PET is also low, and rounded atelectasis or organizing pneumonia is suspected." may be generated. Further, the finding text may describe the analysis results related to the diagnostic information while omitting the analysis results unrelated to it.
- for analysis results related to the fourth information D4, the finding text may be generated so that they are described or omitted, so that their degree of confidence is lowered or raised, or so that they are emphasized as important.
- FIG. 6 is a diagram showing an example of diagnostic information and analysis results.
- the diagnostic information shown in FIG. 6 is the first information D1.
- the first information D1 includes a lesion type of "tumor”, a diameter of "maximum diameter of 46 mm", a definitive diagnosis of malignancy of "primary lung cancer", and a treatment content of "after Iressa treatment”.
- a change in size is an "increase.”
- These labels are Nodule, Diameter, Malignant, Treated and Progress respectively.
- the analysis results show that the area of the lesion in the lung is "upper left segment", the type of absorption value is solid type, the presence of spicules, the presence of calcification, the absence of cavities, and the presence of pleural contact.
- These labels are Segment, Solid, Spiculated+, Calcification+, Cavity-, PleuralContact+ respectively.
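The +/- suffixed label scheme used in these examples can be sketched as a direct mapping from per-property analysis results. The `property_labels` helper is an illustrative assumption; the property names follow the example above.

```python
# Sketch: turn per-property analysis results into the suffixed labels
# used in the examples (Spiculated+, Calcification+, Cavity-, etc.).
def property_labels(results: dict) -> list:
    return [name + ("+" if present else "-") for name, present in results.items()]

labels = property_labels({
    "Spiculated": True,
    "Calcification": True,
    "Cavity": False,
    "PleuralContact": True,
})
```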
- the diagnostic information shown in Figure 6 includes a definitive diagnosis of malignancy, primary lung cancer.
- the size change of the lesion over time is more important than the analysis result of the internal properties such as presence of spicules.
- in such a case, the finding text is generated so as not to include the analysis results representing internal properties but to include only the change in size over time, or so as not to include findings suggesting analysis results that contradict the definitive diagnosis. The learning model 23A is constructed by training a recurrent neural network using teacher data containing such combinations.
- the analysis results are sorted out based on the diagnostic information to generate observation sentences.
- when the diagnostic information and analysis results shown in Figure 6 are input, training is performed so that the finding "[Segment] is [Treated] for [Malignant]. [Nodule] has further increased to [Diameter]." is output, to construct the learning model 23A.
- the text generation unit 23 embeds the diagnostic information and analysis results into the labels of the finding text output by the learning model 23A, thereby generating the finding "Primary lung cancer in the left upper segment, after Iressa treatment. The mass has further increased to a maximum diameter of 46 mm."
- the diagnostic information labels are Nodule [tumor] and Diameter [length 48 mm]
- the analysis result labels are Segment [right lower lobe S6], Solid [solid type], IrregularForm [irregular shape], Spiculated+ [with spicules], Lobulated+ [with lobulation], Airbronchogram+ [with air bronchogram], Cavity+ [with cavities], Calcification- [without calcification], and PleuralContact+ [with pleural contact].
- the contents in parentheses indicate the specific content of each label.
- the finding text generated by the text generation unit 23 is "An irregularly shaped solid mass with a major diameter of 48 mm in contact with the pleura is found in the right lower lobe S6. It is lobulated and accompanied by spicules. An air bronchogram and a cavity are found inside. No calcification is found." In the generated finding, the endings of sentences for property items whose presence or absence is clear are "... is found.", "... is not found.", and "... is accompanied."
- the diagnostic information labels are Nodule [tumor] and Diameter [length 48 mm]
- the analysis result labels are Segment [right lower lobe S6], Solid [solid type], IrregularForm [irregular shape], Spiculated+ [with spicules], Lobulated+ [with lobulation], Airbronchogram? [air bronchogram uncertain], Cavity? [cavity uncertain], Calcification- [without calcification], and PleuralContact+ [with pleural contact]. Property items marked with "?" represent false-positive analysis results.
- the finding text generated by the text generation unit 23 is "An irregularly shaped solid mass with a major diameter of 48 mm in contact with the pleura is found in the right lower lobe S6. It is lobulated and accompanied by spicules. An air bronchogram and a cavity are suspected inside. No calcification is found."
- the endings of sentences for property items whose presence or absence is clear are "... is found.", "... is not found.", and "... is accompanied.", while sentences for properties whose presence or absence is unclear, that is, false-positive properties, end with "... is suspected."
- by adding an expression vector for "suspected" to the vector representation xt of the air bronchogram and the cavity input to the decoder 42, a finding text ending in "... is suspected." can be generated.
- alternatively, by using a vector representation in which the 1 component of the 1-hot representation of "air bronchogram" and "cavity" is changed according to the degree of confidence, a finding text ending in "... is suspected." can be generated. In this case, for example, if the 1-hot representation of "air bronchogram" is (1, 0, 0), the representation may be changed to (0.5, 0, 0) according to the degree of confidence.
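The confidence-weighted 1-hot input described above can be sketched directly. The helper name is an illustrative assumption; the (0.5, 0, 0) example matches the text.

```python
# Sketch: the "1" component of a property's 1-hot vector is replaced by
# its confidence score before being fed to the decoder, so that
# low-confidence properties yield sentences ending in "... is suspected".
def confidence_one_hot(index: int, size: int, confidence: float) -> list:
    vec = [0.0] * size
    vec[index] = confidence
    return vec

# "air bronchogram" at index 0 with confidence 0.5 gives (0.5, 0, 0)
x = confidence_one_hot(0, 3, 0.5)
```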
- as a result, the learning model 23A is constructed so as to generate sentences with endings expressing high certainty for property items whose presence or absence is clear, and endings expressing low certainty for property items whose presence or absence is unclear.
- each item included in the diagnostic information and analysis results is given an order according to the degree of importance.
- the storage 13 stores a table that defines the degree of importance for each item of diagnostic information and analysis results.
- the sentence generator 23 refers to the table stored in the storage 13 and assigns an order according to the degree of importance to each item of the diagnostic information and the analysis result.
- the recurrent neural network 40 is trained to change the importance according to whether a property is negative or positive and according to the diagnostic information, and to generate finding texts including a predetermined number of analysis results in order of importance, to construct the learning model 23A.
- calcification is generally a benign property.
- negative property items are not as important as positive property items. For this reason, the learning model 23A is constructed so as to lower the importance of calcification and of negative property items included in the analysis results, raise the importance of positive property items, and generate the finding text using a predetermined number of property items with high importance. Also, depending on the confirmed diagnosis result included in the diagnostic information, it may be better to describe a negative result for a specific property item in the finding text. In this case, the learning model 23A is constructed so as to assign a high importance to that specific property, even if negative, according to the diagnostic information.
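The importance-ordered selection just described can be sketched as a scoring-and-truncation step. The numeric scores, the special-casing of calcification, and the function name are illustrative assumptions; the patent learns this behavior inside the model rather than applying an explicit rule.

```python
# Sketch: generally benign findings (calcification) and negative items are
# demoted, positive items promoted, and only the top-n items are kept for
# the finding sentence.
def select_top_items(labels: list, n: int) -> list:
    def importance(label: str) -> int:
        if label.startswith("Calcification"):
            return 0                            # benign property: lowest priority
        return 2 if label.endswith("+") else 1  # positives before negatives
    return sorted(labels, key=importance, reverse=True)[:n]

picked = select_top_items(
    ["Spiculated+", "Calcification+", "Cavity-", "Lobulated+", "PleuralContact+"], 3)
```

Python's sort is stable, so items with equal importance keep their original order.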
- the diagnostic information labels are Nodule [mass] and Diameter [major diameter 24 mm]
- the analysis result labels are Segment [lower right lobe S6], Solid [solid type], Lobulated + [lobulated type].
- numbers are given to each item of diagnostic information and analysis results in order of importance.
- the learning model 23A is constructed so that the finding text includes the top n (for example, 5) labels with the highest importance among the diagnostic information and analysis results.
- HistoryOsteosarcoma (a history of osteosarcoma) is added to the diagnostic information as the medical history of the first information D1. Calcification that appears benign may form when there is a history of osteosarcoma. For this reason, if the patient has a history of osteosarcoma, it is necessary to ensure that the doctor reading the findings checks the medical image G0 for recurrence and metastasis of the osteosarcoma.
- in this case, the learning model 23A is constructed so as to generate a finding text with an increased importance for the label "no calcification".
- for example, the importance of "no calcification" is changed to 4, and the importance of each property item that previously had an importance of 4 to 8 is lowered by one, so that a finding text including "no calcification" is generated.
- the finding text generated by the text generation unit 23 is "A lobulated nodule with a major diameter of 24 mm is found in the right lower lobe S6. There is a history of osteosarcoma. No calcification is found."
- the learning model 23A may be constructed so that the observation statement includes the fact that the patient has a history of osteosarcoma.
- the display control unit 24 displays the medical text generated by the text generation unit 23 on the display 14.
- FIG. 11 is a flowchart showing processing performed in the first embodiment.
- the information acquisition unit 21 acquires the medical image G0 and diagnosis information for which a finding sentence is to be generated (step ST1).
- the analysis unit 22 analyzes the medical image G0 to derive the analysis result of the medical image G0 (step ST2).
- the text generation unit 23 generates an observation text regarding the patient as a medical text (step ST3).
- the display control unit 24 displays the medical text on the display 14 (step ST4), and the process ends.
- FIG. 12 is a diagram showing the functional configuration of the information processing apparatus according to the second embodiment.
- the same reference numerals are assigned to the same configurations as in FIG. 3, and detailed description thereof will be omitted.
- the information processing apparatus according to the second embodiment differs from the above embodiments in that the text generation unit 23 includes a selection unit 25 and a learning model 23B.
- the selection unit 25 selects the analysis results derived by the analysis unit 22 based on the diagnostic information.
- the storage 13 stores a table defining rules for selecting analysis results according to diagnostic information.
- FIG. 13 is a diagram showing an example of a table defining rules.
- the table T1 arranges the items of the analysis results in the horizontal direction and the items included in the diagnostic information in the vertical direction, and defines, for each item of the diagnostic information, whether or not each item of the analysis results is to be input to the learning model 23B.
- items of the analysis results that are not selected for given diagnostic information are marked with ×, and items that are selected are marked with ○.
- absorption value: solid type or ground-glass type
- margin: presence or absence of spicules
- internal properties: presence or absence of calcification, presence or absence of cavities
- surroundings: presence or absence of pleural invagination
- although the area, diameter, definitive diagnosis of malignancy, content of treatment, and change in size are used as the diagnostic information, only the diameter and the definitive diagnosis of malignancy are defined in the table. In addition, the diameter is classified into less than 5 mm and 5 mm or more but less than 10 mm.
- the selection unit 25 refers to the table T1 and selects the analysis results according to the definitive diagnosis of malignancy.
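The rule-table lookup performed by the selection unit can be sketched as a dictionary of conditions mapping to per-item keep/drop flags. The condition name, item keys, and `select_analysis` helper are illustrative assumptions; only the overall behavior (a definitive malignant diagnosis drops the absorption value, margin, internal, and surrounding items) follows the example.

```python
# Sketch of the rule table T1: for each diagnostic-information condition,
# which analysis-result items are passed on to the learning model.
RULES = {
    "malignant_confirmed": {
        "segment": True, "diameter": True,
        "absorption": False, "margin": False,
        "internal": False, "surroundings": False,
    },
}

def select_analysis(results: dict, condition: str) -> dict:
    rule = RULES[condition]
    # Items not listed in the rule default to "keep".
    return {k: v for k, v in results.items() if rule.get(k, True)}

selected = select_analysis(
    {"segment": "left upper", "absorption": "solid", "margin": "spicula+"},
    "malignant_confirmed")
```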
- the learning model 23B in the second embodiment is constructed by training a neural network such as a recurrent neural network, in the same manner as the learning model 23A, using teacher data in which combinations of various selected analysis results and various diagnostic information are associated with the teacher sentences to be generated from those analysis results and diagnostic information.
- the learning model 23A in the first embodiment selects from the input analysis results and generates a finding text, whereas in the second embodiment the analysis results input to the learning model 23B have already been selected. For this reason, the learning model 23B generates the finding text using all of the input analysis results and diagnostic information.
- the selection unit 25 refers to the table T1 and selects the analysis results. Since the diagnostic information shown in FIG. 5 includes a definitive diagnosis of malignancy, the selection unit 25 removes the absorption value, margin, internal property, and surrounding items from the analysis results, and the remaining analysis results are input to the learning model 23B. Specifically, the selection unit 25 inputs to the learning model 23B the label Nodule for the mass, the label Diameter for the maximum diameter of 46 mm, the label Segment for the left upper segment, the label Malignant for primary lung cancer, the label Treated for "after Iressa treatment", and the label Progress for the increase in size.
- the learning model 23B outputs the finding "[Segment] is [Treated] for [Malignant]. [Nodule] has further increased to [Diameter]."
- the text generation unit 23 embeds the diagnostic information and analysis results into the labels included in the finding text output by the learning model 23B, thereby generating the finding "Primary lung cancer in the left upper segment, after Iressa treatment. The mass has further increased to a maximum diameter of 46 mm."
- the analysis result of the analysis unit 22 may be the property score itself for each property item.
- the selection unit 25 determines whether a property item is positive or negative by comparing its property score with a threshold value, and the threshold value may be changed according to the diagnostic information. For example, when the diagnostic information and analysis results are as shown in FIG. 10, the threshold for determining calcification is lowered so that calcification is more readily judged to be present. In this case, if the table T1 is defined so that the presence of calcification is left in the analysis results, a finding text including the presence of calcification will be generated when there is a history of osteosarcoma. As a result, an interpreting doctor who sees the findings will interpret the medical image with an emphasis on calcification.
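The diagnosis-dependent thresholding just described can be sketched as follows. The specific score and threshold values are illustrative assumptions; only the direction of the adjustment (lowering the threshold when a history of osteosarcoma makes calcification clinically important) follows the text.

```python
# Sketch: a property score is compared against a threshold that is lowered
# when the patient history makes the property clinically important
# (e.g. calcification with a history of osteosarcoma).
def judge_property(score: float, base_threshold: float = 0.5,
                   history_osteosarcoma: bool = False) -> bool:
    threshold = 0.3 if history_osteosarcoma else base_threshold
    return score >= threshold
```

With a borderline score, the same lesion can thus be judged "calcification present" only for patients whose history warrants the lower threshold.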
- the technology of the present disclosure is applied when generating a finding sentence to be described in an interpretation report as a medical document, but it is not limited to this.
- the technology of the present disclosure may be applied when creating medical documents other than interpretation reports such as electronic charts and diagnosis reports, and other documents including character strings related to images.
- the diagnostic target is not limited to the lung.
- any part of the human body such as the heart, liver, brain, and limbs can be diagnosed.
- the processing of the analysis unit 22 in the information processing device 20 included in the interpretation WS3 may be performed by an external device such as another analysis server connected to the network 10, for example.
- the external device acquires the medical image G0 from the image server 5 and derives the analysis result by analyzing the medical image G0. Then, the information processing device 20 generates an observation sentence using the analysis result derived by the external device.
- the various processors include, in addition to the CPU, which is a general-purpose processor that executes software (programs) to function as various processing units: programmable logic devices (PLDs) such as FPGAs (Field Programmable Gate Arrays), which are processors whose circuit configuration can be changed; and dedicated electric circuits such as ASICs (Application Specific Integrated Circuits), which are processors having a circuit configuration designed exclusively for specific processing.
- one processing unit may be configured with one of these various processors, or with a combination of two or more processors of the same or different types (for example, a combination of multiple FPGAs, or a combination of a CPU and an FPGA).
- a plurality of processing units may be configured by one processor. In that case, this one processor functions as the plurality of processing units.
- as exemplified by a System On Chip (SoC), a processor that realizes the functions of an entire system including a plurality of processing units with a single IC chip may also be used.
- in this way, the various processing units are configured using one or more of the above various processors as a hardware structure. Furthermore, as the hardware structure of these processors, an electric circuit in which circuit elements such as semiconductor elements are combined can be used.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Physics & Mathematics (AREA)
- Radiology & Medical Imaging (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Pathology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- High Energy & Nuclear Physics (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
Description
Diagnostic information relating to the diagnosis of the patient other than the analysis results is acquired, and
a medical text relating to the patient is generated based on the analysis results and the diagnostic information.
A medical text including the selected analysis results may be generated.
Diagnostic information relating to the diagnosis of the patient other than the analysis results is acquired, and
a medical text relating to the patient is generated based on the analysis results and the diagnostic information.
2 imaging device
3 interpretation WS
4 medical care WS
5 image server
6 image DB
7 report server
8 report DB
10 network
11 CPU
12 information processing program
13 storage
14 display
15 input unit
16 memory
17 network I/F
18 bus
20 information processing apparatus
21 information acquisition unit
22 analysis unit
22A learning model
23 text generation unit
23A, 23B learning models
24 display control unit
25 selection unit
T1 table
Claims (14)
- An information processing apparatus comprising at least one processor, wherein the processor:
acquires one or more analysis results relating to a medical image of a patient;
acquires diagnostic information relating to the diagnosis of the patient other than the analysis results; and
generates a medical text relating to the patient based on the analysis results and the diagnostic information.
- The information processing apparatus according to claim 1, wherein the processor selects from the analysis results based on the diagnostic information, and generates a medical text including the selected analysis results.
- The information processing apparatus according to claim 1 or 2, wherein the processor generates the medical text including the analysis results with a priority according to the diagnostic information.
- The information processing apparatus according to any one of claims 1 to 3, wherein the processor generates the medical text including the diagnostic information and the analysis results.
- The information processing apparatus according to any one of claims 1 to 4, wherein the diagnostic information includes first information that has been confirmed regarding a lesion included in the medical image.
- The information processing apparatus according to claim 5, wherein the first information includes at least one of a measurement result of the lesion, a definitive diagnosis result for the lesion, and a medical history of the patient.
- The information processing apparatus according to any one of claims 1 to 6, wherein the diagnostic information includes confirmed second information other than information regarding the lesion included in the medical image.
- The information processing apparatus according to claim 7, wherein the second information includes at least one of the purpose of the examination in which the medical image was acquired and an imaging condition relating to the medical image.
- The information processing apparatus according to any one of claims 1 to 8, wherein the diagnostic information includes third information representing a judgment result by an interpreting doctor on the medical image.
- The information processing apparatus according to claim 9, wherein the third information includes at least one of an unconfirmed diagnosis result regarding the medical image, a relationship between the lesion included in the medical image and tissue other than the lesion, and a result of selection of the analysis results by the interpreting doctor.
- The information processing apparatus according to any one of claims 1 to 10, wherein the diagnostic information includes fourth information representing results of examinations performed on the patient.
- The information processing apparatus according to claim 11, wherein the fourth information includes at least one of an examination result from a diagnostic device different from the imaging device that acquires the medical image of the patient, an analysis result of a medical image of a type different from the medical image, and a test result of biological information of the patient.
- An information processing method comprising: acquiring one or more analysis results relating to a medical image of a patient;
acquiring diagnostic information relating to the diagnosis of the patient other than the analysis results; and
generating a medical text relating to the patient based on the analysis results and the diagnostic information.
- An information processing program causing a computer to execute: a procedure of acquiring one or more analysis results relating to a medical image of a patient;
a procedure of acquiring diagnostic information relating to the diagnosis of the patient other than the analysis results; and
a procedure of generating a medical text relating to the patient based on the analysis results and the diagnostic information.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21923076.0A EP4287195A4 (en) | 2021-01-27 | 2021-11-11 | DEVICE, METHOD AND PROGRAM FOR INFORMATION PROCESSING |
JP2022578065A JPWO2022163071A1 (ja) | 2021-01-27 | 2021-11-11 | |
US18/355,397 US20230360213A1 (en) | 2021-01-27 | 2023-07-19 | Information processing apparatus, method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021011350 | 2021-01-27 | ||
JP2021-011350 | 2021-01-27 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/355,397 Continuation US20230360213A1 (en) | 2021-01-27 | 2023-07-19 | Information processing apparatus, method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022163071A1 true WO2022163071A1 (ja) | 2022-08-04 |
Family
ID=82653158
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/041617 WO2022163071A1 (ja) | 2021-01-27 | 2021-11-11 | 情報処理装置、方法およびプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230360213A1 (ja) |
EP (1) | EP4287195A4 (ja) |
JP (1) | JPWO2022163071A1 (ja) |
WO (1) | WO2022163071A1 (ja) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023027663A (ja) * | 2021-08-17 | 2023-03-02 | 富士フイルム株式会社 | 学習装置、方法およびプログラム、並びに情報処理装置、方法およびプログラム |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017189237A (ja) * | 2016-04-11 | 2017-10-19 | 東芝メディカルシステムズ株式会社 | 読影支援装置 |
US10181098B2 (en) | 2014-06-06 | 2019-01-15 | Google Llc | Generating representations of input sequences using neural networks |
US10268671B2 (en) | 2015-12-31 | 2019-04-23 | Google Llc | Generating parse trees of text segments using neural networks |
JP2019149005A (ja) * | 2018-02-27 | 2019-09-05 | 富士フイルム株式会社 | 医療文書作成支援装置、方法およびプログラム |
JP2019153250A (ja) | 2018-03-06 | 2019-09-12 | 富士フイルム株式会社 | 医療文書作成支援装置、方法およびプログラム |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7212147B2 (ja) * | 2019-04-04 | 2023-01-24 | 富士フイルム株式会社 | 医療文書作成支援装置、方法およびプログラム |
-
2021
- 2021-11-11 JP JP2022578065A patent/JPWO2022163071A1/ja active Pending
- 2021-11-11 WO PCT/JP2021/041617 patent/WO2022163071A1/ja active Application Filing
- 2021-11-11 EP EP21923076.0A patent/EP4287195A4/en active Pending
-
2023
- 2023-07-19 US US18/355,397 patent/US20230360213A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10181098B2 (en) | 2014-06-06 | 2019-01-15 | Google Llc | Generating representations of input sequences using neural networks |
US10268671B2 (en) | 2015-12-31 | 2019-04-23 | Google Llc | Generating parse trees of text segments using neural networks |
JP2017189237A (ja) * | 2016-04-11 | 2017-10-19 | 東芝メディカルシステムズ株式会社 | 読影支援装置 |
JP2019149005A (ja) * | 2018-02-27 | 2019-09-05 | 富士フイルム株式会社 | 医療文書作成支援装置、方法およびプログラム |
JP2019153250A (ja) | 2018-03-06 | 2019-09-12 | 富士フイルム株式会社 | 医療文書作成支援装置、方法およびプログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP4287195A4 |
Also Published As
Publication number | Publication date |
---|---|
US20230360213A1 (en) | 2023-11-09 |
JPWO2022163071A1 (ja) | 2022-08-04 |
EP4287195A4 (en) | 2024-07-17 |
EP4287195A1 (en) | 2023-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kim et al. | Deep learning in diagnosis of maxillary sinusitis using conventional radiography | |
Habuza et al. | AI applications in robotics, diagnostic image analysis and precision medicine: Current limitations, future trends, guidelines on CAD systems for medicine | |
EP3043318A1 (en) | Analysis of medical images and creation of a report | |
Al-Ghamdi et al. | Detection of Dental Diseases through X‐Ray Images Using Neural Search Architecture Network | |
Warin et al. | Maxillofacial fracture detection and classification in computed tomography images using convolutional neural network-based models | |
Yang et al. | Assessing inter-annotator agreement for medical image segmentation | |
JP2017532998A (ja) | 縦断的特徴に基づく関心組織の健康状態の分類 | |
WO2019193982A1 (ja) | 医療文書作成支援装置、医療文書作成支援方法、及び医療文書作成支援プログラム | |
Kim et al. | Applications of artificial intelligence in the thorax: a narrative review focusing on thoracic radiology | |
JP2024009342A (ja) | 文書作成支援装置、方法およびプログラム | |
JP7436636B2 (ja) | 文書作成支援装置、方法およびプログラム | |
US20230360213A1 (en) | Information processing apparatus, method, and program | |
JP7420914B2 (ja) | 情報処理装置、情報処理方法及び情報処理プログラム | |
WO2021107099A1 (ja) | 文書作成支援装置、文書作成支援方法及びプログラム | |
Son et al. | Deep learning-based quantitative estimation of lymphedema-induced fibrosis using three-dimensional computed tomography images | |
Alidoost et al. | Model utility of a deep learning-based segmentation is not Dice coefficient dependent: A case study in volumetric brain blood vessel segmentation | |
WO2022196106A1 (ja) | 文書作成装置、方法およびプログラム | |
Bi et al. | MIB-ANet: A novel multi-scale deep network for nasal endoscopy-based adenoid hypertrophy grading | |
JP7371220B2 (ja) | 情報処理装置、情報処理方法及び情報処理プログラム | |
JP7299314B2 (ja) | 医療文書作成装置、方法およびプログラム、学習装置、方法およびプログラム、並びに学習済みモデル | |
WO2021107098A1 (ja) | 文書作成支援装置、文書作成支援方法及び文書作成支援プログラム | |
Langius-Wiffen et al. | External validation of the RSNA 2020 pulmonary embolism detection challenge winning deep learning algorithm | |
WO2022113587A1 (ja) | 画像表示装置、方法およびプログラム | |
JP7368592B2 (ja) | 文書作成支援装置、方法およびプログラム | |
JP7436698B2 (ja) | 医用画像処理装置、方法およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21923076 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022578065 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2021923076 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021923076 Country of ref document: EP Effective date: 20230828 |