WO2019176407A1 - Learning assisting device, learning assisting method, learning assisting program, region-of-interest discriminating device, region-of-interest discriminating method, region-of-interest discriminating program, and learned model - Google Patents


Info

Publication number
WO2019176407A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
interest
image
name
learning
Prior art date
Application number
PCT/JP2019/004771
Other languages
French (fr)
Japanese (ja)
Inventor
晶路 一ノ瀬
佳児 中村
嘉郎 北村
Original Assignee
富士フイルム株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士フイルム株式会社 filed Critical 富士フイルム株式会社
Priority to JP2020505688A priority Critical patent/JP7080304B2/en
Publication of WO2019176407A1 publication Critical patent/WO2019176407A1/en
Priority to US17/000,363 priority patent/US11468659B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • the present invention relates to a learning support device, a learning support method, a learning support program, a region of interest discrimination device, a region of interest discrimination method, a region of interest discrimination program, and a learned model.
  • In image diagnosis, a region name within an anatomical organ is specified and a disease name is diagnosed.
  • a plurality of regions are specified for a three-dimensional medical image such as a brain (region segmentation).
  • For region segmentation, semantic segmentation, which grasps the meaning of each region in an image, is used, as shown in Non-Patent Literature 1.
  • The image of the region of interest is extracted from the examination image, and the name of the region of interest is determined.
  • For machine learning, learning with a large amount of high-quality teacher data in line with the purpose is indispensable. It is considered that teacher data can be acquired from the large amount of data, such as interpretation reports, stored in a medical information system such as a PACS (Picture Archiving and Communication System).
  • An object of the present invention is to provide a learning support device, a learning support method, a learning support program, a region of interest discrimination device, a region of interest discrimination method, a region of interest discrimination program, and a learned model capable of easily acquiring a large amount of teacher data including the image of the region of interest and the name of the region of interest.
  • the learning support apparatus of the present invention includes a storage unit, an acquisition unit, a registration unit, and a learning unit.
  • the storage unit stores an interpretation report including image and character information.
  • the acquisition unit analyzes the image interpretation report to acquire the image of the region of interest included in the image and the name of the region of interest included in the character information.
  • the registration unit registers teacher data including the image of the region of interest acquired by the acquisition unit and the name of the region of interest.
  • the learning unit uses a plurality of teacher data registered in the registration unit to perform learning for generating a discrimination model that outputs an image of a region of interest and a name of the region of interest in response to input of an image of an interpretation report.
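  • The acquisition → registration → learning flow described above can be sketched as follows. This is a minimal, hypothetical illustration; the names (TeacherData, Registry, acquire) and the report structure are invented for this sketch and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TeacherData:
    roi_image: list  # pixel data of the region-of-interest image
    roi_name: str    # name of the region of interest, e.g. "right S1"

@dataclass
class Registry:
    """Stand-in for the registration unit: accumulates teacher data."""
    records: list = field(default_factory=list)

    def register(self, data: TeacherData) -> None:
        self.records.append(data)

def acquire(report: dict) -> TeacherData:
    """Stand-in for the acquisition unit: take the ROI crop from the
    report's image and the ROI name from its character information."""
    return TeacherData(roi_image=report["roi_crop"], roi_name=report["roi_name"])

# Acquisition -> registration loop over stored interpretation reports
registry = Registry()
for report in [{"roi_crop": [[0, 1], [1, 0]], "roi_name": "right S1"}]:
    registry.register(acquire(report))

print(len(registry.records), registry.records[0].roi_name)  # -> 1 right S1
```

The registered records would then be handed to a learning step that fits the discrimination model; that step is omitted here.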
  • the acquisition unit acquires the position information of the region of interest by analyzing the interpretation report.
  • Preferably, the image interpretation report has link information that associates the finding information included in the character information with the position information of the region of interest included in the image, and the acquisition unit acquires the position information of the region of interest from the link information.
  • the interpretation report has annotation information attached around the region of interest, and the acquisition unit acquires the position information of the region of interest from the annotation information.
  • Preferably, the acquisition unit acquires the image of the region of interest and the position information of the region of interest via a region-of-interest determination unit that determines the region of interest from the image of the interpretation report.
  • When the learning unit acquires the names of a plurality of regions of interest, including a first name and a second name different from the first name, by analyzing the interpretation report, it is preferable that the learning unit performs learning using first teacher data including the first name as the name of the region of interest and second teacher data including the second name as the name of the region of interest.
  • When the names of a plurality of regions of interest are acquired by analyzing the interpretation report, the learning unit preferably performs learning using region position information related to the position information of the region of interest, in addition to the teacher data.
  • Preferably, the acquisition unit refers to a hierarchical structure database that stores names of superordinate or subordinate concepts corresponding to the name of the region of interest, and determines the name of the superordinate or subordinate concept from the name of the region of interest, and the learning unit performs learning using teacher data including the image of the region of interest and the name of the superordinate or subordinate concept.
  • Preferably, the acquisition unit refers to a similar name database in which pluralities of mutually similar names for the name of the region of interest are stored in advance, and determines a representative name from the plurality of similar names, and the learning unit performs learning using teacher data including the image of the region of interest and the representative name.
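  • The two name-normalization lookups described above (the hierarchical structure database and the similar name database) can be sketched as simple tables. The table contents below are illustrative examples, not the patent's actual databases.

```python
# Hierarchical-structure database: name of the region of interest -> superordinate name
hierarchy_db = {
    "right S1": "right upper lobe",
    "right S2": "right upper lobe",
    "right upper lobe": "right lung",
}

# Similar-name database: groups of mutually similar names -> one representative name
similar_db = {
    frozenset({"pulmonary nodule", "lung nodule", "nodular shadow"}): "pulmonary nodule",
}

def superordinate(name: str) -> str:
    """Return the superordinate-concept name, or the name itself if unknown."""
    return hierarchy_db.get(name, name)

def representative(name: str) -> str:
    """Return the representative name for any member of a similar-name group."""
    for group, rep in similar_db.items():
        if name in group:
            return rep
    return name

print(superordinate("right S1"))      # -> right upper lobe
print(representative("lung nodule"))  # -> pulmonary nodule
```

Teacher data would then pair the ROI image with the normalized name, so that reports using different wording contribute to the same class.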
  • Preferably, when an interpretation report is newly stored in the storage unit, the acquisition unit newly acquires the image of the region of interest and the name of the region of interest, the registration unit registers new teacher data including the newly acquired image of the region of interest and name of the region of interest, and, when the new teacher data is registered, the learning unit generates an updated discrimination model by re-learning using a plurality of teacher data including the new teacher data.
  • Preferably, when a newly stored interpretation report and a past interpretation report for the same patient are stored in the storage unit, the acquisition unit acquires the image of the region of interest included in the image of the past interpretation report by aligning the image of the past interpretation report with the image of the newly stored interpretation report, the registration unit registers past-image teacher data consisting of the image of the region of interest acquired from the past interpretation report and the newly acquired name of the region of interest, and, when the past-image teacher data is registered, the learning unit generates an updated discrimination model by re-learning using a plurality of teacher data including the past-image teacher data.
  • the interpretation report preferably includes an electronic medical record.
  • Preferably, the acquisition unit acquires an anatomical organ name, an area name, a disease name, and a medical condition as the name of the region of interest. More specifically, the area name is preferably an area name of the lung, liver, or brain.
  • The learning support method of the present invention is a method of a learning support device including a storage unit, an acquisition unit, a registration unit, and a learning unit, in which the acquisition unit acquires the image of the region of interest included in the image and the name of the region of interest included in the character information by analyzing the interpretation report.
  • The learning support program of the present invention is a program for causing a computer to function as the storage unit, the acquisition unit, the registration unit, and the learning unit.
  • Another learning support apparatus of the present invention is a computer having a memory and a processor: the memory stores an image interpretation report including an image and character information, and the processor analyzes the image interpretation report to acquire the image of the region of interest included in the image and the name of the region of interest included in the character information, registers teacher data consisting of the acquired image of the region of interest and name of the region of interest, and, using a plurality of registered teacher data, performs learning to generate a discrimination model that outputs an image of the region of interest and a name of the region of interest.
  • the region-of-interest discrimination apparatus of the present invention includes a storage unit, an acquisition unit, a registration unit, a learning unit, and a discrimination unit.
  • The discrimination unit discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
  • The region-of-interest discrimination method of the present invention is a method of a region-of-interest discrimination device including a storage unit, an acquisition unit, a registration unit, a learning unit, and a discrimination unit, and includes an acquisition step, a registration step, a learning step, and a discrimination step of discriminating the image of the region of interest and the name of the region of interest using the discrimination model when the image of the interpretation report is input.
  • The region-of-interest discrimination program of the present invention causes a computer to function as the storage unit, the acquisition unit, the registration unit, the learning unit, and a discrimination unit that discriminates the image of the region of interest and the name of the region of interest using the discrimination model when an image of an interpretation report is input.
  • The learned model of the present invention causes a computer to function as a discrimination unit that, when an image of an interpretation report is input, discriminates the image of the region of interest and the name of the region of interest.
  • According to the present invention, it is possible to provide a learning support apparatus, a learning support method, and a learning support program that easily acquire the teacher data necessary for learning in the medical field from interpretation reports and generate a discrimination model.
  • the medical information system 2 includes a modality 11, an interpretation doctor terminal 12, a clinician terminal 13, an image management server 14, an image database 15, an interpretation report server 16, an interpretation report database 17, A learning support device 18 and an order management server 19 are included.
  • The modality 11, the interpretation doctor terminal 12, the clinician terminal 13, the image management server 14, the image database 15, the interpretation report server 16, the interpretation report database 17, the learning support device 18, and the order management server 19 are connected via a network 21, which is a local network such as a LAN (Local Area Network) laid in the medical facility.
  • This medical information system 2 is a system for imaging and storing a region to be examined of a subject based on examination orders from clinicians using a well-known ordering system, for interpretation of the captured images and creation of interpretation reports by an interpreting physician, and for browsing of the interpretation reports and detailed observation of the images to be interpreted by the requesting clinician.
  • An application program for causing each device to function as a component of the medical information system 2 is installed in each device.
  • the application program may be installed from a recording medium such as a CD-ROM or after being downloaded from a storage device of a server connected via a network such as the Internet.
  • The modality 11 photographs a region to be examined of a subject to generate an examination image 22 representing the region, and adds incidental information defined by the DICOM (Digital Imaging and Communications in Medicine) standard to the image.
  • the modality 11 includes a device that captures an image having three-dimensional organ information as the examination image 22.
  • Specific examples include a CT (Computed Tomography) apparatus 11A, an MRI (Magnetic Resonance Imaging) apparatus 11B, a PET (Positron Emission Tomography) apparatus (not shown), an ultrasonic apparatus (not shown).
  • In this embodiment, a lung is exemplified as the organ to be examined that is imaged by the modality 11 to generate an image.
  • The interpretation doctor terminal 12 is a computer used by interpreting doctors in the radiology department to interpret images and create interpretation reports.
  • It has a known hardware configuration including a CPU (Central Processing Unit), a main storage device, an auxiliary storage device, an input/output interface, a communication interface, an input device, a display device, and a data bus; a known operating system is installed, and the display device has one or more high-definition displays.
  • Processes such as an image transmission request to the image management server 14, display of images received from the image management server 14, automatic detection and highlighting of portions that appear to be lesions in the images, and creation and display of the interpretation report 23 are each performed by executing a software program for that process.
  • the interpretation doctor terminal 12 transfers the generated interpretation report 23 to the interpretation report server 16 via the network 21 and requests registration of the interpretation report in the interpretation report database 17.
  • The clinician terminal 13 is a computer used by doctors in the clinical department for detailed observation of images, browsing of interpretation reports, browsing and input of electronic medical records, and the like. It has a well-known hardware configuration such as a CPU, main storage device, auxiliary storage device, input/output interface, communication interface, input device, display device, and data bus; a well-known operating system is installed, and the display device has one or more high-definition displays.
  • Processes such as an image browsing request to the image management server 14, display of images received from the image management server 14, automatic detection or highlighting of portions that appear to be lesions in the images, an interpretation report browsing request to the interpretation report server 16, and display of interpretation reports received from the interpretation report server 16 are each performed by executing a software program for that process.
  • the image management server 14 incorporates a software program that provides a database management system (DBMS) function in a general-purpose computer.
  • the image management server 14 includes a large-capacity storage in which the image database 15 is configured.
  • This storage may be a large-capacity hard disk device connected to the image management server 14 through a data bus, or a disk device connected to NAS (Network Attached Storage) or a SAN (Storage Area Network) connected to the network 21.
  • In the image database 15, examination images (image data) 22 obtained by photographing a plurality of patients with the modality 11, together with their incidental information, are registered.
  • The incidental information includes, for example, an image ID for identifying individual images, a patient ID for identifying the subject, an examination ID for identifying the examination, a unique ID (UID) assigned to each examination image 22, the examination date and examination time when the examination image 22 was generated, the type of modality 11 used in the examination to obtain the examination image, patient information such as patient name, age, and gender, the examined part (imaging part), imaging conditions (such as whether a contrast medium was used, and the radiation dose), and a series number when a plurality of tomographic images are acquired in one examination.
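  • As an illustration, the incidental information listed above can be held as a simple record keyed by DICOM-style attribute names. The attribute names follow common DICOM conventions, but the values and the `images_for_patient` helper are hypothetical; a real system would read the tags with a DICOM library such as pydicom.

```python
# One examination image's incidental information (values are made up)
incidental_info = {
    "SOPInstanceUID": "1.2.392.0000.1",  # unique ID (UID) per examination image
    "PatientID": "P000123",
    "StudyID": "E000456",
    "StudyDate": "20190214",             # examination date
    "Modality": "CT",                    # type of modality used
    "PatientName": "YAMADA TARO",
    "PatientAge": "062Y",
    "PatientSex": "M",
    "BodyPartExamined": "CHEST",         # examined (imaging) part
    "SeriesNumber": 3,                   # series number within one examination
}

def images_for_patient(records, patient_id):
    """Select registered examination images for one subject by patient ID."""
    return [r for r in records if r["PatientID"] == patient_id]

db = [incidental_info]
print(len(images_for_patient(db, "P000123")))  # -> 1
```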
  • When the image management server 14 receives a browsing request from the interpretation doctor terminal 12 via the network 21, it searches the examination images registered in the image database 15 and transmits the extracted examination image to the requesting interpretation doctor terminal 12.
  • The interpretation report server 16 incorporates a software program that provides a database management system (DBMS) function in a general-purpose computer. When the interpretation report server 16 receives a registration request for an interpretation report 23 from the interpretation doctor terminal 12, it prepares the interpretation report 23 in a database format and registers it in the interpretation report database 17.
  • In the interpretation report database 17, an interpretation report 23 is registered in which information is recorded such as, for example, an image ID for identifying the interpretation target image or a representative image, an interpretation doctor ID for identifying the interpreting doctor who performed the interpretation, a lesion name, lesion position information, findings, and the degree of confidence in the findings.
  • the order management server 19 receives the examination order issued by the clinician terminal 13 and manages the accepted examination order.
  • The examination order includes, for example, an order ID for identifying each examination order, the terminal ID or clinician ID of the clinician terminal 13 that issued the examination order, the patient ID of the subject to be imaged by the examination order, the examination purpose such as follow-up observation, the imaging part such as head or chest, and the orientation such as supine or prone.
  • The radiology technologist confirms the contents of the examination order with the order management server 19, sets the imaging conditions corresponding to the confirmed examination order in the modality 11, and takes the medical image.
  • The region-of-interest discrimination device 25 is incorporated in, for example, the interpretation doctor terminal 12, and discriminates the image of each area of an organ, or of a part that appears to be a lesion, in the examination image 22, and the name of each area of the organ or of the region of interest.
  • the interpretation doctor terminal 12 performs color-coded display of each area of the organ or highlighting of the region of interest based on the determination result.
  • FIG. 2 is a functional block diagram of the learning support device 18 constituting the region-of-interest discrimination device 25.
  • the region-of-interest discrimination device 25 of the present invention is used together with the learning support device 18 connected to the network 21 and the interpretation report database 17 (see FIG. 1).
  • the interpretation report database 17 functions as a storage unit of the present invention.
  • The learning support device 18 is composed of a general-purpose computer and has a known hardware configuration including a CPU, a main storage device, an auxiliary storage device such as an HDD (hard disk drive) or SSD (solid state drive), an input/output interface, a communication interface, an input device, a display device, and a data bus; a known operating system is installed. Data is transmitted to and received from the image database 15 and the interpretation report database 17 connected to the network 21 via the communication interface.
  • In this embodiment, the learning support device 18 is provided independently of the interpretation doctor terminal 12, the clinician terminal 13, the image management server 14, and the interpretation report server 16, but it is not limited to this; it may be provided in any one of these servers or terminals.
  • the learning support device 18 includes an acquisition unit 26, a registration unit 27, a storage device 28, a learning unit 29, and a control unit 31.
  • The acquisition unit 26 analyzes the image interpretation report 23 to acquire the image of the region of interest included in the examination image 22 of the image interpretation report 23 and the name of the region of interest included in the character information of the image interpretation report 23.
  • the name of the lung area is acquired as the name of the region of interest.
  • the lung as an anatomical organ is divided into lung lobes or lung areas.
  • The right lung RL is divided, as lung lobes, into the right upper lobe RUL, right middle lobe RML, and right lower lobe RLL, and the left lung LL is divided, as lung lobes, into the left upper lobe LUL and left lower lobe LLL.
  • As lung areas, the right upper lobe RUL is divided into the right pulmonary apex area S1 (hereinafter abbreviated as right S1; the same applies to the following lung areas), the right posterior upper lobe area S2, and the right anterior upper lobe area S3.
  • the right middle lobe RML is divided into a right outer middle lobe section S4 and a right inner middle lobe section S5 as lung sections.
  • the lower right lobe RLL is divided into upper right lobe / lower lobe section S6, right inner lung base section S7, right front lung base section S8, right outer lung base section S9, and right rear lung base section S10 as lung sections.
  • the left upper lobe LUL is divided into a left posterior apex segment S1+2, a left anterior upper lobe segment S3, a left upper tongue segment S4, and a left lower tongue segment S5 as lung segments.
  • the lower left lobe LLL is divided into a left upper lobe / lower lobe segment S6, a left anterior lung floor segment S8, a left outer lung floor segment S9, and a left rear lung floor segment S10.
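  • The lobe/area division described in the bullets above can be expressed as a lookup table, which is how a name-acquisition step might map an extracted area name such as "right S1" back to its lobe. The table below simply transcribes the bullets; the abbreviated labels are illustrative.

```python
# Lung lobes and the lung areas they contain (per the division above)
LUNG_SEGMENTS = {
    "right upper lobe (RUL)": ["right S1", "right S2", "right S3"],
    "right middle lobe (RML)": ["right S4", "right S5"],
    "right lower lobe (RLL)": ["right S6", "right S7", "right S8", "right S9", "right S10"],
    "left upper lobe (LUL)": ["left S1+2", "left S3", "left S4", "left S5"],
    "left lower lobe (LLL)": ["left S6", "left S8", "left S9", "left S10"],
}

def lobe_of(segment: str) -> str:
    """Return the lung lobe containing a given lung area name."""
    for lobe, segments in LUNG_SEGMENTS.items():
        if segment in segments:
            return lobe
    raise KeyError(segment)

print(lobe_of("right S1"))  # -> right upper lobe (RUL)
```

Such a table could also serve as the hierarchical structure database mentioned earlier, giving the superordinate (lobe) name for any area name.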
  • the control unit 31 controls the processing flow of the acquisition unit 26, the registration unit 27, and the learning unit 29.
  • A process in which the acquisition unit 26 acquires the image of the region of interest included in the examination image 22 of the interpretation report 23 and the name of the region of interest included in the character information of the interpretation report 23 will be described with reference to FIGS. 4 and 5.
  • The interpretation report 23 includes the examination image 22 to be interpreted, accompanying information 23A, finding information 23B, and link information 23C.
  • the accompanying information 23A is character information attached to the examination image 22 to be interpreted, such as a patient ID, examination ID, and examination date.
  • The finding information 23B is obtained by editing the findings of the interpreting doctor who interpreted the examination image 22 to be interpreted, and is character information input from the interpretation doctor terminal 12.
  • the link information 23C is used when the interpretation report 23 is displayed on the display as will be described later, and is link information that associates the finding information 23B with the position information of the region of interest included in the examination image 22.
  • FIG. 5 is an example of a display screen 32 when the interpretation report 23 is displayed on the display of the interpretation doctor terminal 12 or the clinician terminal 13.
  • The display screen 32 includes an accompanying information display field 32A in which the accompanying information 23A is displayed, a finding field 32B in which the finding information 23B is displayed, and an image display field in which thumbnail images of the examination image 22 to be interpreted are displayed.
  • The finding field 32B contains "pulmonary nodule", indicating the lesion name of the region of interest, and "right S1", the lung area name of the region of interest.
  • the interpretation report 23 includes link information 23C.
  • This link information 23C associates the wording "pulmonary nodule", indicating the lesion name in the finding information 23B, with the position information of the region of interest included in the examination image 22. Specifically, the position information of the region of interest consists of the coordinates of the region of interest in the examination image 22 and a range centered on those coordinates.
  • by using the link information 23C, when the interpretation doctor terminal 12 or the clinician terminal 13 is operated and the highlighted “pulmonary nodule” on the display screen 32 is selected, the image 22A of the region of interest can be displayed on the display based on the position information of the associated region of interest. In the illustrated example, an image 32D enlarged around the position information of the region of interest included in the link information 23C is displayed.
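The link information described above can be sketched as a small data structure. This is a minimal illustrative sketch; the field names (`phrase`, `center`, `radius`) and values are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of link information 23C: it associates a wording in the
# finding text (e.g. "pulmonary nodule") with the coordinates of the region of
# interest in the examination image and a range centered on those coordinates.
from dataclasses import dataclass

@dataclass
class LinkInfo:
    phrase: str    # highlighted wording in the finding text
    center: tuple  # (x, y) coordinates of the region of interest
    radius: int    # half-width of the range centered on the coordinates

def crop_box(link: LinkInfo) -> tuple:
    """Return the (left, top, right, bottom) box to enlarge when the
    highlighted phrase is selected on the display screen."""
    (x, y), r = link.center, link.radius
    return (x - r, y - r, x + r, y + r)

link = LinkInfo(phrase="pulmonary nodule", center=(120, 80), radius=16)
print(crop_box(link))  # (104, 64, 136, 96)
```

Selecting the highlighted phrase would then display the image cropped to this box, enlarged around the stored coordinates.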
  • the acquisition unit 26 analyzes the text of the finding information 23B of the interpretation report 23 and acquires “right S1”, indicating the name of the lung area, as the name of the region of interest from the interpretation report.
  • the registration unit 27 registers the teacher data 33 including the region-of-interest image 22A acquired by the acquisition unit 26 and the name of the region of interest in the storage device 28.
  • the storage device 28 may be a part of a storage device such as an HDD (hard disk drive) or SSD (solid state drive) provided in the learning support device 18, or may be a storage device connected via the network 21.
  • the registration unit 27 registers a plurality of teacher data 33. For example, the acquisition of the image 22A of the region of interest and the name of the region of interest from the interpretation report 23 and the registration of the teacher data 33 are repeated until a predetermined number of teacher data 33 for the machine learning described later has been registered in the storage device 28, or until teacher data 33 based on all the interpretation reports 23 registered in the interpretation report database 17 has been registered.
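The acquire-and-register loop above can be sketched as follows. The helper names (`build_teacher_data`, `extract_roi`) and the toy report records are hypothetical; how a real report is parsed is not specified here.

```python
# Sketch of repeatedly acquiring (ROI image, ROI name) pairs from interpretation
# reports and registering them as teacher data until a target count is reached.
def build_teacher_data(reports, extract_roi, target_count):
    """extract_roi(report) -> (roi_image, roi_name) or None."""
    teacher_data = []
    for report in reports:
        pair = extract_roi(report)
        if pair is None:
            continue  # report without usable ROI information is skipped
        teacher_data.append({"image": pair[0], "name": pair[1]})
        if len(teacher_data) >= target_count:
            break  # predetermined number of teacher data registered
    return teacher_data

reports = [{"finding": "right S1 nodule", "image": "img1"},
           {"finding": "", "image": "img2"},
           {"finding": "right S2 shadow", "image": "img3"}]

def extract_roi(report):
    # Stand-in for the text analysis performed by the acquisition unit.
    if "S1" in report["finding"]:
        return (report["image"], "right S1")
    if "S2" in report["finding"]:
        return (report["image"], "right S2")
    return None

data = build_teacher_data(reports, extract_roi, target_count=10)
print([d["name"] for d in data])  # ['right S1', 'right S2']
```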
  • the learning unit 29 uses a plurality of teacher data 33 registered in the storage device 28 to perform learning for generating a discrimination model 34 that outputs an image 22A of the region of interest and the name of the region of interest in response to the input of the examination image 22 of the interpretation report 23.
  • the discrimination model 34 is generated using a machine learning method such as deep learning. For example, a plurality of teacher data 33 are input, and the machine learning algorithm is made to learn the relationship between the position information of the region of interest and the feature amount (pixel value, etc.) of each voxel.
  • machine learning is performed so as to minimize the error between the position information output when the feature amounts around the region of interest (among the feature amounts of each voxel) are input and the position information of the region of interest in the teacher data 33; to this end, the weighting coefficients used in the learning algorithm are updated.
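The weight update described above can be illustrated with a deliberately tiny model. This is not the patent's actual algorithm: it assumes a single scalar weight, a linear prediction, and squared-error gradient descent, purely to show how a weighting coefficient is updated so that the error against the teacher position shrinks.

```python
# Illustrative weight update: the coefficient w is adjusted so the error
# between the predicted position (w * feature) and the teacher position
# information is minimized, via gradient descent on the squared error.
def train_weight(samples, lr=0.01, epochs=200):
    """samples: list of (feature, true_position); learn w in pred = w * feature."""
    w = 0.0
    for _ in range(epochs):
        for feature, true_pos in samples:
            pred = w * feature
            grad = 2 * (pred - true_pos) * feature  # d/dw of squared error
            w -= lr * grad                          # weighting-coefficient update
    return w

# Feature values around the ROI paired with teacher position information.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_weight(samples)
print(round(w, 3))  # 2.0
```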
  • the discrimination model 34 generated by the learning unit 29 is transmitted to the region of interest discrimination device 25.
  • the determination unit 35 of the region-of-interest determination device 25 outputs the image 22A of the region of interest in the examination image 22 and the name of the region of interest using the determination model 34.
  • the discrimination model 34 includes a weighting coefficient determined by using the above-described machine learning method.
  • the discrimination model 34 is used for discriminating the region-of-interest image 22A and the region-of-interest name.
  • the discriminating unit 35 discriminates the image of the region of interest and the name of the region of interest in the examination image using the discrimination model 34.
  • the discriminating unit 35 displays, for example, the discriminated region-of-interest image 36 and the region-of-interest name 37 on the display of the region-of-interest discriminating apparatus 25.
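The discrimination step can be sketched with an assumed interface. The real discrimination model 34 is a deep network; here a 1-nearest-neighbour lookup over teacher feature vectors stands in for it, only to show the input/output contract (examination-image features in, ROI name out).

```python
# Assumed-interface sketch of the discrimination unit: given an examination
# image's feature vector, return the ROI name whose teacher feature is nearest.
def discriminate(model, feature):
    """model: list of (teacher_feature, roi_name); 1-nearest-neighbour lookup."""
    best_name, best_dist = None, float("inf")
    for teacher_feature, name in model:
        dist = sum((a - b) ** 2 for a, b in zip(feature, teacher_feature))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

model = [((0.1, 0.9), "right S1"), ((0.8, 0.2), "right S2")]
print(discriminate(model, (0.2, 0.8)))  # right S1
```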
  • the learning support device 18 first reads the interpretation report 23 regarding the lung disease from the interpretation report database 17 (S101).
  • the acquisition unit 26 acquires the image of the region of interest included in the examination image 22 of the interpretation report 23 and the name of the region of interest included in the character information of the interpretation report 23 (S102), and the registration unit 27 registers the teacher data 33 including the image of the region of interest acquired by the acquisition unit 26 and the name of the region of interest in the storage device 28 (S103).
  • together with the name of the region of interest, “right S1”, the position information of the region of interest included in the link information 23C is acquired.
  • the learning unit 29 performs learning for generating the discrimination model 34 using a plurality of registered teacher data 33 (S104).
  • the generated discrimination model 34 is transmitted to the region of interest discrimination device 25.
  • when a new examination image 22 is input to the region-of-interest discrimination device 25 (YES in S105), the discrimination unit 35 outputs the image of the region of interest and the name of the region of interest in the examination image 22 using the discrimination model 34. For example, color-coded display of lung areas or name display of lung areas is performed (S106).
  • the learning support device 18 performs learning based on teacher data including the image of the region of interest and the name of the region of interest acquired from the interpretation report 23. Teacher data can therefore be obtained from the interpretation reports conventionally stored in medical information systems, and the teacher data necessary for learning can be acquired easily.
  • since the discrimination model 34 can be generated using this teacher data, and the discrimination model 34 is generated based on interpretation reports in which correct information is recorded, the discrimination accuracy can be easily improved.
  • the region-of-interest discrimination device 25 capable of outputting “right S1” as the region of interest is illustrated, but the present invention is not limited to this, and a plurality of regions of interest may be discriminated simultaneously. That is, a discrimination model that can discriminate a plurality of regions of interest at the same time, such as “right S1” for a certain range of voxels in the input examination image and “right S2” for another range of voxels, may be created. Further, such a discrimination model may be applied to the region-of-interest discrimination device so that a plurality of regions of interest can be discriminated simultaneously.
  • the acquisition unit 26 may acquire the position information of the region of interest from the annotation information.
  • the annotation information is information added as annotation to image data or the like.
  • an arrow 38 as annotation information is included in the examination image 22, and the arrow 38 is attached around the region of interest.
  • the acquisition unit 26 acquires, for example, the coordinates of the tip position of the arrow 38 in the inspection image 22 as position information.
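Extracting position information from such an annotation can be sketched as follows; the annotation record layout (`type`, `tail`, `tip` fields) is hypothetical, not a format defined by the patent.

```python
# Sketch of acquiring ROI position information from annotation information:
# the arrow's tip coordinates in the examination image are taken as the
# position of the region of interest.
def roi_position_from_arrow(annotation):
    """annotation: {'type': 'arrow', 'tail': (x, y), 'tip': (x, y)}."""
    if annotation.get("type") != "arrow":
        return None  # only arrow annotations carry a pointed-at position
    return annotation["tip"]  # tip points at the region of interest

arrow = {"type": "arrow", "tail": (10, 10), "tip": (42, 58)}
print(roi_position_from_arrow(arrow))  # (42, 58)
```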
  • the acquisition unit 26 may include a region of interest determination unit, and the region of interest determination unit may determine the region of interest from the inspection image 22 of the interpretation report 23.
  • the region-of-interest discrimination unit has the same configuration as the region-of-interest discrimination device 25 of the above-described embodiment; the image and position information of the region of interest discriminated by it are acquired and registered as new teacher data.
  • the acquisition unit 26 analyzes the image interpretation report 23 and acquires the names of a plurality of regions of interest including a first name and a second name different from the first name.
  • the learning unit 29 performs learning using the first teacher data including the first name as the name of the region of interest and the second teacher data including the second name as the name of the region of interest.
  • the registration unit 27 registers the teacher data 33 using the name “right S1” and the teacher data 33 using the name “right S2”.
  • the learning unit 29 updates the weighting coefficients used in the machine learning algorithm so that both the error in the position information of the region of interest when the teacher data 33 using the name “right S1” is input and the error in the position information of the region of interest when the teacher data 33 using the name “right S2” is input are reduced. Thereby, the accuracy of learning can be improved.
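The joint update over two named teacher sets can be sketched as follows. As before this is a toy stand-in, not the actual algorithm: one scalar coefficient per region, both updated in the same training loop so that both position errors shrink together.

```python
# Sketch of updating coefficients so that the position errors for both
# teacher sets (e.g. "right S1" and "right S2") are reduced in the same run.
def train_two_regions(s1_samples, s2_samples, lr=0.01, epochs=200):
    w = [0.0, 0.0]  # one coefficient per region, shared update loop
    for _ in range(epochs):
        for i, samples in enumerate((s1_samples, s2_samples)):
            for feature, true_pos in samples:
                grad = 2 * (w[i] * feature - true_pos) * feature
                w[i] -= lr * grad  # both errors shrink across each epoch
    return w

w = train_two_regions([(1.0, 3.0), (2.0, 6.0)], [(1.0, 5.0), (2.0, 10.0)])
print([round(x, 2) for x in w])  # [3.0, 5.0]
```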
  • the learning unit 29 may also be configured to perform learning using region position information related to the position information of the region of interest, in addition to the teacher data 33.
  • the learning unit 29 stores region position information in advance; for example, as the region position information, the fact that right S4 exists on the outer side of the lung and right S5 exists on the inner side.
  • when the acquisition unit 26 analyzes the interpretation report 23 and acquires the two names “right S4” and “right S5”, the learning unit 29 uses the region position information so that the half of the region of interest toward the lung outer surface is learned as “right S4” and the half toward the lung inner surface is learned as “right S5”.
  • the acquisition unit 26 refers to the hierarchical structure database to acquire the name of the superordinate or subordinate concept, and the learning unit 29 performs learning using teacher data 33 including the image of the region of interest and the name of the superordinate or subordinate concept.
  • the hierarchical structure database may be, for example, a part of a storage device provided in the learning support device 18 or a database connected via the network 21.
  • the hierarchical structure database stores, for the name of a region of interest, the names of the corresponding superordinate or subordinate concepts.
  • the classification of lung lobes and lung areas is stored in the hierarchical structure database, and the hierarchical structure of right lung > upper right lobe > right S1 is stored in order from the superordinate concept to the subordinate concept.
  • the learning unit 29 refers to the hierarchical structure database and acquires “upper right lobe” and “right lung”, which are the names of the upper concepts.
  • the names “upper right lobe” and “right lung” are also learned as teacher data 33 in the same manner as in the above embodiment.
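The hierarchy lookup above can be sketched with a parent-link table. The table layout is an assumption; only the names ("right S1" → "upper right lobe" → "right lung") follow the text.

```python
# Sketch of the hierarchical structure database: each name maps to its
# superordinate concept, so all superordinate names can be collected for
# use as additional teacher-data labels.
HIERARCHY = {"right S1": "upper right lobe", "upper right lobe": "right lung"}

def superordinate_names(name):
    names = []
    while name in HIERARCHY:
        name = HIERARCHY[name]  # climb one level toward the superordinate concept
        names.append(name)
    return names

print(superordinate_names("right S1"))  # ['upper right lobe', 'right lung']
```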
  • the acquisition unit 26 may refer to the similar name database 39 to determine a representative name from a plurality of similar names, and the learning unit 29 may be configured to perform learning using teacher data 33 including the image of the region of interest and the representative name.
  • the similar name database 39 may be, for example, a part of the storage device provided in the learning support device 18 or a database connected via the network 21.
  • the similar name database 39 is a database in which a plurality of similar names that are similar to each other with respect to the name of the region of interest are stored in advance.
  • the similar name database 39 stores a plurality of mutually similar names, such as variant notations of the same area name.
  • the acquisition unit 26 determines “right S3” as the representative name, and the learning unit 29 uses the teacher data 33 including the image of the region of interest and “right S3” as the name to perform learning similar to that of the above embodiment.
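Normalizing similar names to one representative can be sketched as a lookup table. The variant notations shown ("rt. S3", "right lung S3") are hypothetical examples; only the representative "right S3" comes from the text.

```python
# Sketch of the similar name database 39: mutually similar (variant)
# notations map to one representative name so that teacher data uses a
# single consistent label.
SIMILAR_NAMES = {
    "right S3": "right S3",
    "rt. S3": "right S3",        # hypothetical variant notation
    "right lung S3": "right S3", # hypothetical variant notation
}

def representative_name(name):
    # Names not in the database are returned unchanged.
    return SIMILAR_NAMES.get(name, name)

print(representative_name("rt. S3"))  # right S3
```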
  • when an interpretation report 23 is newly stored, the acquisition unit 26 newly acquires the image of the region of interest and the name of the region of interest, and the registration unit 27 registers new teacher data 33 including the image of the region of interest and the name of the region of interest newly acquired by the acquisition unit 26. When the new teacher data 33 is registered, the learning unit 29 preferably generates an updated discrimination model 34 by re-learning using a plurality of teacher data including the new teacher data 33. Thereby, the discrimination accuracy of the discrimination model 34 can be improved.
  • the interpretation report 23 created by the interpretation doctor using the interpretation doctor terminal 12 or the like is taken as an example, but any document, such as an electronic medical record, may be used if it contains images and character information similar to the interpretation report 23 of the above embodiment.
  • in the above embodiment, information about the region of interest is acquired from the interpretation report and registered as teacher data; however, the present invention is not limited to this, and when past interpretation reports that do not include information about the region of interest are stored, teacher data may be registered using such a past interpretation report together with a newly stored interpretation report.
  • when the newly stored interpretation report 23 and a past interpretation report 43 targeting the same patient are stored in the interpretation report database 17, the acquisition unit 26 aligns the examination image 42 of the past interpretation report 43, the newly stored examination image 22 of the interpretation report 23, and the newly acquired image 22A of the region of interest by image processing or the like, and thereby acquires an image 42B of the region of interest included in the past examination image 42. The acquisition unit 26 determines that the interpretation reports 23 and 43 are for the same patient from the patient IDs included in the accompanying information 23A and 43A.
  • the registration unit 27 registers past image teacher data 44 including the image 42B of the region of interest acquired based on the past interpretation report 43 and the newly acquired name of the region of interest, and when the past image teacher data 44 is registered, the learning unit 29 generates an updated discrimination model 34 by re-learning using a plurality of teacher data 33 including the past image teacher data 44. Thereby, since the number of teacher data can be further increased, the discrimination accuracy of the discrimination model 34 can be improved.
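The alignment step above can be illustrated in one dimension. This is only the idea, under strong simplifying assumptions: a pure translation between the past and new image profiles, estimated by minimizing the sum of squared differences over candidate shifts; real alignment would use 2-D/3-D image registration.

```python
# Sketch of aligning a past image with the new image so that the ROI found
# in the new image can be mapped into the past image.
def best_shift(past, new, max_shift=3):
    """Return the shift s minimizing mean squared difference past[i+s] vs new[i]."""
    def msd(shift):
        pairs = [(past[i + shift], new[i])
                 for i in range(len(new)) if 0 <= i + shift < len(past)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=msd)

past = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
new  = [0, 1, 5, 9, 5, 1, 0, 0, 0, 0]  # same intensity profile shifted by one
shift = best_shift(past, new)
roi_in_new = 3                    # ROI index found in the new image
print(shift, roi_in_new + shift)  # 1 4  -> ROI position mapped into the past image
```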
  • in the above embodiment, the acquisition unit 26 uses an anatomical organ name as the name of the region of interest, taking the area names of the lung as an example, but is not limited thereto; as illustrated in the drawings, it may also be applied to the segments of the liver (segment S1, left-lobe segments S2, S3, and S4, right-lobe segments S5, S6, and S7, etc.), or to each area of the brain.
  • the acquisition unit 26 may acquire a disease name and a medical condition as the name of the region of interest.
  • the learning support device 18 is provided separately from the region-of-interest discriminating device 25.
  • the learning support device 18 may be integrated into the region-of-interest discriminating device 25.
  • the hardware structure of the processing units that execute various processes, such as the acquisition unit 26, the registration unit 27, the learning unit 29, and the discrimination unit 35, is, for example, as follows.
  • the CPU is a general-purpose processor that executes programs and functions as the various processing units. Various other processors may be used for all or part of the functions realized by the CPU. More specifically, the hardware structure of these various processors is an electric circuit (circuitry) in which circuit elements such as semiconductor elements are combined.
  • the hardware structure of the storage unit is a storage device such as a hard disk drive (HDD) or a solid state drive (SSD).
  • the present invention extends to a storage medium that stores the program.
  • the present invention can also be applied to fields other than the medical field that uses interpretation reports; any specific information including character information and image information may be used.
  • for example, the present invention may be applied to information using a photographic image (image data) containing character information, or to information using SNS (social networking service) information.
  • a learning support device comprising: a storage unit for storing specific information including image and character information; an acquisition unit for acquiring the image of a region of interest included in the image and the name of the region of interest included in the character information by analyzing the specific information; a registration unit for registering teacher data including the image of the region of interest and the name of the region of interest acquired by the acquisition unit; and a learning unit for generating, using the teacher data registered by the registration unit, a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of the specific information.
  • the specific information includes character information included in a photographic image.
  • the specific information is SNS information.


Abstract

Provided are a learning assisting device, a learning assisting method, a learning assisting program, a region-of-interest discriminating device, a region-of-interest discriminating method, a region-of-interest discriminating program and a learned model for easily acquiring the teacher data required for learning in a medical field from radiogram interpretation reports and generating a discriminant model. A learning assisting device 18 is composed of an acquisition unit 26, a registration unit 27, a storage device 28, a learning unit 29, and a control unit 31. The acquisition unit 26 analyzes a radiogram interpretation report 23 and thereby acquires an image of a region-of-interest and the name of the region of interest. The registration unit 27 registers teacher data comprising the image of the region-of-interest and the name of the region of interest acquired by the acquisition unit 26 in the storage device 28. The learning unit 29 performs, using a plurality of teacher data 33 registered in the storage device 28, learning for generating a discriminant model 34 that outputs the image of the region-of-interest and the name of the region of interest for the input of an inspection image 22 of a radiogram interpretation report 23.

Description

Learning support device, learning support method, learning support program, region-of-interest discrimination device, region-of-interest discrimination method, region-of-interest discrimination program, and learned model
 The present invention relates to a learning support device, a learning support method, a learning support program, a region-of-interest discrimination device, a region-of-interest discrimination method, a region-of-interest discrimination program, and a learned model.
 In the medical field, region names within anatomical organ names are specified and disease names are diagnosed. For example, in Patent Document 1, a plurality of regions are specified for a three-dimensional medical image such as the brain (region segmentation). As region segmentation, semantic segmentation, which interprets an image at the pixel level, may be used, as shown for example in Non-Patent Document 1.
JP 2016-202904 A
 It is becoming common to have machine learning such as deep learning learn the name of a region of interest, such as an anatomical organ name, area name, disease name, or medical condition, together with the image of that region of interest, and then determine the name of the region of interest from its image. In order to determine the name of the region of interest accurately, learning with a large amount of high-quality teacher data suited to the purpose is indispensable in deep learning. It has been considered to acquire teacher data from large amounts of data, such as interpretation reports stored in medical information systems such as PACS (Picture Archiving and Communication System). However, it is not practical to manually identify data useful as teacher data from such a large amount of data.
 An object of the present invention is to provide a learning support device, a learning support method, a learning support program, a region-of-interest discrimination device, a region-of-interest discrimination method, a region-of-interest discrimination program, and a learned model that can easily acquire a large amount of teacher data consisting of images of regions of interest and names of regions of interest when generating a discrimination model for determining the name of a region of interest from its image.
 In order to achieve the above object, the learning support device of the present invention includes a storage unit, an acquisition unit, a registration unit, and a learning unit. The storage unit stores an interpretation report including images and character information. The acquisition unit acquires the image of the region of interest included in the image and the name of the region of interest included in the character information by analyzing the interpretation report. The registration unit registers teacher data consisting of the image of the region of interest and the name of the region of interest acquired by the acquisition unit. The learning unit uses a plurality of teacher data registered in the registration unit to perform learning for generating a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of an interpretation report.
 The acquisition unit preferably acquires the position information of the region of interest by analyzing the interpretation report.
 The interpretation report preferably has link information that associates the finding information included in the character information with the position information of the region of interest included in the image, and the acquisition unit preferably acquires the position information of the region of interest from the link information.
 The interpretation report preferably has annotation information attached around the region of interest, and the acquisition unit preferably acquires the position information of the region of interest from the annotation information.
 The acquisition unit preferably acquires the image of the region of interest and the position information of the region of interest by means of a region-of-interest discrimination unit that discriminates the region of interest from the image of the interpretation report.
 When the learning unit acquires the names of a plurality of regions of interest including a first name and a second name different from the first name by analyzing the interpretation report, it preferably performs learning using first teacher data including the first name as the name of the region of interest and second teacher data including the second name as the name of the region of interest.
 When the names of a plurality of regions of interest are acquired by analyzing the interpretation report, the learning unit preferably performs learning using region position information related to the position information of the regions of interest, in addition to the teacher data.
 The acquisition unit preferably refers to a hierarchical structure database that stores, for the name of a region of interest, the names of the corresponding superordinate or subordinate concepts, and acquires the name of the superordinate or subordinate concept from the name of the region of interest; the learning unit preferably performs learning using teacher data consisting of the image of the region of interest and the name of the superordinate or subordinate concept.
 The acquisition unit preferably refers to a similar name database in which a plurality of mutually similar names for the name of a region of interest are stored in advance, and determines a representative name from the plurality of similar names; the learning unit preferably performs learning using teacher data consisting of the image of the region of interest and the representative name.
 When an interpretation report is newly stored in the storage unit, the acquisition unit preferably newly acquires the image of the region of interest and the name of the region of interest, the registration unit preferably registers new teacher data consisting of the newly acquired image of the region of interest and the name of the region of interest, and, when the new teacher data is registered, the learning unit preferably generates an updated discrimination model by learning again using a plurality of teacher data including the new teacher data.
 When a newly stored interpretation report and a past interpretation report targeting the same patient are stored in the storage unit, the acquisition unit preferably acquires the image of the region of interest included in the image of the past interpretation report by aligning the image of the past interpretation report, the image of the newly stored interpretation report, and the newly acquired image of the region of interest; the registration unit preferably registers past image teacher data consisting of the image of the region of interest acquired based on the past interpretation report and the newly acquired name of the region of interest; and, when the past image teacher data is registered, the learning unit preferably generates an updated discrimination model by learning again using a plurality of teacher data including the past image teacher data.
 The interpretation report preferably includes an electronic medical record.
 The acquisition unit preferably acquires an anatomical organ name, an area name, a disease name, or a medical condition as the name of the region of interest. More specifically, the area name is preferably an area name of the lung, liver, or brain.
 The learning support method of the present invention is a learning support method for a learning support device including a storage unit, an acquisition unit, a registration unit, and a learning unit, and comprises: an acquisition step in which the acquisition unit acquires the image of the region of interest included in the image and the name of the region of interest included in the character information by analyzing the interpretation report; a registration step in which the registration unit registers teacher data consisting of the image of the region of interest and the name of the region of interest acquired by the acquisition unit; and a learning step of performing learning, using a plurality of teacher data, for generating a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of an interpretation report.
 The learning support program of the present invention is a program for causing a computer to function as the storage unit, the acquisition unit, the registration unit, and the learning unit. Another learning support device of the invention is a computer having a memory and a processor: the memory stores an interpretation report including images and character information, and the processor acquires the image of the region of interest included in the image and the name of the region of interest included in the character information by analyzing the interpretation report, registers teacher data consisting of the acquired image of the region of interest and the name of the region of interest, and performs learning, using a plurality of registered teacher data, for generating a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of an interpretation report.
 The region-of-interest discrimination device of the present invention includes a storage unit, an acquisition unit, a registration unit, a learning unit, and a discrimination unit. When an image of an interpretation report is input, the discrimination unit discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
 The region-of-interest discrimination method of the present invention is a method for a region-of-interest discrimination device including a storage unit, an acquisition unit, a registration unit, a learning unit, and a discrimination unit, and includes the acquisition step, the registration step, the learning step, and a discrimination step in which, when an image of an interpretation report is input, the discrimination unit discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
 The region-of-interest discrimination program of the present invention causes a computer to function as the storage unit, the acquisition unit, the registration unit, the learning unit, and a discrimination unit that, when an image of an interpretation report is input, discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
 The learned model of the present invention causes a computer to function as the storage unit, the acquisition unit, the registration unit, the learning unit, and a discrimination unit that, when an image of an interpretation report is input, discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
 According to the present invention, it is possible to provide a learning support device, a learning support method, and a learning support program that easily acquire, from interpretation reports, the teacher data necessary for learning in the medical field, and that generate a discrimination model.
FIG. 1 is a diagram showing the schematic configuration of a medical information system.
FIG. 2 is a diagram showing the schematic configuration of the learning support device of the present invention.
FIG. 3 is an explanatory diagram schematically showing the segments of the lung.
FIG. 4 is an explanatory diagram illustrating a method of acquiring teacher data from an interpretation report and generating learning data.
FIG. 5 is an explanatory diagram illustrating a method of acquiring the name and the position information of a region of interest from an interpretation report.
FIG. 6 is a flowchart for explaining the operation of the learning support device and the region-of-interest discrimination device of the present invention.
FIG. 7 is an explanatory diagram illustrating a modification in which the position information of a region of interest is acquired from annotation information.
FIG. 8 is an explanatory diagram illustrating a modification in which the position information of a region of interest is acquired using region position information.
FIG. 9 is an explanatory diagram illustrating a modification in which the position information of a region of interest is acquired by referring to a similar-name database.
FIG. 10 is an explanatory diagram illustrating a modification in which the position information of a region of interest is acquired from past interpretation reports.
FIG. 11 is an explanatory diagram schematically showing the segments of the liver.
 In FIG. 1, the medical information system 2 comprises a modality 11, an interpretation doctor terminal 12, a clinician terminal 13, an image management server 14, an image database 15, an interpretation report server 16, an interpretation report database 17, a learning support device 18, and an order management server 19. These components are interconnected in a communicable state via a network 21, which is a local network such as a LAN (Local Area Network) laid within the medical facility. When an interpretation doctor terminal 12 is installed in another hospital or clinic, the local networks of the respective hospitals may be connected to one another via the Internet or a dedicated line.
 The medical information system 2 is a system for, based on an examination order issued by a clinician using a known ordering system, imaging and storing the examination target region of a subject, having an interpreting doctor interpret the captured images and create an interpretation report, and allowing the requesting clinician to view the interpretation report and observe the interpreted images in detail.
 An application program for causing each device to function as a component of the medical information system 2 is installed on that device. The application program may be installed from a recording medium such as a CD-ROM, or may be downloaded from the storage device of a server connected via a network such as the Internet and then installed.
 The modality 11 includes a device that images the examination target region of a subject, generates an examination image 22 representing that region, and outputs the image with incidental information defined by the DICOM (Digital Imaging and Communications in Medicine) standard attached. The modality 11 also includes a device that captures, as the examination image 22, an image carrying three-dimensional information of an organ. Specific examples include a CT (Computed Tomography) apparatus 11A, an MRI (Magnetic Resonance Imaging) apparatus 11B, a PET (Positron Emission Tomography) apparatus (not shown), an ultrasound apparatus (not shown), and a CR (Computed Radiography) apparatus 11C using a flat panel X-ray detector (FPD). In the following, the lung is used as an example of the examination target organ that is imaged by the modality 11 to generate images.
 The interpretation doctor terminal 12 is a computer used by an interpreting doctor in the radiology department to interpret images and create interpretation reports. It has a well-known hardware configuration including a CPU (Central Processing Unit), a main storage device, an auxiliary storage device, an input/output interface, a communication interface, an input device, a display device, and a data bus; a well-known operating system is installed; and it has one or more high-definition displays as the display device. On the interpretation doctor terminal 12, processes such as requesting image transmission from the image management server 14, displaying images received from the image management server 14, automatically detecting and highlighting lesion-like portions in an image, and creating and displaying an interpretation report 23 are each performed by executing a software program for that process. The interpretation doctor terminal 12 also transfers a generated interpretation report 23 to the interpretation report server 16 via the network 21 and requests its registration in the interpretation report database 17.
 The clinician terminal 13 is a computer used by doctors in the clinical departments for detailed observation of images, viewing of interpretation reports, and viewing and entry of electronic medical records. It has a well-known hardware configuration including a CPU, a main storage device, an auxiliary storage device, an input/output interface, a communication interface, an input device, a display device, and a data bus; a well-known operating system is installed; and it has one or more high-definition displays as the display device. On the clinician terminal 13, processes such as requesting image viewing from the image management server 14, displaying images received from the image management server 14, automatically detecting or highlighting lesion-like portions in an image, requesting interpretation reports from the interpretation report server 16, and displaying interpretation reports received from the interpretation report server 16 are each performed by executing a software program for that process.
 The image management server 14 is a general-purpose computer into which a software program providing the functions of a database management system (DBMS) is incorporated. The image management server 14 includes large-capacity storage in which the image database 15 is configured. This storage may be a large-capacity hard disk device connected to the image management server 14 by a data bus, or may be a NAS (Network Attached Storage) connected to the network 21 or a disk device connected to a SAN (Storage Area Network).
 In the image database 15, examination images (image data) 22 of a plurality of patients captured by the modality 11 are registered together with incidental information. The incidental information includes, for example, an image ID (identification) for identifying each image, a patient ID for identifying the subject, an examination ID for identifying the examination, a unique ID (UID) assigned to each examination image 22, the examination date and examination time at which the examination image 22 was generated, the type of modality 11 used in the examination that acquired the image, patient information such as the patient's name, age, and sex, the examination region (imaged region), the imaging conditions (such as whether a contrast medium was used and the radiation dose), and a series number when a plurality of tomographic images are acquired in a single examination.
 When the image management server 14 receives a viewing request from the interpretation doctor terminal 12 via the network 21, it searches the examination images registered in the image database 15 and transmits the extracted examination image to the requesting interpretation doctor terminal 12.
 The interpretation report server 16 is a general-purpose computer into which a software program providing the functions of a database management system (DBMS) is incorporated. When it receives a registration request for an interpretation report 23 from the interpretation doctor terminal 12, it formats the interpretation report 23 for the database and registers it in the interpretation report database 17.
 In the interpretation report database 17, interpretation reports 23 are registered in which information such as an image ID identifying the interpretation target image or a representative image, an interpreting doctor ID identifying the doctor who performed the interpretation, a lesion name, lesion position information, findings, and the degree of confidence in the findings is recorded.
 The order management server 19 receives examination orders issued from the clinician terminal 13 and manages the received examination orders. An examination order has various items, for example an order ID for identifying each examination order, the terminal ID of the clinician terminal 13 that issued the order or the clinician ID, the patient ID of the patient to be imaged under the order (hereinafter, the target patient), the purpose of the examination such as follow-up observation, the imaging region such as the head or chest, and the orientation such as supine or prone. A radiology technologist confirms the contents of the examination order on the order management server 19, sets imaging conditions corresponding to the confirmed order on the modality 11, and captures the medical image.
 Next, the region-of-interest discrimination device 25 of the present invention is described with reference to FIGS. 1 and 2. The region-of-interest discrimination device 25 is incorporated in, for example, the interpretation doctor terminal 12, and discriminates, in an examination image 22, the images of the organ segments or lesion-like portions, together with the names of the organ segments and regions of interest. Based on the discrimination results, the interpretation doctor terminal 12 performs color-coded display of the organ segments, highlighting of the regions of interest, and the like.
 FIG. 2 is a functional block diagram of the learning support device 18 constituting the region-of-interest discrimination device 25. The region-of-interest discrimination device 25 of the present invention is used together with the learning support device 18 connected to the network 21 and the interpretation report database 17 (see FIG. 1). The interpretation report database 17 functions as the storage unit of the present invention.
 The learning support device 18 is composed of a general-purpose computer and has a well-known hardware configuration including a CPU, a main storage device such as an HDD (hard disk drive) or SSD (solid state drive), an auxiliary storage device, an input/output interface, a communication interface, an input device, a display device, and a data bus, with a well-known operating system installed. Via the communication interface, it exchanges data with the image database 15 and the interpretation report database 17 connected to the network 21.
 In the present embodiment, the learning support device 18 is provided independently of the interpretation doctor terminal 12, the clinician terminal 13, the image management server 14, and the interpretation report server 16, but the invention is not limited to this; the learning support device 18 may be provided in any one of these servers or terminals.
 As shown in FIG. 2, the learning support device 18 is composed of an acquisition unit 26, a registration unit 27, a storage device 28, a learning unit 29, and a control unit 31.
 The acquisition unit 26 analyzes an interpretation report 23 to acquire the image of the region of interest included in the examination image 22 of the interpretation report 23 and the name of the region of interest included in the character information of the interpretation report 23. In the present embodiment, a lung-segment name is acquired as the name of the region of interest.
 As shown in FIG. 3, the lung as an anatomical organ is divided into lung lobes and, further, into lung segments. The right lung RL is divided into the right upper lobe RUL, the right middle lobe RML, and the right lower lobe RLL; the left lung LL is divided into the left upper lobe LUL and the left lower lobe LLL.
 The right upper lobe RUL is divided into the right apical segment S1 (hereinafter abbreviated as right S1; the following lung segments are abbreviated likewise), the right posterior segment S2, and the right anterior segment S3. The right middle lobe RML is divided into the right lateral segment S4 and the right medial segment S5. The right lower lobe RLL is divided into the right superior segment S6, the right medial basal segment S7, the right anterior basal segment S8, the right lateral basal segment S9, and the right posterior basal segment S10.
 The left upper lobe LUL is divided into the left apicoposterior segment S1+2, the left anterior segment S3, the left superior lingular segment S4, and the left inferior lingular segment S5. The left lower lobe LLL is divided into the left superior segment S6, the left anterior basal segment S8, the left lateral basal segment S9, and the left posterior basal segment S10.
 The control unit 31 controls the flow of processing of the acquisition unit 26, the registration unit 27, and the learning unit 29. The process by which the acquisition unit 26 acquires the image of the region of interest included in the examination image 22 of the interpretation report 23 and the name of the region of interest included in the character information of the interpretation report 23 is described with reference to FIGS. 4 and 5.
 The interpretation report 23 contains the examination image 22 to be interpreted, incidental information 23A, finding information 23B, and link information 23C. The incidental information 23A is character information attached to the examination image 22 to be interpreted, such as the patient ID, examination ID, and examination date. The finding information 23B is the edited findings of the interpreting doctor who interpreted the examination image 22, entered as character information on the interpretation doctor terminal 12. The link information 23C is used when the interpretation report 23 is displayed on a display, as described later, and associates the finding information 23B with the position information of the region of interest included in the examination image 22.
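As a minimal sketch, the report structure described above (examination image, incidental information 23A, finding information 23B, and link information 23C tying a finding phrase to coordinates in the image) could be modeled as plain data containers. All class and field names here are illustrative assumptions, not part of the disclosed system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LinkInfo:
    # Link information (23C): associates a finding phrase with a position
    # in the examination image.
    phrase: str                   # e.g. "肺結節" (lung nodule)
    center: Tuple[int, int, int]  # (X, Y, Z) coordinates in the image
    diameter_mm: float            # extent of the region of interest

@dataclass
class InterpretationReport:
    # Incidental information (23A) plus finding text (23B) and links (23C).
    patient_id: str
    exam_id: str
    exam_date: str
    finding_text: str
    links: List[LinkInfo] = field(default_factory=list)

report = InterpretationReport(
    patient_id="P001", exam_id="E100", exam_date="2018-03-01",
    finding_text="右S1にφ=30mmの境界明瞭な肺結節が認められる。",
    links=[LinkInfo("肺結節", (120, 85, 42), 30.0)],
)
```

A downstream acquisition step would then read `finding_text` for the segment name and `links` for the coordinates and diameter.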
 FIG. 5 shows an example of the display screen 32 when the interpretation report 23 is displayed on the display of the interpretation doctor terminal 12 or the clinician terminal 13. In this example, the screen has, from top to bottom, an incidental information display field 32A in which the incidental information 23A is displayed, a findings field 32B in which the finding information 23B is displayed, and an image display field 32C in which a thumbnail of the examination image 22 to be interpreted is displayed.
 In the example shown in FIG. 5, the findings field 32B displays the finding information 23B reading "A well-demarcated lung nodule of φ=30 mm is observed in the right S1." Here, "lung nodule," indicating the lesion name of the region of interest, "right S1," the lung-segment name of the region of interest, and "φ=30 mm," indicating the diameter of the region of interest, are highlighted.
 Further, in the present embodiment, the interpretation report 23 includes link information 23C, which associates the phrase "lung nodule," indicating the lesion name in the finding information 23B, with the position information of the region of interest included in the examination image 22. Specifically, the position information of the region of interest consists of the coordinates of the region of interest within the examination image 22 and a range centered on those coordinates. Owing to this link information 23C, when the highlighted phrase "lung nodule" is selected on the display screen 32 by operating the interpretation doctor terminal 12 or the clinician terminal 13, the image 22A of the region of interest can be displayed on the display based on the associated position information. In the example shown in FIG. 5, when "lung nodule" is selected, a range of the examination image 22 containing the region of interest of diameter φ=30 mm centered on the coordinates (X, Y, Z) (the encircled portion) is cut out, and an image 32D enlarged around the position information of the region of interest included in the link information 23C is displayed.
 In the example shown in FIGS. 4 and 5, the acquisition unit 26 analyzes the text of the finding information 23B of the interpretation report 23 to acquire "right S1," the lung-segment name, as the name of the region of interest, and acquires from the link information 23C of the interpretation report 23 the position information consisting of the coordinates (X, Y, Z) of the region of interest within the examination image 22 and "φ=30 mm," the diameter of the region of interest.
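The text-analysis part of this acquisition step can be sketched with simple pattern matching. This is only an illustration under assumed patterns; an actual implementation would rely on a medical-term dictionary and more robust natural-language analysis rather than two regular expressions.

```python
import re

# Assumed patterns: lung-segment names like "右S1" / "左S1+2", and a
# diameter written as "φ=30mm".
SEGMENT = re.compile(r"(右|左)S\d+(\+\d+)?")
DIAMETER = re.compile(r"φ\s*=\s*(\d+(?:\.\d+)?)\s*mm")

def parse_finding(text: str):
    """Return (segment name, diameter in mm) found in a finding sentence."""
    seg = SEGMENT.search(text)
    dia = DIAMETER.search(text)
    return (
        seg.group(0) if seg else None,
        float(dia.group(1)) if dia else None,
    )

name, diameter = parse_finding("右S1にφ=30mmの境界明瞭な肺結節が認められる。")
# name == "右S1", diameter == 30.0
```

The coordinates (X, Y, Z) would come not from the text but from the link information 23C, as described above.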
 The registration unit 27 registers, in the storage device 28, teacher data 33 consisting of the image 22A of the region of interest acquired by the acquisition unit 26 and the name of the region of interest. In the example shown in FIG. 4, the teacher data 33 consisting of the image 22A of the region of interest, the lung-segment name "right S1," and the position information (the coordinates (X, Y, Z) and the diameter φ=30 mm) is registered. The storage device 28 may be, for example, part of a storage device such as an HDD (hard disk drive) or SSD (solid state drive) provided in the learning support device 18, or may be a storage device connected via the network 21.
 Through the above process, the registration unit 27 registers a plurality of sets of teacher data 33. The acquisition of the image 22A of the region of interest and the name of the region of interest from interpretation reports 23 and the registration of teacher data 33 are repeated, for example, until a predetermined number of sets of teacher data 33 have been registered in the storage device 28 for the machine learning described later, or until teacher data 33 based on all the interpretation reports 23 registered in the interpretation report database 17 have been registered.
 The learning unit 29 uses a plurality of sets of the teacher data 33 registered in the storage device 28 to perform learning for generating a discrimination model 34 that outputs the image 22A of a region of interest and the name of the region of interest in response to input of the examination image 22 of an interpretation report 23. Specifically, the discrimination model 34 is generated using a machine learning method such as deep learning. For example, a plurality of sets of teacher data 33 are input, and a machine learning algorithm is made to learn the relationship between the position information of the region of interest and the feature values (pixel values and the like) of each voxel. Concretely, the weighting coefficients used by the machine learning algorithm are updated so as to minimize the error between the position information obtained when the feature values around the region of interest are input and the position information of the region of interest in the teacher data 33.
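The weight-update rule described above can be illustrated with a deliberately simplified stand-in: a linear model fitted by gradient descent so that the position predicted from a feature vector approaches the teacher position. A real discrimination model 34 would be a deep network over voxel features; the synthetic data and dimensions here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))   # per-sample feature vectors (stand-in for voxel features)
W_true = rng.normal(size=(8, 3))
Y = X @ W_true                 # teacher positions (X, Y, Z) from the teacher data

W = np.zeros((8, 3))           # learnable weighting coefficients
lr = 0.01
for _ in range(5000):
    err = X @ W - Y            # error between predicted and teacher position
    W -= lr * X.T @ err / len(X)   # update to reduce the squared error
```

After the loop, `X @ W` closely reproduces the teacher positions `Y`, which is the sense in which the error is "minimized" in the learning step.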
 The discrimination model 34 generated by the learning unit 29 as described above is transmitted to the region-of-interest discrimination device 25. When an examination image 22, such as one from an interpretation report 23, is input, the discrimination unit 35 of the region-of-interest discrimination device 25 uses the discrimination model 34 to output the image 22A of the region of interest in the examination image 22 and the name of the region of interest.
 The discrimination model 34 contains the weighting coefficients determined by the machine learning method described above and, when an examination image 22 is input, is used to discriminate the image 22A of the region of interest and the name of the region of interest.
 The discrimination unit 35 discriminates the image of the region of interest and the name of the region of interest in the examination image using the discrimination model 34, and displays, for example, the discriminated image 36 of the region of interest and the name 37 of the region of interest on the display of the region-of-interest discrimination device 25.
 The process in which the discrimination model 34 is generated from interpretation reports 23 containing lung examination images 22 and in which the image and name of a region of interest are discriminated is described below with reference to the flowchart of FIG. 6.
 The learning support device 18 first reads an interpretation report 23 concerning lung disease from the interpretation report database 17 (S101).
 Next, the acquisition unit 26 acquires the image of the region of interest included in the examination image 22 of the interpretation report 23 and the name of the region of interest included in the character information of the interpretation report 23 (S102), and the registration unit 27 registers, in the storage device 28, teacher data 33 consisting of the acquired image of the region of interest and the name of the region of interest (S103). As described above, when the interpretation report 23 contains the finding information 23B reading "A well-demarcated lung nodule of φ=30 mm is observed in the right S1." together with link information 23C, the lung-segment name "right S1" of the region of interest and the position information of the region of interest included in the link information 23C are acquired.
 Then, once teacher data 33 for machine learning or the like has been registered in the storage device 28, the learning unit 29 performs learning using a plurality of registered sets of teacher data 33 to generate the discrimination model 34 (S104). The generated discrimination model 34 is transmitted to the region-of-interest discriminating apparatus 25.
 When a new examination image 22 is input to the region-of-interest discriminating apparatus 25 (YES in S105), the discriminating unit 35 uses the discrimination model 34 to output the image of the region of interest and the name of the region of interest in the examination image 22, for example as a color-coded display of lung segments or a display of lung segment names (S106).
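The flow S102–S106 can be sketched end to end as follows. The nearest-template matcher below is a deliberately minimal stand-in for the learned discrimination model 34; the real apparatus would train a statistical model rather than compare raw pixels.

```python
class LearningSupportSketch:
    """Minimal sketch of S102-S106: register teacher data, then use it to
    name the region in a new image. Nearest-template matching (sum of
    absolute pixel differences) stands in for the real discrimination model."""

    def __init__(self):
        self.teacher_data = []  # list of (roi_image, roi_name) pairs (S103)

    def register(self, roi_image, roi_name):
        self.teacher_data.append((roi_image, roi_name))

    def discriminate(self, image):
        # S106: return the name attached to the closest registered template.
        def distance(template):
            return sum(abs(a - b) for a, b in zip(template, image))
        _best_image, best_name = min(self.teacher_data,
                                     key=lambda t: distance(t[0]))
        return best_name
```

For example, after registering two flattened ROI patches, a new patch is assigned the name of the nearest one.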
 As described above, the learning support apparatus 18 performs learning based on teacher data consisting of region-of-interest images and region-of-interest names acquired from interpretation reports 23. Teacher data can therefore be obtained from the interpretation reports that have conventionally been used in medical information systems, so the teacher data required for learning can be collected easily. Furthermore, because the discrimination model 34 is generated from interpretation reports in which correct information has been recorded, the discrimination accuracy can be improved easily.
 In the present embodiment, the region-of-interest discriminating apparatus 25 is illustrated as outputting "right S1" as the region of interest, but the present invention is not limited to this. A discrimination model may be created that discriminates a plurality of regions of interest simultaneously, for example one in which a certain range of voxels in the input examination image is labeled "right S1" while another range of voxels is labeled "right S2". Such a discrimination model may also be applied to the region-of-interest discriminating apparatus so that a plurality of regions of interest can be discriminated simultaneously.
 In the embodiment above, the position information of the region of interest is acquired from the link information 23C included in the interpretation report 23, but the present invention is not limited to this. When the interpretation report 23 contains annotation information, the acquisition unit 26 may acquire the position information of the region of interest from the annotation information. Annotation information is information attached to image data or the like as an annotation; in the example shown in Fig. 7, an arrow 38 serving as annotation information is included in the examination image 22 and is attached around the region of interest. In this case, the acquisition unit 26 acquires, for example, the coordinates of the tip of the arrow 38 within the examination image 22 as the position information.
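Using the arrow-tip coordinate as position information might look like the following sketch, which crops a square patch of the examination image around the tip; the patch size and the border-clamping behavior are assumptions, not specified in the text.

```python
def crop_around_tip(image, tip_xy, half=1):
    """Crop a (2*half+1)-square patch of `image` (a list of pixel rows)
    centred on the annotation arrow-tip coordinate, clamped at the borders."""
    x, y = tip_xy
    top, bottom = max(0, y - half), min(len(image), y + half + 1)
    left, right = max(0, x - half), min(len(image[0]), x + half + 1)
    return [row[left:right] for row in image[top:bottom]]
```

The cropped patch would then serve as the region-of-interest image of the teacher data.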
 The acquisition unit 26 may also include a region-of-interest discriminating unit that discriminates the region of interest from the examination image 22 of the interpretation report 23. In this case, the region-of-interest discriminating unit has the same configuration as the region-of-interest discriminating apparatus 25 of the embodiment above: it discriminates the region of interest from the examination image 22 of the interpretation report 23 using the already-generated discrimination model 34, and registers the result as new teacher data.
 As another modification, when the acquisition unit 26 analyzes the interpretation report 23 and acquires the names of a plurality of regions of interest, including a first name and a second name different from the first name, the learning unit 29 performs learning using first teacher data containing the first name as the region-of-interest name and second teacher data containing the second name as the region-of-interest name. For example, when the two names "right S1" and "right S2" are acquired from text information in the interpretation report 23 such as "A lung nodule is observed in the right S1/right S2 ...", the registration unit 27 registers teacher data 33 using the name "right S1" and teacher data 33 using the name "right S2".
 The learning unit 29 updates the learning weighting coefficients used by the machine learning algorithm so that both the position-information error when the teacher data 33 with the name "right S1" is input and the position-information error when the teacher data 33 with the name "right S2" is input become small. This improves the accuracy of learning.
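The joint update over both names can be sketched with a deliberately tiny one-parameter model: the predicted position is w·x, and a single gradient step reduces the summed squared position error of both teacher samples at once. The model form and the learning rate are illustrative assumptions, not the patent's algorithm.

```python
def joint_update(w, samples, lr=0.1):
    """One gradient step on the summed squared position error over all
    teacher samples (name, feature, target_position), so that the errors
    for e.g. "right S1" and "right S2" shrink together.
    Model: predicted position = w * feature (a minimal stand-in)."""
    grad = sum(2 * (w * x - t) * x for _name, x, t in samples)
    return w - lr * grad
```

With two consistent samples, one step already drives both position errors down jointly rather than fitting either name alone.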
 When the acquisition unit 26 acquires the names of a plurality of regions of interest by analyzing the interpretation report 23, the learning unit 29 may also perform learning using region position information concerning the positions of the regions of interest, in addition to the teacher data 33. In this case, as shown in Fig. 8, the learning unit 29 stores region position information in advance, for example that the right S4 lies on the lateral side of the lung and the right S5 on the medial side. When the acquisition unit 26 acquires the two names "right S4" and "right S5" by analyzing the interpretation report 23, the learning unit 29 uses this region position information to learn the lateral half of the region of interest as "right S4" and the medial half as "right S5".
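The split of one acquired region into "right S4" and "right S5" using the stored region position information could be sketched like this. It assumes a coordinate convention in which larger x means closer to the lateral (outer) lung surface; that convention, and the simple midline split, are illustrative assumptions.

```python
def assign_segment(voxel_x, region_min_x, region_max_x):
    """Label a voxel inside the acquired region: the lateral half becomes
    "right S4", the medial half "right S5", encoding the stored prior
    that right S4 lies laterally and right S5 medially (Fig. 8)."""
    midline = (region_min_x + region_max_x) / 2
    return "right S4" if voxel_x >= midline else "right S5"
```

Each half of the region then contributes separately labeled teacher data.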
 The acquisition unit 26 may also refer to a hierarchical structure database to acquire the names of superordinate or subordinate concepts, and the learning unit 29 may perform learning using teacher data 33 consisting of the region-of-interest image and the superordinate- or subordinate-concept name. The hierarchical structure database may, for example, use part of a storage device provided in the learning support apparatus 18, or may be a database connected via the network 21. The hierarchical structure database stores, for each region-of-interest name, the names of the corresponding superordinate or subordinate concepts. For the lung, for example, the division into lobes and segments is stored, giving the hierarchy right lung > right upper lobe > right S1 in order from superordinate to subordinate.
 In this case, when the acquisition unit 26 acquires the name "right S1", for example, the learning unit 29 refers to the hierarchical structure database to obtain the superordinate names "right upper lobe" and "right lung", and learning is performed as in the embodiment above with the names "right upper lobe" and "right lung" also used as teacher data 33 in addition to "right S1".
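Expanding a segment name with its superordinate names via the hierarchical structure database can be sketched with a plain parent map; the dictionary below holds only the entries from the example and is not a complete anatomical hierarchy.

```python
# Toy hierarchical-structure database: child name -> superordinate name.
# Only the example entries (right lung > right upper lobe > right S1).
HIERARCHY = {
    "right S1": "right upper lobe",
    "right upper lobe": "right lung",
}

def expand_with_superordinates(name):
    """Return the name plus every superordinate name reachable in the DB,
    so each can be registered as additional teacher data."""
    names = [name]
    while names[-1] in HIERARCHY:
        names.append(HIERARCHY[names[-1]])
    return names
```

A name with no database entry simply yields itself, so unknown segments still produce one teacher-data label.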
 The acquisition unit 26 may also refer to a similar-name database 39 to determine a representative name from a plurality of similar names, and the learning unit 29 may perform learning using teacher data 33 consisting of the region-of-interest image and the representative name. As shown in Fig. 9, the similar-name database 39 may, for example, use part of a storage device provided in the learning support apparatus 18, or may be a database connected via the network 21. The similar-name database 39 stores in advance, for each region-of-interest name, a plurality of mutually similar names; for example, the similar segment names "right S3" and "right lung apex segment" are stored. In this example, the acquisition unit 26 determines "right S3" as the representative name, and the learning unit 29 performs learning as in the embodiment above using teacher data 33 consisting of the region-of-interest image and the name "right S3".
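Determining a representative name from the similar-name database 39 can be sketched as a lookup table mapping every known alias to its representative; the single group below is taken from the example in the text and is not a full synonym set.

```python
# Toy similar-name database 39: alias -> representative name.
# Group membership is taken from the example in the text only.
SIMILAR_NAMES = {
    "right S3": "right S3",
    "right lung apex segment": "right S3",
}

def representative_name(name):
    """Return the representative name for a (possibly aliased) segment name;
    names not in the database pass through unchanged."""
    return SIMILAR_NAMES.get(name, name)
```

Canonicalizing before registration keeps all teacher data for one anatomical region under a single label.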
 When an interpretation report 23 is newly stored in the interpretation report database 17, it is preferable that the acquisition unit 26 newly acquire a region-of-interest image and region-of-interest name, that the registration unit 27 register new teacher data 33 consisting of the newly acquired image and name, and that the learning unit 29, when the new teacher data 33 has been registered, perform learning again using a plurality of sets of teacher data including the new teacher data 33 to generate an updated discrimination model 34. This improves the discrimination accuracy of the discrimination model 34.
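The re-learning trigger can be sketched as follows: each newly stored report contributes one teacher sample, and the model is rebuilt from the full accumulated set. The "model" here is just a snapshot with a version counter; actual retraining of the discrimination model 34 is outside the scope of this sketch.

```python
class IncrementalTrainer:
    """Sketch of update-on-new-report: registering new teacher data
    triggers re-learning over all teacher data, old and new."""

    def __init__(self):
        self.teacher_data = []
        self.model = None
        self.model_version = 0

    def on_new_report(self, roi_image, roi_name):
        self.teacher_data.append((roi_image, roi_name))
        self._retrain()

    def _retrain(self):
        # Stand-in for learning: snapshot the full teacher-data set.
        self.model = list(self.teacher_data)
        self.model_version += 1
```

Every registration thus yields a model trained on the complete data set rather than on the new sample alone.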
 The embodiment above takes as an example an interpretation report 23 created by a radiologist on the radiologist terminal 12 or the like, but any document containing images and text information similar to the interpretation report 23 of the embodiment, such as an electronic medical record, may be used.
 In the embodiment above, information about the region of interest is acquired from an interpretation report and registered as teacher data, but the invention is not limited to this. When a past interpretation report containing no information about the region of interest is stored, teacher data may be registered using that past report together with a newly stored report. In this case, as shown in Fig. 10, when a newly stored interpretation report 23 and a past interpretation report 43 for the same patient are stored in the interpretation report database 17, the acquisition unit 26 aligns the examination image 42 of the past interpretation report 43, the examination image 22 of the newly stored interpretation report 23, and the newly acquired region-of-interest image 22A by image processing or the like, and thereby acquires the region-of-interest image 42B included in the examination image 42 of the past interpretation report 43. The acquisition unit 26 determines that the interpretation reports 23 and 43 concern the same patient from, for example, the patient IDs contained in the supplementary information 23A and 43A.
 The registration unit 27 registers past-image teacher data 44 consisting of the region-of-interest image 42B acquired from the past interpretation report 43 and the newly acquired region-of-interest name, and the learning unit 29, when the past-image teacher data 44 has been registered, performs learning again using a plurality of sets of teacher data 33 including the past-image teacher data 44 to generate an updated discrimination model 34. Because this further increases the amount of teacher data, the discrimination accuracy of the discrimination model 34 can be improved.
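The alignment that carries the newly found region over to the past examination image can be sketched in one dimension: search for the translation that minimizes the mean absolute difference between the two images, then map the ROI position through it. Real registration would be 2-D or 3-D and possibly deformable; the brute-force shift search is an illustrative stand-in.

```python
def best_shift(past, current, max_shift=2):
    """Return the integer shift dx minimizing the mean absolute difference
    between past[i + dx] and current[i] over the overlapping samples."""
    def score(dx):
        pairs = [(past[i + dx], current[i])
                 for i in range(len(current)) if 0 <= i + dx < len(past)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=score)

def transfer_roi_position(roi_index, dx):
    """Map an ROI position in the current image into the past image."""
    return roi_index + dx
```

The transferred position, paired with the newly acquired name, is what forms the past-image teacher data 44.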
 The acquisition unit 26 has been described as acquiring anatomical organ names and, as segment names, those of the lung as region-of-interest names, but the invention is not limited to this. As shown in Fig. 11, it may be applied to the segments of the liver (caudate lobe S1; left lobe S2, S3, S4; right lobe S5, S6, S7; and so on) or to the regions of the brain. Alternatively, the acquisition unit 26 may acquire a disease name or a medical condition as the region of interest.
 In the embodiment above, the learning support apparatus 18 is provided separately from the region-of-interest discriminating apparatus 25, but the two may be configured integrally, with the learning support apparatus 18 incorporated into the region-of-interest discriminating apparatus 25.
 In each of the embodiments above, the hardware structure of the processing units that execute the various processes, such as the acquisition unit 26, the registration unit 27, the learning unit 29, and the discriminating unit 35, is, for example, a CPU, a general-purpose processor that executes software programs to function as the various processing units, as described above. Various other processors may be used in place of all or part of the functions realized by the CPU. More specifically, the hardware structure of these various processors is electric circuitry combining circuit elements such as semiconductor elements. The hardware structure of the storage unit is a storage device such as an HDD (hard disk drive) or SSD (solid state drive).
 The various embodiments and modifications described above may be combined as appropriate. In addition to the program itself, the present invention also extends to a storage medium storing the program.
 The present invention is also applicable in fields other than the medical field, in which interpretation reports are used. Any specific information containing both text information and image information may be used; for example, the invention can be applied to photographic images (image data) that contain text information, or to SNS (social networking service) information.
 [Additional Item 1]
 A learning support apparatus comprising:
 a storage unit that stores specific information including an image and text information;
 an acquisition unit that analyzes the text information to acquire an image of a region of interest included in the image and a name of the region of interest included in the text information;
 a registration unit that registers teacher data consisting of the region-of-interest image and region-of-interest name acquired by the acquisition unit; and
 a learning unit that generates, using the teacher data registered in the registration unit, a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of the specific information.
 [Additional Item 2]
 The specific information is a photographic image containing text information.
 [Additional Item 3]
 The specific information is SNS information.
 2 medical information system
 11 modality
 11A CT apparatus
 11B MRI apparatus
 11C CR apparatus
 12 radiologist terminal
 13 physician terminal
 14 image management server
 15 image database
 16 interpretation report server
 17 interpretation report database
 18 learning support apparatus
 19 order management server
 21 network
 22 examination image
 22A image
 23 interpretation report
 23A supplementary information
 23B finding information
 23C link information
 25 region-of-interest discriminating apparatus
 26 acquisition unit
 27 registration unit
 28 storage device
 29 learning unit
 31 control unit
 32 display screen
 32A supplementary information display field
 32B findings field
 32C image display field
 32D image
 33 teacher data
 34 discrimination model
 35 discriminating unit
 36 image
 37 name
 38 arrow
 39 similar-name database
 42 examination image
 42B image
 43 interpretation report
 43A supplementary information
 44 past-image teacher data

Claims (19)

  1. A learning support apparatus comprising:
     a storage unit that stores an interpretation report including an image and text information;
     an acquisition unit that analyzes the interpretation report to acquire an image of a region of interest included in the image and a name of the region of interest included in the text information;
     a registration unit that registers teacher data consisting of the region-of-interest image and region-of-interest name acquired by the acquisition unit; and
     a learning unit that performs learning using a plurality of sets of the teacher data registered in the registration unit to generate a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of the interpretation report.
  2. The learning support apparatus according to claim 1, wherein the acquisition unit acquires position information of the region of interest by analyzing the interpretation report.
  3. The learning support apparatus according to claim 2, wherein the interpretation report has link information that associates finding information included in the text information with position information of the region of interest included in the image, and
     the acquisition unit acquires the position information of the region of interest from the link information.
  4. The learning support apparatus according to claim 2, wherein the interpretation report has annotation information attached around the region of interest, and
     the acquisition unit acquires the position information of the region of interest from the annotation information.
  5. The learning support apparatus according to claim 2, wherein the acquisition unit acquires the image and position information of the region of interest by means of a region-of-interest discriminating unit that discriminates the region of interest from the image of the interpretation report.
  6. The learning support apparatus according to any one of claims 1 to 5, wherein, when the names of a plurality of regions of interest including a first name and a second name different from the first name are acquired by analyzing the interpretation report, the learning unit performs the learning using first teacher data containing the first name as the region-of-interest name and second teacher data containing the second name as the region-of-interest name.
  7. The learning support apparatus according to any one of claims 1 to 5, wherein, when the names of a plurality of regions of interest are acquired by analyzing the interpretation report, the learning unit performs the learning using region position information concerning the position information of the regions of interest, in addition to the teacher data.
  8. The learning support apparatus according to any one of claims 1 to 5, wherein the acquisition unit refers to a hierarchical structure database that stores, for the region-of-interest name, the names of corresponding superordinate or subordinate concepts, and acquires the superordinate- or subordinate-concept name from the region-of-interest name, and
     the learning unit performs the learning using teacher data consisting of the region-of-interest image and the superordinate- or subordinate-concept name.
  9. The learning support apparatus according to any one of claims 1 to 5, wherein the acquisition unit refers to a similar-name database that stores in advance a plurality of mutually similar names for the region-of-interest name, and determines a representative name from the plurality of similar names, and
     the learning unit performs the learning using teacher data consisting of the region-of-interest image and the representative name.
  10. The learning support apparatus according to any one of claims 1 to 9, wherein, when an interpretation report is newly stored in the storage unit, the acquisition unit newly acquires an image and a name of the region of interest,
     the registration unit registers new teacher data consisting of the newly acquired region-of-interest image and region-of-interest name, and
     the learning unit, when the new teacher data has been registered, performs learning again using a plurality of sets of the teacher data including the new teacher data to generate an updated discrimination model.
  11. The learning support apparatus according to claim 10, wherein, when the newly stored interpretation report and a past interpretation report for the same patient are stored in the storage unit, the acquisition unit aligns the image of the past interpretation report, the image of the newly stored interpretation report, and the newly acquired region-of-interest image, and thereby acquires the region-of-interest image included in the image of the past interpretation report,
     the registration unit registers past-image teacher data consisting of the region-of-interest image acquired on the basis of the past interpretation report and the newly acquired region-of-interest name, and
     the learning unit, when the past-image teacher data has been registered, performs learning again using a plurality of sets of the teacher data including the past-image teacher data to generate an updated discrimination model.
  12. The learning support apparatus according to any one of claims 1 to 11, wherein the interpretation report includes an electronic medical record.
  13. The learning support apparatus according to any one of claims 1 to 12, wherein the acquisition unit acquires an anatomical organ name, a segment name, a disease name, or a medical condition as the region-of-interest name.
  14. A learning support method for a learning support apparatus comprising a storage unit that stores an interpretation report including an image and text information, an acquisition unit, a registration unit, and a learning unit, the method comprising:
     an acquisition step in which the acquisition unit analyzes the interpretation report to acquire an image of a region of interest included in the image and a name of the region of interest included in the text information;
     a registration step in which the registration unit registers teacher data consisting of the region-of-interest image and region-of-interest name acquired by the acquisition unit; and
     a learning step of performing learning using a plurality of sets of the teacher data to generate a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of the interpretation report.
  15. A learning support program for causing a computer to function as:
     a storage unit that stores an interpretation report including an image and text information;
     an acquisition unit that analyzes the interpretation report to acquire an image of a region of interest included in the image and a name of the region of interest included in the text information;
     a registration unit that registers teacher data consisting of the region-of-interest image and region-of-interest name acquired by the acquisition unit; and
     a learning unit that performs learning using a plurality of sets of the teacher data registered in the registration unit to generate a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of the interpretation report.
  16. A region-of-interest discriminating apparatus comprising:
     a storage unit that stores an interpretation report including an image and text information;
     an acquisition unit that analyzes the interpretation report to acquire an image of a region of interest included in the image and a name of the region of interest included in the text information;
     a registration unit that registers teacher data consisting of the region-of-interest image and region-of-interest name acquired by the acquisition unit;
     a learning unit that performs learning using a plurality of sets of the teacher data registered in the registration unit to generate a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of the image of the interpretation report; and
     a discriminating unit that, when an image of the interpretation report is input, discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
  17.  A region-of-interest discrimination method for a region-of-interest discrimination apparatus comprising a storage unit that stores an interpretation report including an image and character information, an acquisition unit, a registration unit, a learning unit, and a discrimination unit, the method comprising:
     an acquisition step in which the acquisition unit acquires, by analyzing the interpretation report, an image of a region of interest included in the image and a name of the region of interest included in the character information;
     a registration step in which the registration unit registers teacher data consisting of the image of the region of interest and the name of the region of interest acquired by the acquisition unit;
     a learning step in which the learning unit performs learning, using a plurality of items of the teacher data, to generate a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of an image of the interpretation report; and
     a discrimination step in which the discrimination unit, in a case where an image of the interpretation report is input, discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
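The learning and discrimination steps in the method claim above can be made concrete with a deliberately toy model. The patent does not specify any model architecture, so the sketch below assumes a nearest-centroid classifier over mean pixel intensity purely for illustration; all function names are invented:

```python
from statistics import mean
from typing import Dict, List, Tuple

# Toy "learning step" and "discrimination step". The discrimination model
# here is a nearest-centroid classifier over mean pixel intensity --
# an assumption made for this sketch, not the patented implementation.

def flatten(img: List[List[float]]) -> List[float]:
    return [pixel for row in img for pixel in row]

def learn(teacher: List[Tuple[List[List[float]], str]]) -> Dict[str, float]:
    """Learning step: compute one intensity centroid per region name
    from a plurality of items of teacher data."""
    grouped: Dict[str, List[float]] = {}
    for img, name in teacher:
        grouped.setdefault(name, []).append(mean(flatten(img)))
    return {name: mean(vals) for name, vals in grouped.items()}

def discriminate(model: Dict[str, float], img: List[List[float]]) -> str:
    """Discrimination step: return the name whose centroid is nearest
    to the input image's mean intensity."""
    x = mean(flatten(img))
    return min(model, key=lambda name: abs(model[name] - x))

# Two teacher data items (ROI image, ROI name) -> learned model -> inference.
teacher = [([[0.1, 0.2], [0.1, 0.2]], "liver cyst"),
           ([[0.8, 0.9], [0.9, 0.8]], "lung nodule")]
model = learn(teacher)
print(discriminate(model, [[0.85, 0.9], [0.8, 0.9]]))  # -> lung nodule
```

A real system would train a deep discriminative network on many registered teacher data pairs; the point of the sketch is only the flow "teacher data in, discrimination model out, then discriminate new report images with that model".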
  18.  A region-of-interest discrimination program causing a computer to function as:
     a storage unit that stores an interpretation report including an image and character information;
     an acquisition unit that acquires, by analyzing the interpretation report, an image of a region of interest included in the image and a name of the region of interest included in the character information;
     a registration unit that registers teacher data consisting of the image of the region of interest and the name of the region of interest acquired by the acquisition unit;
     a learning unit that performs learning, using a plurality of items of the teacher data registered in the registration unit, to generate a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of an image of the interpretation report; and
     a discrimination unit that, in a case where an image of the interpretation report is input, discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
  19.  A learned model causing a computer to function as:
     a storage unit that stores an interpretation report including an image and character information;
     an acquisition unit that acquires, by analyzing the interpretation report, an image of a region of interest included in the image and a name of the region of interest included in the character information;
     a registration unit that registers teacher data consisting of the image of the region of interest and the name of the region of interest acquired by the acquisition unit;
     a learning unit that performs learning, using a plurality of items of the teacher data registered in the registration unit, to generate a discrimination model that outputs the image of the region of interest and the name of the region of interest in response to input of an image of the interpretation report; and
     a discrimination unit that, in a case where an image of the interpretation report is input, discriminates the image of the region of interest and the name of the region of interest using the discrimination model.
PCT/JP2019/004771 2018-03-16 2019-02-12 Learning assisting device, learning assisting method, learning assisting program, region-of-interest discriminating device, region-of-interest discriminating method, region-of-interest discriminating program, and learned model WO2019176407A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020505688A JP7080304B2 (en) 2018-03-16 2019-02-12 Learning support device, learning support method, learning support program, region of interest discrimination device, region of interest discrimination method, region of interest discrimination program and trained model
US17/000,363 US11468659B2 (en) 2018-03-16 2020-08-23 Learning support device, learning support method, learning support program, region-of-interest discrimination device, region-of-interest discrimination method, region-of-interest discrimination program, and learned model

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018049799 2018-03-16
JP2018-049799 2018-03-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/000,363 Continuation US11468659B2 (en) 2018-03-16 2020-08-23 Learning support device, learning support method, learning support program, region-of-interest discrimination device, region-of-interest discrimination method, region-of-interest discrimination program, and learned model

Publications (1)

Publication Number Publication Date
WO2019176407A1 true WO2019176407A1 (en) 2019-09-19

Family

ID=67908140

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/004771 WO2019176407A1 (en) 2018-03-16 2019-02-12 Learning assisting device, learning assisting method, learning assisting program, region-of-interest discriminating device, region-of-interest discriminating method, region-of-interest discriminating program, and learned model

Country Status (3)

Country Link
US (1) US11468659B2 (en)
JP (1) JP7080304B2 (en)
WO (1) WO2019176407A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11348250B2 (en) * 2019-11-11 2022-05-31 Ceevra, Inc. Image analysis system for identifying lung features
US11723619B2 (en) * 2021-03-19 2023-08-15 Canon Medical Systems Corporation System and method for indication and selection of region of interest for x-ray dose adjustment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012198928A (en) * 2012-06-18 2012-10-18 Konica Minolta Medical & Graphic Inc Database system, program, and report retrieval method
WO2013001584A1 (en) * 2011-06-30 2013-01-03 パナソニック株式会社 Similar case history search device and similar case history search method
WO2017017722A1 (en) * 2015-07-24 2017-02-02 オリンパス株式会社 Processing device, processing method and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4979842B1 (en) * 2011-06-30 2012-07-18 パナソニック株式会社 Similar case retrieval apparatus and similar case retrieval method
US20160306936A1 (en) 2015-04-15 2016-10-20 Canon Kabushiki Kaisha Diagnosis support system, information processing method, and program
CA3030577A1 (en) * 2016-07-12 2018-01-18 Mindshare Medical, Inc. Medical analytics system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI784537B (en) * 2020-06-23 2022-11-21 日商歐姆龍股份有限公司 Inspection device, inspection method and inspection program
JP7435303B2 (en) 2020-06-23 2024-02-21 オムロン株式会社 Inspection device, unit selection device, inspection method, and inspection program
JP7280993B1 (en) 2022-03-25 2023-05-24 ソフトバンク株式会社 Generation device, generation method and generation program
JP2023143197A (en) * 2022-03-25 2023-10-06 ソフトバンク株式会社 Generator, generation method and generation program

Also Published As

Publication number Publication date
US11468659B2 (en) 2022-10-11
JP7080304B2 (en) 2022-06-03
JPWO2019176407A1 (en) 2021-02-25
US20200387729A1 (en) 2020-12-10

Similar Documents

Publication Publication Date Title
KR101943011B1 (en) Method for facilitating medical image reading and apparatus using the same
EP2422318B1 (en) Quantification of medical image data
WO2019176407A1 (en) Learning assisting device, learning assisting method, learning assisting program, region-of-interest discriminating device, region-of-interest discriminating method, region-of-interest discriminating program, and learned model
JP5320335B2 (en) Diagnosis support system, diagnosis support apparatus, diagnosis support method, and diagnosis support program
JP2019153250A (en) Device, method, and program for supporting preparation of medical document
JP6906462B2 (en) Medical image display devices, methods and programs
US10916010B2 (en) Learning data creation support apparatus, learning data creation support method, and learning data creation support program
JP2009070201A (en) Diagnostic reading report generation system, diagnostic reading report generation device, and diagnostic reading report generation method
JP6885896B2 (en) Automatic layout device and automatic layout method and automatic layout program
JP6719421B2 (en) Learning data generation support device, learning data generation support method, and learning data generation support program
US20160253468A1 (en) Measurement value management apparatus, method for operating measurement value management apparatus, and measurement value management system
US20120300997A1 (en) Image processing device, method and program
JP7000206B2 (en) Medical image processing equipment, medical image processing methods, and medical image processing programs
JP2008200373A (en) Similar case retrieval apparatus and its method and program and similar case database registration device and its method and program
WO2020209382A1 (en) Medical document generation device, method, and program
JP6738305B2 (en) Learning data generation support device, learning data generation support device operating method, and learning data generation support program
JP6845071B2 (en) Automatic layout device and automatic layout method and automatic layout program
WO2020129385A1 (en) Medical document creation assistance device, method, and program
WO2021157705A1 (en) Document creation assistance device, method, and program
JP7007469B2 (en) Medical document creation support devices, methods and programs, trained models, and learning devices, methods and programs
WO2021193548A1 (en) Document creation assistance device, method, and program
JP7376715B2 (en) Progress prediction device, method of operating the progress prediction device, and progress prediction program
WO2022153702A1 (en) Medical image display device, method, and program
JP2018175695A (en) Registration apparatus, registration method, and registration program
US20230225681A1 (en) Image display apparatus, method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19766882

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020505688

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19766882

Country of ref document: EP

Kind code of ref document: A1