WO2022196494A1 - Information processing method, learning model generation method, program, and information processing device - Google Patents

Information processing method, learning model generation method, program, and information processing device

Info

Publication number
WO2022196494A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
cervix
medical image
image
lesion
Prior art date
Application number
PCT/JP2022/010319
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
大貴 藤間
まゆ 秦
達 末原
Original Assignee
テルモ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社 filed Critical テルモ株式会社
Priority to JP2023507028A priority Critical patent/JPWO2022196494A1/ja
Publication of WO2022196494A1 publication Critical patent/WO2022196494A1/ja

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/303Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the vagina, i.e. vaginoscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/12Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Definitions

  • the present invention relates to an information processing method, a learning model generation method, a program, and an information processing device.
  • Cervical cancer develops in the uterine cervix at the entrance of the uterus. Because it often develops near the entrance of the uterus, it can easily be detected early through regular examinations, and if detected early it is relatively easy to treat and the prognosis is good.
  • In a screening test, a long, thin tube-shaped instrument is inserted through the cervical os to collect mucous membrane from the surface of the uterine cervix, and cytology is performed by observing the collected cells under a microscope. A detailed examination is performed if the cytology results indicate that it is necessary. In the detailed examination, a small amount of tissue is collected from the subject's uterine cervix by biopsy and observed under a microscope or the like. Colposcopy is also performed, in which the tissue surface is observed using a colposcope, a microscope that magnifies the surface of the uterine cervix and vagina.
  • Conization is performed when it is judged necessary based on the results of the detailed examination. Conization is a surgical procedure that serves both diagnosis and treatment: a cone-shaped piece of the uterine cervix is excised and subjected to pathological diagnosis. A definitive diagnosis is made by the pathological diagnosis, and it is determined whether treatment is complete or whether additional treatment such as further resection should be performed.
  • Patent Document 1 discloses a method that, by detecting methylation of human papillomavirus genomic DNA contained in a sample collected from the uterine cervix of a subject, can more accurately and simply determine whether the sample contains abnormal cells derived from lesions of cervical high-grade dysplasia or higher.
  • The method of Patent Document 1, however, has the problem that a tissue fragment must be collected from the cervix of the subject in order to obtain information on cervical lesions, which imposes a heavy burden on the subject.
  • The purpose of the present disclosure is to provide an information processing method and the like that can estimate information about cervical lesions while reducing the burden on the subject.
  • An information processing method according to one aspect causes a computer to execute processing of acquiring a medical image of the uterine cervix, acquiring information on a lesion extending from the tissue surface of the cervix in the depth direction based on the acquired medical image, and displaying the medical image and the information about the lesion.
  • A learning model generation method according to one aspect acquires training data including a medical image of the cervix and information on lesions extending from the tissue surface of the cervix in the depth direction, and generates, based on the training data, a learning model trained to output information on lesions extending from the tissue surface of the cervix in the depth direction when a medical image of the cervix is input.
  • information about cervical lesions can be estimated while reducing the burden on the subject.
  • FIG. 1 is a schematic diagram of an information processing system according to a first embodiment.
  • FIG. 2 is a block diagram showing a configuration example of the information processing system.
  • FIG. 3 is an explanatory diagram showing an outline of a learning model.
  • FIG. 4 is a diagram showing an example of the content of information stored in a training data DB.
  • FIG. 5 is a flowchart showing an example of a training data generation processing procedure.
  • FIG. 6 is a schematic diagram showing an example of a reception screen.
  • FIG. 7 is a flowchart showing an example of a learning model generation processing procedure.
  • FIG. 8 is a flowchart showing an example of an estimation information acquisition processing procedure.
  • FIG. 9 is a schematic diagram showing an example of a screen displayed on a display device.
  • FIG. 10 is a schematic diagram showing an example of a screen displayed on a display device.
  • FIG. 11 is a flowchart showing an example of re-learning processing of a learning model in a second embodiment.
  • FIG. 1 is a schematic diagram of an information processing system according to the first embodiment.
  • the information processing system includes an information processing device 1 and an image diagnostic device 2 .
  • the information processing apparatus 1 and the image diagnostic apparatus 2 are connected for communication to a network N such as a LAN (Local Area Network) or the Internet.
  • the diagnostic imaging apparatus 2 is a device unit for imaging the subject's hollow organs.
  • the hollow organ to be examined is the uterus, particularly the uterine cervix (endocervical canal).
  • the image diagnostic apparatus 2 is a device unit for generating an ultrasonic tomographic image (medical image) of the cervix of the subject using the detection device 21 and performing ultrasonic examination and diagnosis of the cervix.
  • the diagnostic imaging apparatus 2 includes a detection device 21, an image processing device 22, a display device 23, and the like.
  • the detection device 21 is a device for obtaining an ultrasonic tomographic image of the uterine cervix of the subject.
  • the detection device 21 includes an insertion tube 211 and a drive device 212 .
  • the insertion tube 211 is elongated and is a portion that is inserted into the uterine cervix from the inside of the subject's vagina.
  • the insertion tube 211 has a probe portion 213 and a connector portion 214 arranged at the end of the probe portion 213 .
  • the probe section 213 is connected to the driving device 212 via the connector section 214 .
  • the side of the insertion tube 211 farther from the connector portion 214 is referred to as the distal end side.
  • a shaft 215 is inserted inside the probe portion 213 .
  • a sensor 216 is connected to the tip side of the shaft 215 .
  • Sensor 216 is, for example, an ultrasonic transducer.
  • the sensor 216 transmits ultrasonic waves based on pulse signals within the uterine cervix and receives reflected waves reflected by the living tissue of the uterine cervix.
  • the shaft 215 and the sensor 216 are configured to move forward and backward in the longitudinal direction of the uterine cervix while rotating in the circumferential direction of the uterine cervix (cervical canal) inside the probe section 213 .
  • With the insertion tube 211, it is possible to capture a tomographic image that includes not only the lumen wall, such as the cervical wall, but also reflectors present inside the lumen wall tissue of the hollow organ, such as cancer cells.
  • As the detection device 21, for example, a conventional diagnostic imaging catheter and MDU (Motor Driving Unit) may be used, or a dedicated detection device suitable for diagnostic imaging of the inside of the uterus may be used.
  • the detection device 21 is not limited to one for generating an ultrasonic tomographic image using an ultrasonic transducer.
  • the detection device 21 may be a detection device for optical tomographic image generation, such as for OCT (Optical Coherence Tomography) or OFDI (Optical Frequency Domain Imaging) that generates an optical tomographic image using near-infrared light.
  • the sensor 216 is a transmitter/receiver that emits near-infrared light and receives reflected light.
  • The detection device 21 may also have a sensor 216 that serves as both an ultrasonic transducer and a transmitter/receiver for OCT or OFDI, and may be used to generate medical images that include both ultrasonic tomographic images and optical tomographic images.
  • the probe part 213 is detachably attached to the driving device 212 .
  • the driving device 212 controls the operation of the probe section 213 inserted into the uterine cervix by driving the built-in motor according to the user's operation.
  • the driving device 212 rotates the shaft 215 and the sensor 216 in the circumferential direction while longitudinally moving the shaft 215 and the sensor 216 from the distal side to the proximal side.
  • the sensor 216 continuously scans the inside of the uterine cervix at predetermined time intervals, and outputs reflected wave data of detected ultrasonic waves to the image processing device 22 .
  • the image processing device 22 is a processing device that generates an ultrasonic tomographic image (medical image) of the cervix based on reflected wave data output from the ultrasonic probe of the detection device 21 .
  • Image processor 22 produces one image for each revolution of sensor 216 .
  • The generated image is a transverse tomographic image centered on the probe portion 213 and substantially perpendicular to the probe portion 213.
  • the image processing device 22 causes the display device 23 to display the generated medical image, the estimated information acquired from the information processing device 1, and the like.
  • the display device 23 is a liquid crystal display panel, an organic EL (Electro Luminescence) display panel, or the like.
  • the display device 23 displays medical images generated by the image processing device 22, estimated information received from the information processing device 1, and the like.
  • the information processing device 1 is an information processing device capable of various types of information processing and transmission/reception of information, such as a server computer and a personal computer.
  • the information processing apparatus 1 may be a local server installed in the same facility (hospital or the like) as the image diagnostic apparatus 2, or may be a cloud server connected to the image diagnostic apparatus 2 via the Internet or the like.
  • the information processing apparatus 1 functions as an estimating apparatus that estimates information (hereinafter also referred to as estimation information) regarding lesions of the cervix using the medical image of the cervix of the subject generated by the image diagnostic apparatus 2 .
  • the information processing device 1 provides the estimated information to the diagnostic imaging device 2 .
  • the estimated information is information about lesions from the tissue surface of the cervix to the depth direction.
  • The lesion to be estimated includes precancerous lesions, for example cervical cancer, cervical intraepithelial neoplasia (CIN), and adenocarcinoma in situ (AIS) of the cervix.
  • lesions include lesion candidates suspected of being lesions.
  • the information about the lesion includes, for example, information on the position (range) and type (symptom) of the lesion.
  • The depth direction means the radial direction of the uterine cervix (cervical canal), that is, the direction from the tissue surface (epidermis) toward the inside.
  • Alternatively, the depth direction may mean the longitudinal direction of the cervix, that is, the direction from the vagina side toward the uterus side.
  • In cervical cancer examinations, a screening test by cytology is performed first, and a detailed examination is performed if the cytology results indicate that it is necessary. Then, if deemed necessary, conization, which serves both diagnosis and treatment, is performed to reach a definitive diagnosis. The decision to perform conization is based on the results of the detailed examination. The detailed examination includes a histological examination, in which cervical tissue collected from the subject is observed under a microscope or the like, and colposcopy, in which the tissue surface is observed using a colposcope (a microscope that magnifies the surface of the cervix and vagina).
  • the learning model described later is used to provide doctors with estimation information that accurately estimates the state of lesions in the depth direction.
  • a doctor or the like can make a non-excisional diagnosis using the estimated information, so it is possible to reduce the burden on the patient.
  • FIG. 2 is a block diagram showing a configuration example of an information processing system.
  • the information processing device 1 includes a control unit 11 , a main storage unit 12 , an auxiliary storage unit 13 , a communication unit 14 , a display unit 15 and an operation unit 16 .
  • the information processing apparatus 1 may be a multicomputer composed of a plurality of computers, or may be a virtual machine virtually constructed by software.
  • the control unit 11 is an arithmetic processing device such as one or more CPUs (Central Processing Unit) or GPUs (Graphics Processing Unit).
  • the control unit 11 reads and executes the program 13P stored in the auxiliary storage unit 13, thereby causing the server computer to function as an information processing device that performs various processes related to generation of support information.
  • the main storage unit 12 is a temporary storage area such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), and flash memory.
  • the main storage unit 12 temporarily stores a program 13P read from the auxiliary storage unit 13 when the arithmetic processing of the control unit 11 is executed, or various data generated by the arithmetic processing of the control unit 11 .
  • the auxiliary storage unit 13 is a nonvolatile storage area such as a hard disk, EEPROM (Electrically Erasable Programmable ROM), flash memory, or the like.
  • the auxiliary storage unit 13 may be an external storage device connected to the information processing device 1 .
  • the auxiliary storage unit 13 stores programs and data including a program 13P necessary for the control unit 11 to execute processing.
  • the auxiliary storage unit 13 stores a learning model 131 and a training data DB (Data Base) 132.
  • the learning model 131 is a machine learning model that has learned training data.
  • the learning model 131 is assumed to be used as a program module that constitutes artificial intelligence software. Details of the learning model 131 and the training data DB 132 will be described later.
  • the program 13P may be computer-readable and recorded on the recording medium 13A.
  • the auxiliary storage unit 13 stores a program 13P read from the recording medium 13A by a reading device (not shown).
  • the recording medium 13A is a semiconductor memory such as a flash memory, an optical disk, a magnetic disk, a magnetic optical disk, or the like.
  • the program 13P according to the present embodiment may be downloaded from an external server (not shown) connected to a communication network and stored in the auxiliary storage unit 13.
  • the communication unit 14 is a communication module for performing processing related to communication.
  • the control unit 11 transmits and receives information to and from the diagnostic imaging apparatus 2 via the communication unit 14 .
  • the display unit 15 is an output device that outputs information such as medical images and estimation information.
  • the output device is, for example, a liquid crystal display or an organic EL (Electro Luminescence) display.
  • the operation unit 16 is an input device that receives user operations.
  • The input device is, for example, a keyboard or a pointing device such as a touch panel.
  • the image processing device 22 includes a control section 221 , a main storage section 222 , an auxiliary storage section 223 , a communication section 224 , an input/output section 225 and a detection device control section 226 .
  • the control unit 221 is an arithmetic processing device such as one or more CPUs, GPUs, or the like.
  • the main storage unit 222 is a temporary storage area such as SRAM, DRAM, and flash memory.
  • the control unit 221 performs various information processing by reading and executing programs stored in the auxiliary storage unit 223 .
  • The main storage unit 222 temporarily stores programs read from the auxiliary storage unit 223 when the arithmetic processing of the control unit 221 is executed, and various data generated by the arithmetic processing of the control unit 221.
  • the auxiliary storage unit 223 is a nonvolatile storage area such as a hard disk, EEPROM, flash memory, or the like.
  • The auxiliary storage unit 223 stores programs and data necessary for the control unit 221 to execute processing.
  • the auxiliary storage unit 223 may store the learning model 131 .
  • the communication unit 224 is a communication module for performing processing related to communication.
  • the control unit 221 transmits and receives information to and from the information processing device 1 via the communication unit 224 and acquires estimated information.
  • the input/output unit 225 is an input/output I/F (interface) for connecting an external device.
  • the input/output unit 225 is connected to the display device 23, the input device 24, and the like.
  • the display device 23 is, for example, a liquid crystal display or an organic EL display.
  • The input device 24 is, for example, a keyboard or a pointing device such as a touch panel.
  • the control unit 221 outputs medical images and estimated information to the display device 23 via the input/output unit 225 .
  • the control unit 221 also receives information input to the input device 24 via the input/output unit 225 .
  • the detection device control unit 226 controls the driving device 212, controls the sensor 216, and generates a medical image based on the signal received from the sensor 216. Since the function and configuration of the detection device control unit 226 are the same as those of conventionally used ultrasonic diagnostic devices, description thereof will be omitted. Note that the control unit 221 may implement the function of the detection device control unit 226 .
  • FIG. 3 is an explanatory diagram showing an outline of the learning model 131.
  • the learning model 131 is a machine learning model that receives a medical image of the uterine cervix of the subject as input and outputs information about lesions of the uterine cervix from the tissue surface to the depth direction.
  • the information processing device 1 performs machine learning for learning predetermined training data to generate the learning model 131 in advance.
  • the information processing apparatus 1 then inputs the medical image of the subject acquired from the image diagnostic apparatus 2 to the learning model 131 and outputs information about the lesion.
  • The learning model 131 uses, for example, image recognition technology based on a semantic segmentation model, a type of CNN (Convolutional Neural Network), to recognize on a pixel-by-pixel basis whether or not each pixel in the input image is a pixel corresponding to an object (lesion) region.
  • the learning model 131 has an input layer for inputting medical images, an intermediate layer for extracting and restoring image feature values, and an output layer for outputting information indicating the position and type of lesions included in the medical images.
  • the learning model 131 is U-Net, for example.
  • the input layer of the learning model 131 has a plurality of nodes that receive input of pixel values of pixels included in the medical image, and passes the input pixel values to the intermediate layer.
  • the intermediate layer has a convolution layer (CONV layer) and a deconvolution layer (DECONV layer).
  • a convolutional layer is a layer that dimensionally compresses image data.
  • the lesion features are extracted by dimensionality reduction.
  • the deconvolution layer performs the deconvolution process to restore the original dimensions. Restoration processing in the deconvolution layer generates a binarized label image indicating whether or not each pixel in the medical image is a lesion.
  • the output layer has one or more nodes that output label images.
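  • The disclosure itself contains no source code; as an illustration only, the following is a minimal sketch, in Python with PyTorch, of a U-Net-style semantic segmentation network of the kind described for the learning model 131 (encoder CONV layers that compress the image, decoder DECONV layers that restore it, and a per-pixel class output). The channel sizes and the assumed class list are illustrative, not values taken from the patent.

```python
# Minimal sketch of a U-Net-style segmentation network along the lines of learning
# model 131. Channel sizes and the class list (other tissue, CIN1-3, AIS, cancer)
# are illustrative assumptions, not values from the disclosure.
import torch
import torch.nn as nn

N_CLASSES = 6  # assumed: other tissue, CIN1, CIN2, CIN3, AIS, cancer

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, n_classes=N_CLASSES):
        super().__init__()
        self.enc1 = conv_block(1, 16)           # grayscale tomographic frame in
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)             # dimensional compression (CONV side)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)  # DECONV side
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, n_classes, kernel_size=1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)   # (batch, n_classes, H, W); argmax gives the label image

# Example: one 256x256 frame in, per-pixel class map (label image) out.
frame = torch.randn(1, 1, 256, 256)
label_image = MiniUNet()(frame).argmax(dim=1)   # shape (1, 256, 256)
```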
  • the medical image input to the learning model 131 includes a plurality of frames in chronological order generated by one pullback operation.
  • The plurality of medical images in chronological order are, for example, tomographic images observed over a predetermined range from the uterus side to the vagina side along the depth direction of the uterine cervix.
  • a medical image is a tomogram including a predetermined detection range in the depth direction of the cervix.
  • a label image output from the learning model 131 is an image in which an object class is identified for each pixel of a medical image.
  • Objects detected using the learning model 131 include, for example, lesions such as cancer, CIN and AIS.
  • CIN is further classified into three grades of dysplasia, CIN1 to CIN3, depending on the degree of appearance of atypical cells emerging within the epithelium.
  • a label image is an image having pixel values according to these object classes, such as cancer, CIN1 to CIN3, AIS, and others, for each pixel of a medical image.
  • Information indicating the position (area) and type (symptom) of a lesion included in the medical image is indicated for each pixel by the label image. This makes it possible to visually recognize the state of the lesion in the depth direction.
  • the learning model 131 may detect other cell tissues as objects in addition to the lesions described above.
  • Other tissue is tissue other than lesions in the cervix, including normal cells.
  • the output data output by the learning model 131 is not limited to label images.
  • the learning model 131 may output, for example, a bounding box or the like indicating the area and type of a lesion included in the medical image.
  • The learning model 131 may output information indicating the position of a lesion in terms of, for example, its direction within the image.
  • the learning model 131 may output information indicating the degree of lesion (normal, observation required, excision, etc.).
  • the configuration of the learning model 131 is not limited as long as it can identify the positions and types of lesions included in medical images.
  • the learning model 131 may also include a first model for detecting lesions and a second model for detecting other cell tissues.
  • FIG. 4 is a diagram showing an example of the content of information stored in the training data DB 132.
  • The training data DB 132 stores an ID that identifies the training data, a medical image, and lesion data (information about the lesion) in association with each other. Medical images acquired using the image diagnostic apparatus 2 are recorded in the medical image column. In the lesion data column, an image having a pixel value corresponding to an object class for each pixel constituting the medical image is recorded.
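  • As an illustration of the kind of record held in the training data DB 132 (an ID, the medical image frames, and the per-pixel lesion data), a minimal sketch follows; the field names, array layout and storage format are assumptions.

```python
# Sketch of one training-data record as described for training data DB 132: an ID,
# the medical image frames from one pullback, and a per-pixel lesion label image for
# each frame. Field names and array layout are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class TrainingRecord:
    record_id: str            # ID column identifying the training data
    frames: np.ndarray        # medical image column: (n_frames, H, W) grayscale
    lesion_labels: np.ndarray # lesion data column: (n_frames, H, W) class index per pixel

def make_record(record_id: str, frames: np.ndarray, lesion_labels: np.ndarray) -> TrainingRecord:
    # Frames and label images must correspond one-to-one and pixel-to-pixel.
    assert frames.shape == lesion_labels.shape, "each frame needs a same-sized label image"
    return TrainingRecord(record_id, frames, lesion_labels)

# Example: a 100-frame pullback labelled entirely as class 0 ("other tissue").
record = make_record("case-001",
                     np.zeros((100, 256, 256), np.uint8),
                     np.zeros((100, 256, 256), np.uint8))
```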
  • In the learning phase, which precedes the operation phase in which lesions are estimated, the information processing device 1 generates the learning model 131 using the training data and stores the generated learning model 131. Then, in the operation phase, the stored learning model 131 is used to generate estimation information.
  • FIG. 5 is a flowchart showing an example of a training data generation processing procedure. The following processing is executed by the control unit 11 according to the program 13P stored in the auxiliary storage unit 13 of the information processing device 1 in the learning phase.
  • the control unit 11 of the information processing device 1 acquires medical images from the image diagnostic device 2 (step S11).
  • a medical image includes a plurality of frames in chronological order generated by one pullback operation.
  • the control unit 11 acquires a section image of a conization section performed in the past in association with the medical image (step S12).
  • A section image is image data of a conization specimen (section) obtained by conization from the same subject as the one observed in the medical image.
  • the observation range of medical images includes the operation range of conization.
  • the control unit 11 may acquire medical record information such as a definitive diagnosis result in association with it.
  • the control unit 11 displays on the display unit 15 the reception screen 151 for receiving information about the lesion for each frame of the medical image (step S13).
  • the control unit 11 uses the reception screen 151 to receive information about the lesion (step S14).
  • FIG. 6 is a schematic diagram showing an example of the reception screen 151.
  • the reception screen 151 includes a medical image display section 152 , a section image display section 153 and a lesion input section 154 .
  • the medical image display unit 152 displays each frame of a medical image.
  • the slice image display unit 153 displays a slice image associated with the medical image.
  • the lesion input unit 154 displays an input field for accepting input of information regarding lesions in medical images.
  • the control unit 11 receives information about lesions (objects) input in the input fields when a doctor or the like operates the operation unit 16 .
  • the lesion information includes the coordinate range corresponding to the lesion area and the lesion type.
  • a doctor or the like uses a mouse or the like to input a lesion area using a medical image displayed on the medical image display unit 152, and also inputs the type of lesion for the lesion area. Information about these lesions is input based on the judgment of a specialist doctor or the like who has advanced knowledge of cervical cancer.
  • Medical images show acoustic and light shadows corresponding to normal and diseased cells.
  • a doctor or the like inputs the position and type of a lesion included in the medical image by comparing the section image with each frame of the medical image.
  • The control unit 11 may superimpose and display information indicating the correspondence relationship between the direction of the slice image and the direction of the medical image according to the doctor's selection. For example, based on the direction input by the doctor or the like, a line object or the like indicating the corresponding direction in the slice image (the indicated direction in the example of FIG. 6) is superimposed and displayed on the medical image.
  • the control unit 11 may apply and display the received direction for all other frames in the same medical image.
  • the control unit 11 may rotate the medical image and display it on the medical image display unit 152 based on the direction input by the doctor or the like. Further, the control unit 11 may superimpose information indicating the frame position of the medical image on the slice image. The frame position for the section image is calculated based on the correspondence relationship between the observation range of the medical image and the treatment range of conization.
  • the control unit 11 generates training data, which is a data set in which information about lesions is labeled as correct values for medical images (step S15). More specifically, the control unit 11 generates training data in which labels (metadata) representing the coordinate range corresponding to the object region and the type of the object are added to the medical image.
  • the control unit 11 stores the generated training data in the training data DB 132 (step S16), and ends the series of processes.
  • the control unit 11 collects a large amount of medical images and information about lesions, and stores a plurality of information groups generated based on the collected data as training data in the training data DB 132 .
  • control unit 11 may receive information about lesions without acquiring slice images. For example, if a doctor or the like can specify information about a lesion based only on medical images, the information about the lesion may be input without using a section image.
  • FIG. 7 is a flowchart showing an example of the processing procedure for generating the learning model 131.
  • the following processing is executed by the control unit 11 according to the program 13P stored in the auxiliary storage unit 13 of the information processing device 1 after the processing of FIG. 5 is completed in the learning phase, for example.
  • the control unit 11 of the information processing device 1 acquires a set of training data extracted from the information group based on the information stored in the training data DB 132 (step S21). Using the acquired training data, the control unit 11 generates a learning model 131 that outputs information about lesions from the tissue surface of the cervix to the depth direction when a medical image of the cervix is input. (Step S22).
  • control unit 11 inputs each frame of the medical image included in the training data to the learning model 131 as input data, and acquires the coordinate range and type of the object output from the learning model 131 .
  • the control unit 11 calculates the error between the output coordinate range and type of the object and the correct value of the coordinate range and type of the object using a predetermined loss function.
  • the control unit 11 adjusts parameters such as weights between nodes using, for example, error backpropagation so as to optimize (minimize or maximize) the loss function. It is assumed that the definition information describing the learning model 131 is given an initial set value before learning is started. Optimized parameters are obtained when learning is completed by satisfying predetermined criteria for error and number of learning times.
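  • The loss function and optimizer are not specified in the disclosure beyond a "predetermined loss function" and error backpropagation; the following sketch assumes per-pixel cross-entropy and the Adam optimizer to illustrate the parameter update of step S22.

```python
# Sketch of the learning step of S22: each training frame is fed to the model, the
# output is compared with the correct label image by a loss function, and parameters
# are adjusted by error backpropagation. Cross-entropy and Adam are assumptions.
import torch
import torch.nn as nn

def train_epoch(model, loader, optimizer, device="cpu"):
    criterion = nn.CrossEntropyLoss()      # per-pixel classification loss
    model.train()
    total = 0.0
    for frames, labels in loader:          # frames: (B,1,H,W) float, labels: (B,H,W) long
        frames, labels = frames.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(frames)             # (B, n_classes, H, W)
        loss = criterion(logits, labels)
        loss.backward()                    # error backpropagation
        optimizer.step()                   # update weights between nodes
        total += loss.item()
    return total / max(len(loader), 1)

# Usage with the MiniUNet sketch above and a hypothetical DataLoader `train_loader`:
# model = MiniUNet()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# for epoch in range(20):
#     print(epoch, train_epoch(model, train_loader, optimizer))
```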
  • control unit 11 causes the auxiliary storage unit 13 to store the definition information regarding the learned learning model 131 as the learned learning model 131 (step S23), and ends the processing according to this flowchart.
  • Although the control unit 11 of the information processing device 1 has been described as executing the series of processes above, the subject of each process is not limited to this. Part or all of the above processing may be executed by the control unit 221 of the image processing device 22 of the diagnostic imaging apparatus 2.
  • the information processing apparatus 1 and the image diagnostic apparatus 2 may cooperate to perform a series of processes by performing inter-process communication, for example.
  • the learning model 131 may be generated by the information processing apparatus 1 and learned by the image diagnostic apparatus 2 .
  • the information processing system uses the learning model 131 generated as described above to provide estimated information for the medical image of the subject.
  • a processing procedure executed by the information processing system in the operation phase will be described below.
  • FIG. 8 is a flowchart showing an example of an estimation information acquisition processing procedure.
  • the following processing is executed by the control unit 11 according to the program 13P stored in the auxiliary storage unit 13 of the information processing device 1.
  • The control unit 11 may perform the following processing in real time each time a medical image is transmitted from the image diagnostic apparatus 2, or may perform it afterwards at an arbitrary timing based on recorded medical images.
  • the control unit 11 of the information processing device 1 acquires the medical image by receiving the medical image transmitted from the image diagnostic device 2 (step S31).
  • a medical image is a tomographic image including a plurality of frames in chronological order and including a plurality of continuous frames from the uterus side to the vagina side along the depth direction of the cervix.
  • the medical image may be continuous from the vaginal side to the uterine side.
  • the control unit 11 inputs the acquired medical image to the learning model 131 as input data (step S32).
  • the control unit 11 acquires estimation information output from the learning model 131 (step S33).
  • The estimated information is output, for example, as a label image having pixel values according to the position and type of lesions and other cell tissue.
  • the control unit 11 determines whether or not there is a deviation from the adjacent label image (step S34).
  • The method of judging the divergence is not limited; as an example, the control unit 11 judges the presence or absence of divergence by comparing the pixel values of the target label image with the pixel values of the preceding and succeeding label images.
  • the label image is an image composed of a plurality of continuous frames over the depth direction of the cervical canal. Since a lesion usually has a certain length in the depth direction of the cervical canal, it is assumed that the pixel values indicating the lesion are continuously shown in corresponding pixels of consecutive frames. Therefore, if only the target frame indicates a pixel value different from that of the preceding and succeeding frames, there is a high possibility that the estimation result indicated by the pixel value is erroneous. The control unit 11 determines such deviation of pixel values.
  • the control unit 11 determines whether or not there is a divergence by determining whether or not at least one of the pixel values of the target label image and the pixel values of the preceding and succeeding label images match. When the pixel value of the target label image matches at least one of the pixel values of the previous label image and the pixel value of the subsequent label image, the control unit 11 determines that there is no deviation. When the pixel value of the target label image does not match the pixel value of the previous label image and the pixel value of the subsequent label image, the control unit 11 determines that there is a divergence.
  • the control unit 11 executes the above-described determination processing for each pixel of all label images.
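  • A minimal sketch of the divergence determination of step S34 follows: a pixel of the target label image is flagged as divergent when its value matches neither the preceding nor the succeeding label image. The array layout (frames, height, width) is an assumption.

```python
# Sketch of the divergence check of step S34: a pixel is treated as divergent
# (possible noise) when it matches neither the preceding nor the succeeding frame.
import numpy as np

def divergence_mask(labels: np.ndarray) -> np.ndarray:
    """labels: (n_frames, H, W) class indices. Returns a boolean mask of divergent pixels."""
    mask = np.zeros(labels.shape, dtype=bool)
    for i in range(1, labels.shape[0] - 1):       # first/last frames have only one neighbour
        prev_match = labels[i] == labels[i - 1]
        next_match = labels[i] == labels[i + 1]
        mask[i] = ~(prev_match | next_match)      # matches neither neighbour -> divergent
    return mask
```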
  • When it is determined that there is a deviation (step S34: NO), the control unit 11 recognizes the pixels with the deviation as noise (step S35). The control unit 11 executes a predetermined noise removal process or the like, and returns the process to step S32.
  • The noise removal method is not limited; for example, image processing is performed that interpolates, based on the pixel values of the preceding and succeeding frames, the pixels of the original medical image corresponding to the pixels with the deviation.
  • the control unit 11 inputs the noise-removed medical image to the learning model 131 and obtains the estimation information output from the learning model 131 to re-estimate the information about the lesion. Note that the control unit 11 may return the process to step S32 without executing the noise removal process or the like.
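  • The following sketch illustrates the noise removal of step S35 under the assumption that the interpolation is a simple average of the preceding and succeeding frames; the disclosure only states that frame interpolation is based on the pixel values of those frames.

```python
# Sketch of the noise removal of step S35: pixels of the original medical image that
# correspond to divergent label pixels are interpolated from the neighbouring frames
# (here: a simple average, an assumption), after which the frames can be re-estimated.
import numpy as np

def interpolate_noise(frames: np.ndarray, noise_mask: np.ndarray) -> np.ndarray:
    """frames: (n_frames, H, W) medical images; noise_mask: same-shaped boolean mask."""
    cleaned = frames.astype(np.float32).copy()
    for i in range(1, frames.shape[0] - 1):
        m = noise_mask[i]
        cleaned[i][m] = (cleaned[i - 1][m] + cleaned[i + 1][m]) / 2.0
    return cleaned.astype(frames.dtype)
```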
  • If it is determined that there is no divergence (step S34: YES), the control unit 11 estimates information suggesting the necessity of treatment such as conization based on the acquired lesion estimation information, and derives the estimation result (step S36).
  • the control unit 11 derives a result of estimating the necessity of treatment based on, for example, a table (not shown) that associates the types and positions (sizes) of lesions with the necessity of treatment.
  • the control unit 11 may estimate the necessity of treatment by a machine learning method using a learning model that outputs information suggesting the necessity of treatment when lesion estimation information is input. Note that the processing of step S36 is not essential.
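  • As an illustration of step S36, a lookup based on the kind of table described (associating lesion type and extent with the necessity of treatment) might look as follows; the thresholds and suggestion texts are purely illustrative assumptions, not clinical rules from the disclosure.

```python
# Sketch of step S36: deriving a treatment-necessity suggestion from a table that
# associates lesion type and extent with the need for treatment such as conization.
# All thresholds and wording below are illustrative assumptions only.
LESION_RULES = {
    # class name: (minimum fraction of labelled pixels to trigger, suggestion)
    "cancer": (0.00, "treatment such as conization suggested"),
    "AIS":    (0.00, "treatment such as conization suggested"),
    "CIN3":   (0.01, "treatment such as conization suggested"),
    "CIN2":   (0.05, "follow-up / detailed examination suggested"),
    "CIN1":   (0.10, "observation suggested"),
}

def suggest_treatment(class_fractions: dict) -> str:
    """class_fractions: e.g. {"CIN3": 0.02, "CIN1": 0.3}, share of labelled pixels per class."""
    for name, (threshold, suggestion) in LESION_RULES.items():
        if class_fractions.get(name, 0.0) > threshold:
            return suggestion
    return "no treatment suggested"

print(suggest_treatment({"CIN3": 0.02}))  # -> treatment such as conization suggested
```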
  • the control unit 11 generates screen information including estimated information (step S37).
  • the control unit 11 transmits the generated screen information to the image processing device 22, and causes the display device 23 to display the screen 231 including the estimation information via the image processing device 22 (step S38).
  • the control unit 11 ends the series of processes.
  • Although the control unit 11 of the information processing device 1 has been described as executing the series of processes in FIG. 8, part or all of the processing may be executed by the control unit 221 of the image processing device 22 of the diagnostic imaging apparatus 2.
  • For example, the control unit 221 of the image diagnostic apparatus 2 may store the learning model 131 acquired from the information processing apparatus 1 in the auxiliary storage unit 223 and execute the estimation information acquisition process based on the learning model 131.
  • the estimated information is not limited to that displayed on the display device 23 via the image processing device 22 .
  • the control unit 11 may output the estimation information to a device (for example, a personal computer) other than the image processing device 22 and display it.
  • FIGS. 9 and 10 are schematic diagrams showing examples of the screen 231 displayed on the display device 23.
  • FIG. 9 is an example of a screen 231 containing a two-dimensional image, and FIG. 10 is an example of a screen 231 containing a three-dimensional image.
  • the control unit 221 of the image processing device 22 receives screen information including estimation information transmitted from the information processing device 1, and displays a screen 231 including estimation information as shown in FIG. 9 or 10 based on the received screen information. It is displayed on the display device 23 .
  • a screen 231 including a two-dimensional image includes, for example, a medical image display section 232, a two-dimensional image display section 233, a label display section 234, a treatment necessity display section 235, and a display switching button 236.
  • the medical image display unit 232 displays medical images received from the image diagnostic apparatus 2 .
  • the two-dimensional image display unit 233 displays a two-dimensional image in which estimation information indicated by a label corresponding to a lesion is superimposed on a medical image.
  • The two-dimensional image display unit 233 displays a plurality of two-dimensional images corresponding to the plurality of frames included in the medical image, together with an enlarged view of the two-dimensional image selected by the user from among them.
  • a medical image corresponding to the enlarged two-dimensional image may be displayed on the medical image display unit 232 .
  • the label display unit 234 displays the type of lesion indicated by the label and the display mode of the label in association with each other.
  • the treatment necessity display unit 235 displays the estimated result of the necessity of treatment in text or the like. When the process of step S36 is omitted, the estimated result of necessity of treatment is not displayed, and the doctor or the like may judge necessity of treatment based on other information.
  • The control unit 11 of the information processing apparatus 1 processes the label image output from the learning model 131 for each of the plurality of frames of medical images that are continuous along the depth direction of the cervix into a semi-transparent mask, and generates a guide image in which the mask is superimposed on the corresponding frame of the medical image. In this case, the control unit 11 changes the display mode of each lesion area according to the lesion type, for example by changing the display color of the mask according to the lesion type. Further, when other cell tissue is detected by the learning model 131, the control unit 11 masks only the boundary portions with the other pixels among the pixels indicating the other cell tissue, and omits the display of portions other than the boundary portions.
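  • As an illustration of the guide image generation, the following sketch alpha-blends a semi-transparent colour mask, one colour per lesion type, over a grayscale frame; the colour table and alpha value are assumptions.

```python
# Sketch of the guide image generation: the label image is turned into a
# semi-transparent colour mask (one colour per lesion type) and alpha-blended
# over the grayscale medical image frame. Colours and alpha are assumptions.
import numpy as np

CLASS_COLORS = {1: (255, 0, 0), 2: (255, 128, 0), 3: (255, 255, 0),  # CIN1-3 (assumed)
                4: (0, 128, 255), 5: (255, 0, 255)}                   # AIS, cancer (assumed)

def guide_image(frame: np.ndarray, label: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """frame: (H, W) uint8 grayscale; label: (H, W) class indices. Returns (H, W, 3) uint8."""
    rgb = np.stack([frame] * 3, axis=-1).astype(np.float32)
    for cls, color in CLASS_COLORS.items():
        m = label == cls
        rgb[m] = (1 - alpha) * rgb[m] + alpha * np.array(color, np.float32)
    return rgb.astype(np.uint8)
```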
  • the control unit 11 displays the medical image and the two-dimensional image in association with each other.
  • control unit 11 may notify the user of the information about the lesion by means of a warning sound, synthesized voice, screen blinking, etc., according to the type of the lesion. By outputting a warning sound or the like when a lesion is included in the medical image, the information can be reliably notified.
  • a screen 231 including a three-dimensional image includes, for example, a three-dimensional image display section 237, a label display section 234, a treatment necessity display section 235, and a display switching button 236.
  • The three-dimensional image display unit 237 superimposes and displays estimated information indicated by labels corresponding to lesions on the three-dimensional image of the uterine cervix. Configurations other than the three-dimensional image display unit 237 are the same as those in FIG. 9.
  • the control unit 11 of the information processing device 1 generates a three-dimensional image of the uterine cervix including the lesion by stacking a plurality of continuous label images (slice data) along the depth direction.
  • a three-dimensional image can be generated, for example, by the voxel method.
  • a three-dimensional image is represented by volume data represented by coordinate values of voxels in a predetermined coordinate system and voxel values indicating types of lesions.
  • the data format of the three-dimensional image is not particularly limited, and may be polygon data or point cloud data. Other tissue cells in the 3D image may display only the tissue surface as in the 2D image.
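  • A minimal sketch of building the voxel volume by stacking the label images (slice data) along the depth direction follows; rendering of the volume is left to a separate viewer.

```python
# Sketch of the three-dimensional image generation: the label images that are
# continuous along the depth direction are stacked into a voxel volume whose
# voxel value indicates the lesion type.
import numpy as np

def build_volume(label_images: list) -> np.ndarray:
    """label_images: list of (H, W) class-index arrays ordered along the depth direction."""
    volume = np.stack(label_images, axis=0)   # shape (depth, H, W), voxel value = lesion class
    return volume

# Example: 200 slices of 256x256 label images -> a (200, 256, 256) voxel volume.
volume = build_volume([np.zeros((256, 256), np.uint8) for _ in range(200)])
```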
  • The control unit 11 of the information processing device 1 generates information of the screen 231 including either the two-dimensional image or the three-dimensional image according to the operation of the display switching button 236 acquired via the image processing device 22, and outputs it to the image processing device 22.
  • the control unit 11 may output information on the screen 231 of both the two-dimensional image and the three-dimensional image, and switch the display on the image processing device 22 side.
  • The screen 231 may include both the two-dimensional image display section 233 and the three-dimensional image display section 237, and display the two-dimensional image and the three-dimensional image in parallel.
  • a doctor or the like can grasp the state of the lesion in the depth direction and the estimation information of the necessity of treatment, and make a diagnosis. For subjects judged to have high-grade dysplasia, treatment such as excision surgery is determined.
  • the learning model 131 is used to accurately estimate information about lesions including precancerous lesions based on medical images of the subject.
  • the estimation result is displayed in a visually recognizable form using a two-dimensional image and a three-dimensional image, and suitably assists diagnosis by a doctor or the like.
  • In the second embodiment, the learning model 131 is re-learned using medical images, which are tomographic images, and correction information for the estimated information.
  • the differences from the first embodiment will be mainly described, and the same reference numerals will be given to the configurations common to the first embodiment, and detailed description thereof will be omitted.
  • FIG. 11 is a flowchart showing an example of relearning processing of the learning model in the second embodiment.
  • the control unit 11 of the information processing device 1 acquires estimation information output from the learning model 131 (step S41).
  • the control unit 11 acquires correction information for the estimated information (step S42).
  • the control unit 11 may acquire the correction information by accepting input of correction information from a doctor or the like via the image processing device 22 .
  • The control unit 221 of the image processing device 22 accepts a correction input for correcting the position or type of each object displayed on the screen 231 illustrated in FIG. 9, and transmits the received correction information to the information processing device 1.
  • the control unit 11 re-learns using the correction information for the estimated information, and updates the learning model 131 (step S43). Specifically, the control unit 11 performs re-learning using the medical image input to the learning model 131 and correction information for the estimated information as training data, and updates the learning model 131 . That is, the control unit 11 optimizes parameters such as weights between nodes so that the estimated information output from the learning model 131 approximates the corrected estimated information, and regenerates the learning model 131 .
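  • As an illustration of the re-learning of step S43, the following sketch fine-tunes the model on the medical images and the corrected label images so that its output approximates the corrected estimation information; the optimizer, learning rate and batch size are assumptions.

```python
# Sketch of the re-learning of step S43: the medical images that were input to the
# model and the doctor-corrected label images are used as additional training data,
# and the parameters of the stored learning model are updated. Optimizer, learning
# rate and batch size are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def relearn(model, frames, corrected_labels, epochs=3, lr=1e-5):
    """frames: (N,1,H,W) float tensor; corrected_labels: (N,H,W) long tensor of class indices."""
    loader = DataLoader(TensorDataset(frames, corrected_labels), batch_size=4, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # small lr for fine-tuning
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)   # error against the corrected labels
            loss.backward()                 # error backpropagation
            optimizer.step()
    return model                            # updated (regenerated) learning model 131
```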
  • the learning model 131 can be further optimized through the operation of this information processing system.
  • 1 information processing device 11 control unit 12 main storage unit 13 auxiliary storage unit 14 communication unit 15 display unit 16 operation unit 13P program 131 learning model 132 training data DB 13A recording medium 2 diagnostic imaging device 21 detection device 211 insertion tube 212 drive device 213 probe section 214 connector section 215 shaft 216 sensor 22 image processing device 221 control section 222 main storage section 223 auxiliary storage section 224 communication section 225 input/output section 226 detection device control unit 23 display device 24 input device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Optics & Photonics (AREA)
  • Gynecology & Obstetrics (AREA)
  • Reproductive Health (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
PCT/JP2022/010319 2021-03-17 2022-03-09 Information processing method, learning model generation method, program, and information processing device WO2022196494A1 (ja)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023507028A 2021-03-17 2022-03-09 JPWO2022196494A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-043994 2021-03-17
JP2021043994 2021-03-17

Publications (1)

Publication Number Publication Date
WO2022196494A1 true WO2022196494A1 (ja) 2022-09-22

Family

ID=83320597

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/010319 WO2022196494A1 (ja) 2021-03-17 2022-03-09 Information processing method, learning model generation method, program, and information processing device

Country Status (2)

Country Link
JP (1) JPWO2022196494A1 (ja)
WO (1) WO2022196494A1 (ja)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006524553A (ja) * 2003-04-28 2006-11-02 Board of Regents, The University of Texas System Catheter imaging probe and method
US20140341449A1 (en) * 2011-09-23 2014-11-20 Hamid Reza TIZHOOSH Computer system and method for atlas-based consensual and consistent contouring of medical images
JP2016517288A (ja) * 2013-03-15 2016-06-16 Synaptive Medical (Barbados) Inc. Planning, navigation and simulation systems and methods for minimally invasive therapy
WO2016031273A1 (ja) * 2014-08-25 2016-03-03 オリンパス株式会社 Ultrasonic observation device, ultrasonic observation system, and method for operating ultrasonic observation device
JP2017217037A (ja) * 2016-06-02 2017-12-14 学校法人 埼玉医科大学 Applicator for intracavitary irradiation of uterine cancer, radiation treatment planning method for uterine cancer, and radiation treatment planning device for uterine cancer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GRUESSNER S. E. M.: "Intrauterine versus transvaginal sonography for benign and malignant disorders of the female reproductive tract : Sonographic evaluation of uterine and tubal abnormalities", ULTRASOUND IN OBSTETRICS AND GYNECOLOGY, JOHN WILEY & SONS LTD., GB, vol. 23, no. 4, 1 April 2004 (2004-04-01), GB , pages 382 - 387, XP055968177, ISSN: 0960-7692, DOI: 10.1002/uog.1014 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025032671A1 (ja) * 2023-08-07 2025-02-13 日本電気株式会社 Endoscopy support device, endoscopy support method, and recording medium

Also Published As

Publication number Publication date
JPWO2022196494A1 (ja) 2022-09-22

Similar Documents

Publication Publication Date Title
EP2965263B1 (en) Multimodal segmentation in intravascular images
JP5486432B2 (ja) Image processing apparatus, operating method thereof, and program
JP5100323B2 (ja) System for synchronizing corresponding landmarks among a plurality of images
US8805043B1 (en) System and method for creating and using intelligent databases for assisting in intima-media thickness (IMT)
JP4829217B2 (ja) Visualization of data sets
EP3199107B1 (en) Image display device, control method thereof, and detection method of radiopaque marker
EP4129197B1 (en) Computer program and information processing device
JP2011206168A (ja) Observation support system, method, and program
US20240013386A1 (en) Medical system, method for processing medical image, and medical image processing apparatus
WO2022196494A1 (ja) Information processing method, learning model generation method, program, and information processing device
JP2016522072A (ja) Lung tissue identification in anatomically intelligent echocardiography
JP2012081202A (ja) Medical image processing apparatus and control program
WO2021199961A1 (ja) Computer program, information processing method, and information processing device
US20230394672A1 (en) Medical image processing apparatus, medical image processing method, and program
JP4686279B2 (ja) Medical diagnostic apparatus and diagnosis support apparatus
US12283048B2 (en) Diagnosis support device, diagnosis support system, and diagnosis support method
WO2023054467A1 (ja) Model generation method, learning model, computer program, information processing method, and information processing device
KR101014562B1 (ko) Method of forming a virtual endoscopic image of the uterus
JP2019165923A (ja) Diagnosis support system and diagnosis support method
JP7609278B2 (ja) Image processing device, image processing method, and program
WO2022239529A1 (ja) Medical image processing device, medical image processing method, and program
US20240335093A1 (en) Medical support device, endoscope system, medical support method, and program
EP4400059A1 (en) Image processing device, image processing system, image display method, and image processing program
JP2023166228A (ja) Medical support device, medical support method, and program
WO2024202789A1 (ja) Medical support device, endoscope system, medical support method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22771245

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023507028

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22771245

Country of ref document: EP

Kind code of ref document: A1