WO2022071326A1 - Information processing device, learned model generation method and training data generation method - Google Patents
- Publication number
- WO2022071326A1 (PCT/JP2021/035668)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- catheter
- image
- medical device
- data
- position information
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/12—Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
Definitions
- the present invention relates to an information processing device, a trained model generation method, and a training data generation method.
- a catheter system is used in which an image acquisition catheter is inserted into a luminal organ such as a blood vessel to acquire an image (Patent Document 1).
- One aspect is to provide an information processing device or the like that supports understanding of an image acquired by an image acquisition catheter.
- the information processing apparatus includes an image acquisition unit that acquires a catheter image obtained by a radial scanning type image acquisition catheter, a medical device learned model that, when the catheter image is input, outputs first position information regarding the position of a medical device included in the catheter image, and a first position information output unit that inputs the acquired catheter image into the medical device learned model and outputs the first position information.
- FIG. 1 is an explanatory diagram illustrating an outline of the catheter system 10.
- the catheter system 10 of the present embodiment is used for IVR (Interventional Radiology) in which various organs are treated while performing fluoroscopy using an image diagnostic device such as an X-ray fluoroscope.
- the catheter system 10 includes an image acquisition catheter 40, an MDU (Motor Driving Unit) 33, and an information processing device 20.
- the image acquisition catheter 40 is connected to the information processing apparatus 20 via the MDU 33.
- a display device 31 and an input device 32 are connected to the information processing device 20.
- the input device 32 is an input device such as a keyboard, mouse, trackball or microphone.
- the display device 31 and the input device 32 may be integrally laminated to form a touch panel.
- the input device 32 and the information processing device 20 may be integrally configured.
- FIG. 2 is an explanatory diagram illustrating an outline of the image acquisition catheter 40.
- the image acquisition catheter 40 has a probe portion 41 and a connector portion 45 arranged at an end portion of the probe portion 41.
- the probe portion 41 is connected to the MDU 33 via the connector portion 45.
- the side of the image acquisition catheter 40 far from the connector portion 45 is referred to as the distal end side.
- the shaft 43 is inserted inside the probe portion 41.
- the sensor 42 is connected to the tip end side of the shaft 43.
- a guide wire lumen 46 is provided at the tip of the probe portion 41. After inserting the guide wire to a position beyond the target portion, the user guides the sensor 42 to the target portion by inserting the guide wire into the guide wire lumen 46.
- An annular tip marker 44 is fixed in the vicinity of the tip of the probe portion 41.
- the sensor 42 is, for example, an ultrasonic transducer that transmits and receives ultrasonic waves, or a transmission / reception unit for OCT (Optical Coherence Tomography) that irradiates near-infrared light and receives reflected light.
- In the present embodiment, a case where the image acquisition catheter 40 is an IVUS (Intravascular Ultrasound) catheter used for taking ultrasonic tomographic images from inside the circulatory organs will be described as an example.
- FIG. 3 is an explanatory diagram illustrating the configuration of the catheter system 10.
- the catheter system 10 includes an information processing device 20, an MDU 33, and an image acquisition catheter 40.
- the information processing device 20 includes a control unit 21, a main storage device 22, an auxiliary storage device 23, a communication unit 24, a display unit 25, an input unit 26, a catheter control unit 271, and a bus.
- the control unit 21 is an arithmetic control device that executes the program of the present embodiment.
- the control unit 21 is one or more CPUs (Central Processing Units), GPUs (Graphics Processing Units), TPUs (Tensor Processing Units), multi-core CPUs, or the like.
- the control unit 21 is connected to each hardware unit constituting the information processing apparatus 20 via a bus.
- the main storage device 22 is a storage device such as an SRAM (Static Random Access Memory), a DRAM (Dynamic Random Access Memory), or a flash memory.
- the main storage device 22 temporarily stores information necessary in the middle of processing performed by the control unit 21 and a program being executed by the control unit 21.
- the auxiliary storage device 23 is a storage device such as an SRAM, a flash memory, a hard disk, or a magnetic tape.
- the auxiliary storage device 23 stores a medical device learned model 611, a classification model 62, a program to be executed by the control unit 21, and various data necessary for executing the program.
- the communication unit 24 is an interface for communicating between the information processing device 20 and the network.
- the display unit 25 is an interface for connecting the display device 31 and the bus.
- the input unit 26 is an interface for connecting the input device 32 and the bus.
- the catheter control unit 271 performs control of the MDU 33, control of the sensor 42, generation of images based on signals received from the sensor 42, and the like.
- the MDU 33 rotates the sensor 42 and the shaft 43 inside the probe portion 41.
- the catheter control unit 271 generates one catheter image 51 (see FIG. 4) for each rotation of the sensor 42.
- the generated catheter image 51 is a transverse tomographic image centered on the probe portion 41 and substantially perpendicular to the probe portion 41.
- the MDU 33 can further advance and retreat the sensor 42 while rotating the sensor 42 and the shaft 43 inside the probe portion 41.
- the catheter control unit 271 continuously generates a plurality of catheter images 51 substantially perpendicular to the probe unit 41.
- the continuously generated catheter image 51 can be used to construct a three-dimensional image. Therefore, the image acquisition catheter 40 realizes the function of a three-dimensional scanning catheter that sequentially acquires a plurality of catheter images 51 along the longitudinal direction.
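- A minimal sketch of constructing such a three-dimensional image from the sequentially acquired catheter images 51 is shown below; this is a hedged illustration, and the frame shape and pull-back spacing are assumed values, not details of this publication.

```python
import numpy as np

def build_volume(frames, pullback_step_mm=0.5):
    """Stack sequentially acquired catheter images 51 into a 3D volume.

    frames: list of 2D arrays (one per sensor rotation), all the same shape.
    The slice spacing along the catheter's long axis equals the distance
    the sensor advances per rotation (assumed value here).
    """
    volume = np.stack(frames, axis=0)  # shape: (n_frames, height, width)
    return volume, pullback_step_mm
```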
- the advance / retreat operation of the sensor 42 includes both an operation of advancing / retreating the entire probe unit 41 and an operation of advancing / retreating the sensor 42 inside the probe unit 41.
- the advance / retreat operation may be automatically performed by the MDU 33 at a predetermined speed, or may be manually performed by the user.
- the image acquisition catheter 40 is not limited to the mechanical scanning method that mechanically rotates and advances and retreats. It may be an electronic radial scanning type image acquisition catheter 40 using a sensor 42 in which a plurality of ultrasonic transducers are arranged in an annular shape.
- using the image acquisition catheter 40, a catheter image 51 can be taken that depicts reflectors existing inside the circulatory organs, such as red blood cells, as well as organs located outside the circulatory organs, such as the respiratory organs and the digestive organs.
- the image acquisition catheter 40 is used for atrial septal puncture.
- In atrial septal puncture, a Brockenbrough needle is punctured through the fossa ovalis, which is a thin portion of the atrial septum, under ultrasound guidance.
- the tip of the Brockenbrough needle reaches the inside of the left atrium.
- the catheter image 51 depicts reflections from biological tissues constituting the circulatory organs, such as the atrial septum, right atrium, left atrium, and aorta, and from red blood cells contained in the blood flowing inside the circulatory organs.
- the Brockenbrough needle is also depicted in the catheter image 51.
- a user such as a doctor can safely perform atrial septal puncture by confirming the positional relationship between the fossa ovalis and the tip of the Brockenbrough needle using the catheter image 51.
- the Brockenbrough needle is an example of the medical device of the present embodiment.
- the application of the catheter system 10 is not limited to the atrial septal puncture.
- the catheter system 10 can be used for procedures such as transcatheter myocardial ablation, transcatheter valve replacement, and stent placement in coronary arteries.
- the site to be treated using the catheter system 10 is not limited to the area around the heart.
- the catheter system 10 can be used to treat various sites such as pancreatic ducts, bile ducts and blood vessels in the lower extremities.
- the control unit 21 may realize the function of the catheter control unit 271.
- the information processing apparatus 20 is connected, via a HIS (Hospital Information System) or the like, to various diagnostic imaging apparatuses 37 such as an X-ray angiography apparatus, an X-ray CT (Computed Tomography) apparatus, an MRI (Magnetic Resonance Imaging) apparatus, a PET (Positron Emission Tomography) apparatus, or an ultrasonic diagnostic apparatus.
- the information processing device 20 of the present embodiment is a dedicated ultrasonic diagnostic device, a personal computer, a tablet, a smartphone, or the like having the function of the ultrasonic diagnostic device.
- In the present embodiment, a case where the information processing device 20 is also used for training the learned models, such as the medical device learned model 611, and for creating training data will be described as an example.
- a computer or server different from the information processing apparatus 20 may be used for training the trained model and creating training data.
- In the present embodiment, a case where the control unit 21 performs the processing in software will be described; however, the processes described using the flowcharts and the various learned models may be implemented by dedicated hardware.
- FIG. 4 is an explanatory diagram illustrating an outline of the operation of the catheter system 10.
- a case where a plurality of catheter images 51 are taken while pulling the sensor 42 at a predetermined speed and the images are displayed in real time will be described as an example.
- the control unit 21 captures one catheter image 51 (step S501).
- the control unit 21 acquires the position information of the medical device depicted in the catheter image 51 (step S502).
- the “x” mark indicates the position of the medical device in the catheter image 51.
- the control unit 21 records the catheter image 51, the position of the catheter image 51 in the longitudinal direction of the image acquisition catheter 40, and the position information of the medical device, in association with one another, in the auxiliary storage device 23 or a large-capacity storage device connected to the HIS (step S503).
- the control unit 21 generates classification data 52 in which each portion constituting the catheter image 51 is classified according to the depicted subject (step S504).
- the classification data 52 is shown by a schematic diagram in which the catheter image 51 is painted separately based on the classification result.
- the control unit 21 determines whether the user has specified a two-dimensional display or a three-dimensional display (step S505). When it is determined that the user has specified the two-dimensional display (2D in step S505), the control unit 21 displays the catheter image 51 and the classification data 52 on the display device 31 by the two-dimensional display (step S506).
- In step S505 of FIG. 4, the description reads as if one of "two-dimensional display" and "three-dimensional display" is selected, as in "2D/3D". However, when the user selects "3D", the control unit 21 may display both the two-dimensional display and the three-dimensional display.
- When it is determined that the user has specified the three-dimensional display (3D in step S505), the control unit 21 determines whether or not the position information of the medical device sequentially recorded in step S503 is normal (step S511). When it is determined that the position information is not normal (NO in step S511), the control unit 21 corrects the position information (step S512). Details of the processes performed in steps S511 and S512 will be described later.
- When it is determined that the position information is normal (YES in step S511), or after the end of step S512, the control unit 21 performs a three-dimensional display illustrating the structure of the site under observation and the position of the medical device (step S513). As described above, the control unit 21 may display both the three-dimensional display and the two-dimensional display on one screen.
- the control unit 21 determines whether or not to end the acquisition of the catheter image 51 (step S507). For example, when an end instruction from the user is received, the control unit 21 determines that the process is to be ended.
- If it is determined that the process is not to be ended (NO in step S507), the control unit 21 returns to step S501. When it is determined to end the process (YES in step S507), the control unit 21 ends the process.
- In the above, a process flow in the case of performing two-dimensional display (step S506) or three-dimensional display (step S513) in real time while taking a series of catheter images 51 has been described.
- the control unit 21 may perform two-dimensional display or three-dimensional display in non-real time based on the data recorded in step S503.
- FIG. 5A is an explanatory diagram schematically showing the operation of the image acquisition catheter 40.
- FIG. 5B is an explanatory diagram schematically showing a catheter image 51 taken by an image acquisition catheter 40.
- FIG. 5C is an explanatory diagram schematically explaining the classification data 52 generated based on the catheter image 51.
- the RT (Radius-Theta) format and the XY format will be described with reference to FIGS. 5A to 5C.
- the catheter control unit 271 acquires radial scan line data centered on the image acquisition catheter 40, as schematically shown by eight arrows in FIG. 5A.
- the catheter control unit 271 can generate the catheter image 51 shown in FIG. 5B in two formats, the RT format catheter image 518 and the XY format catheter image 519, based on the scanning line data.
- the RT format catheter image 518 is an image generated by arranging the scan line data in parallel with each other.
- the lateral direction of the RT format catheter image 518 indicates the distance from the image acquisition catheter 40.
- the vertical direction of the RT format catheter image 518 indicates the scanning angle.
- one RT format catheter image 518 is formed by arranging, in parallel in order of scanning angle, the scan line data acquired while the sensor 42 rotates 360 degrees.
- the left side of the RT format catheter image 518 shows places near the image acquisition catheter 40, and the right side shows places far from the image acquisition catheter 40.
- the XY format catheter image 519 is an image generated by arranging and interpolating each scan line data in a radial pattern.
- the XY format catheter image 519 shows a tomographic image in which the subject is cut perpendicular to the image acquisition catheter 40 at the position of the sensor 42.
- FIG. 5C schematically shows classification data 52 classified for each depicted subject for each portion constituting the catheter image 51.
- the classification data 52 can also be displayed in two formats, RT format classification data 528 and XY format classification data 529. Since the image conversion method between the RT format and the XY format is known, the description thereof will be omitted.
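- Although the conversion itself is omitted above, a minimal polar-to-Cartesian resampling sketch is shown below; the output size and interpolation order are assumptions, not details of this publication.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rt_to_xy(rt_image, out_size=512):
    """Convert an RT format image (rows = scanning angle, columns = distance
    from the catheter, left edge nearest) into an XY format cross section."""
    n_angles, n_radii = rt_image.shape
    # Duplicate the first scan line at the bottom so the angle axis wraps.
    rt_wrapped = np.vstack([rt_image, rt_image[:1]])
    c = (out_size - 1) / 2.0
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx, dy = xs - c, ys - c
    r = np.sqrt(dx**2 + dy**2) * (n_radii - 1) / c                       # radius index
    theta = (np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi) * n_angles  # angle index
    # Bilinear interpolation; radii beyond the scan range clamp to the edge.
    return map_coordinates(rt_wrapped, [theta, r], order=1, mode="nearest")
```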
- the thick downward hatching indicates the biological tissue region such as the atrial wall and the ventricular wall that forms the cavity into which the image acquisition catheter 40 is inserted.
- the narrow left-down hatching indicates the inside of the first cavity, which is the blood flow region into which the tip portion of the image acquisition catheter 40 is inserted.
- the narrow downward-sloping hatch indicates the inside of the second cavity, which is a blood flow region other than the first cavity.
- the first cavity is the right atrium
- the second cavity is the left atrium, right ventricle, left ventricle, aorta, coronary artery, etc.
- In the following description, the inside of the first cavity is referred to as the first lumen region, and the inside of the second cavity is referred to as the second lumen region.
- the thick, downward-sloping hatch indicates the non-luminal region, that is, the portion of the non-living tissue region that is neither the first lumen region nor the second lumen region.
- the non-luminal region includes an extracardiac region, a region outside the heart structure, and the like.
- the inside of the left atrium is also included in the non-luminal region.
- lumens such as the left ventricle, pulmonary artery, pulmonary vein and aortic arch are also included in the non-luminal region if the distal wall cannot be adequately visualized.
- Black paint indicates the medical device region where medical devices such as the Brockenbrough needle are depicted.
- the biological tissue region and the non-biological tissue region may be collectively referred to as a biological tissue-related region.
- the medical device is not always inserted into the same first cavity as the image acquisition catheter 40. Depending on the procedure, a medical device may be inserted into the second cavity.
- the hatching and blackening shown in FIG. 5C are examples of modes in which each region can be distinguished. Each area is displayed on the display device 31 using, for example, different colors.
- the control unit 21 realizes the function of the first aspect output unit that outputs the first lumen region, the second lumen region, and the biological tissue region in a manner that can be distinguished from each other.
- the control unit 21 also realizes the function of the second aspect output unit that outputs the first lumen region, the second lumen region, the non-luminal region, and the biological tissue region in a manner that can be distinguished from each other.
- the display in XY format is suitable during the IVR procedure.
- In the XY format, the information in the vicinity of the image acquisition catheter 40 is compressed and the amount of data is reduced, while data that does not originally exist is added by interpolation at positions away from the image acquisition catheter 40. Therefore, when analyzing the catheter image 51, more accurate results can be obtained by using the RT format image than by using the XY format image.
- control unit 21 generates RT format classification data 528 based on the RT format catheter image 518.
- the control unit 21 converts the RT format catheter image 518 to generate the XY format catheter image 519, and converts the RT format classification data 528 to generate the XY format classification data 529.
- the classification data 52 will be described with specific examples.
- the "living tissue area label” is attached to the pixels classified into the “living tissue region”
- the "first lumen region label” is attached to the pixels classified into the “first lumen region”
- the "second lumen region” is attached to the pixels.
- “non-luminal area label” for pixels classified as “non-luminal area” for pixels classified as “non-luminal area”
- the “medical instrument area label” is recorded in the cell
- the "non-living tissue area label” is recorded in the pixels classified into the "non-living tissue area”.
- Each label is indicated by, for example, an integer.
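- For illustration, the label coding might look like the following sketch; the publication only states that each label is an integer, so the specific values are assumed.

```python
import numpy as np

# Assumed integer codes for the labels of the classification data 52.
LABELS = {
    "biological_tissue": 1,
    "first_lumen": 2,
    "second_lumen": 3,
    "non_luminal": 4,
    "medical_device": 5,
}

# One label per pixel, in the same shape as the RT format catheter image.
classification_data = np.zeros((512, 512), dtype=np.uint8)
classification_data[100:120, 30:40] = LABELS["medical_device"]
```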
- the control unit 21 may generate XY format classification data 529 based on the XY format catheter image 519.
- the control unit 21 may generate RT format classification data 528 based on the XY format classification data 529.
- FIG. 6 is an explanatory diagram illustrating the configuration of the medical device learned model 611.
- the medical device learned model 611 is a model that accepts the catheter image 51 and outputs the first position information regarding the position where the medical device is drawn.
- the medical device learned model 611 implements step S502 described with reference to FIG. 4.
- the output layer of the medical device learned model 611 functions as a first position information output unit that outputs the first position information.
- the input of the medical device learned model 611 is the RT format catheter image 518.
- the first position information is, for example, the probability that the medical device is depicted at each portion on the RT format catheter image 518.
- In FIG. 6, places where the probability that the medical device is depicted is high are shown by dark hatching, and places where the probability is low are shown without hatching.
- the medical device learned model 611 is generated by machine learning, for example, using a neural network structure of CNN (Convolutional Neural Network).
- Examples of CNNs that can be used to generate the medical device learned model 611 include R-CNN (Region Based Convolutional Neural Network), YOLO (You Only Look Once), U-Net, and GAN (Generative Adversarial Network).
- the medical device trained model 611 may be generated using a neural network structure other than CNN.
- the medical device learned model 611 may be a model that accepts a plurality of catheter images 51 acquired in time series and outputs the first position information with respect to the latest catheter image 51.
- a model that accepts time-series inputs such as RNN (Recurrent Neural Network) can be combined with the above-mentioned neural network structure to generate a medical device learned model 611.
- the RNN is, for example, LSTM (Long short-term memory).
- the medical device learned model 611 includes a memory unit that holds information about the catheter image 51 previously input.
- the medical device learned model 611 outputs the first position information based on the information held in the memory unit and the latest catheter image 51.
- When using a plurality of catheter images 51 acquired in time series, the medical device learned model 611 may include a recursive input unit that inputs the output based on a previously input catheter image 51 together with the next catheter image 51.
- the medical device learned model 611 outputs the first position information based on the latest catheter image 51 and the input from the recursive input unit.
- the medical device learned model 611 may output the place where the probability that the medical device is depicted is high as the position of one pixel on the input catheter image 51.
- the medical device learned model 611 may be a model that calculates the probability that the medical device is depicted for each portion on the catheter image 51 as shown in FIG. 6 and then outputs the position of the pixel with the highest probability.
- the medical device learned model 611 may output the position of the center of gravity of the region where the probability that the medical device is drawn exceeds a predetermined threshold value.
- the medical device learned model 611 may output a region where the probability that the medical device is drawn exceeds a predetermined threshold.
- the medical device learned model 611 may be a model that outputs the first position information of each of a plurality of medical devices.
- Alternatively, the medical device learned model 611 may be a model that outputs only the first position information of one medical device.
- In the latter case, the control unit 21 can input into the medical device learned model 611 an RT format catheter image 518 in which the periphery of the first position information output from the medical device learned model 611 is masked, and thereby acquire the first position information of a second medical device. By repeating the same process, the control unit 21 can also acquire the first position information of the third and subsequent medical devices. A hedged sketch of this iterative masking is shown below.
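```python
import numpy as np

def detect_multiple_devices(model, rt_image, max_devices=3,
                            mask_radius=20, threshold=0.5):
    """Iteratively locate several medical devices with a single-device model.

    `model` is assumed to be a callable mapping an RT format image to a
    per-pixel probability map of the same shape; the mask radius and
    detection threshold are assumed values.
    """
    image = rt_image.astype(np.float32).copy()
    positions = []
    for _ in range(max_devices):
        prob = model(image)
        if prob.max() < threshold:
            break                                   # no further device found
        row, col = np.unravel_index(np.argmax(prob), prob.shape)
        positions.append((row, col))                # (angle index, radius index)
        # Mask the periphery of the detected position and run the model again.
        r0, c0 = max(0, row - mask_radius), max(0, col - mask_radius)
        image[r0:row + mask_radius, c0:col + mask_radius] = 0.0
    return positions
```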
- FIG. 7 is an explanatory diagram illustrating the configuration of the classification model 62.
- the classification model 62 is a model that accepts the catheter image 51 and outputs the classification data 52 that classifies each portion constituting the catheter image 51 according to the drawn subject.
- the classification model 62 implements step S504 described with reference to FIG. 4.
- the classification model 62 classifies each pixel constituting the input RT format catheter image 518 into, for example, the "biological tissue region", the "first lumen region", the "second lumen region", the "non-luminal region", and the "medical device region", and outputs RT format classification data 528 in which the position of each pixel is associated with a label indicating the classification result.
- the classification model 62 may divide the catheter image 51 into regions of arbitrary size such as, for example, 3 vertical pixels and 3 horizontal pixels for a total of 9 pixels, and output classification data 52 in which each region is classified.
- the classification model 62 is a trained model that performs semantic segmentation on, for example, the catheter image 51. A specific example of the classification model 62 will be described later.
- FIG. 8 is an explanatory diagram illustrating an outline of processing related to location information.
- a plurality of catheter images 51 are taken while moving the sensor 42 in the longitudinal direction of the image acquisition catheter 40.
- the line drawing of the substantially truncated cone schematically shows a biological tissue region constructed three-dimensionally based on a plurality of catheter images 51.
- the interior of the substantially truncated cone means the first lumen region.
- White circles and black circles indicate the positions of medical devices obtained from the respective catheter images 51.
- the black circle is located far away from the adjacent white circles and is therefore determined to be an erroneous detection.
- the shape of the medical device can be reproduced by the thick line that smoothly connects the white circles.
- the x mark indicates complementary information that complements the position information of the medical device that has not been detected.
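- The outlier rejection and complementation illustrated in FIG. 8 could be sketched as follows; the jump threshold and the use of linear interpolation are assumptions for illustration.

```python
import numpy as np

def clean_device_track(points, max_jump=10.0):
    """Reject implausible detections and fill in missed frames.

    points: one (x, y) tuple per catheter image 51, or None where the
    medical device was not detected. A detection that jumps farther than
    max_jump from the previous accepted point (the black circle in FIG. 8)
    is discarded; gaps (the x marks) are filled by linear interpolation.
    """
    pts = [None if p is None else np.asarray(p, dtype=float) for p in points]
    last = None
    for i, p in enumerate(pts):
        if p is not None and last is not None and np.linalg.norm(p - last) > max_jump:
            pts[i] = None                     # treat as an erroneous detection
        if pts[i] is not None:
            last = pts[i]
    idx = [i for i, p in enumerate(pts) if p is not None]
    if not idx:
        return pts                            # nothing detected at all
    xy = np.array([pts[i] for i in idx])
    return [np.array([np.interp(i, idx, xy[:, 0]), np.interp(i, idx, xy[:, 1])])
            for i in range(len(pts))]
```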
- When the medical device is in contact with the biological tissue region, it is known that it may be difficult even for a user such as a skilled doctor or laboratory technician to determine where the medical device is depicted when interpreting one catheter image 51 as a still image. However, when observing the catheter images 51 as a moving image, the user can determine the position of the medical device relatively easily. This is because the user interprets each image expecting the medical device to be near its position in the previous frame.
- In the present embodiment, the shape of the medical device is reconstructed without contradiction by using the position information of the medical device acquired from each of the plurality of catheter images 51.
- This provides a catheter system 10 that determines the position of the medical device as accurately as a user observing a moving image, and displays the shape of the medical device in a three-dimensional image.
- the display of step S506 and step S513 can provide a catheter system 10 that supports understanding of the catheter image 51 acquired by using the image acquisition catheter 40.
- the user can accurately grasp the position of the medical device and can safely perform IVR.
- the present embodiment relates to a method for generating a medical device learned model 611.
- the description of the parts common to the first embodiment will be omitted.
- a case where the medical device learned model 611 is generated by using the information processing apparatus 20 described with reference to FIG. 3 will be described as an example.
- the medical device learned model 611 may be created by using a computer or the like different from the information processing device 20.
- the medical device learned model 611 for which machine learning has been completed may be copied to the auxiliary storage device 23 via a network.
- In this way, the medical device learned model 611 trained on one piece of hardware can be used by a plurality of information processing devices 20.
- FIG. 9 is an explanatory diagram illustrating the record layout of the medical device position training data DB (Database) 71.
- the medical device position training data DB 71 is a database in which the catheter image 51 and the position information of the medical device are recorded in association with each other, and is used for training the medical device learned model 611 by machine learning.
- the medical device position training data DB 71 has a catheter image field and a position information field.
- In the catheter image field, a catheter image 51 such as the RT format catheter image 518 is recorded.
- Instead of the catheter image 51, so-called sound line data indicating the ultrasonic signals received by the sensor 42 may be recorded in the catheter image field.
- Scan line data generated based on the sound line data may also be recorded in the catheter image field.
- In the position information field, the position information of the medical device depicted in the catheter image 51 is recorded.
- the position information is information indicating the position of one pixel marked by the labeler on the catheter image 51, for example, as will be described later.
- the position information may be information indicating a region of a circle centered on the vicinity of the point marked by the labeler on the catheter image 51.
- the circle has dimensions that do not exceed the size of the medical device depicted in the catheter image 51.
- For example, the circle has a size inscribed in a square of 50 pixels or less in both height and width.
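- A sketch of such a disk-shaped position label is shown below; the default radius follows the 50-pixel square bound mentioned above, and everything else is an assumption.

```python
import numpy as np

def circular_label(shape, center, radius=25):
    """Position information as a small disk around the marked pixel.

    radius=25 keeps the disk inscribed in a 50 x 50 pixel square, the
    upper bound mentioned in the text; center is (row, column).
    """
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    mask = (rows - center[0])**2 + (cols - center[1])**2 <= radius**2
    return mask.astype(np.uint8)
```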
- FIG. 10 is an example of a screen used for creating the medical device position training data DB 71.
- a set of catheter images 51 of an RT format catheter image 518 and an XY format catheter image 519 is displayed.
- the RT format catheter image 518 and the XY format catheter image 519 are images created based on the same sound line data.
- the control button area 782 is displayed below the catheter image 51. At the upper part of the control button area 782, a frame number of the catheter image 51 being displayed and a jump button used when the user inputs an arbitrary frame number to jump the display are arranged.
- buttons used by the user to perform operations such as fast forward, rewind, and frame advance are arranged below the frame number and the like. Since these buttons are the same as those generally used in various image reproduction devices and the like, the description thereof will be omitted.
- the user of the present embodiment is a person in charge of creating training data by looking at the catheter image 51 recorded in advance and labeling the position of the medical device.
- the person in charge of creating training data is referred to as a labeler.
- a labeler is a physician, laboratory technician, or trained to perform accurate labeling, who is proficient in interpreting catheter images 51. Further, in the following description, the work of the labeler marking the catheter image 51 for labeling may be described as marking.
- the labeler observes the displayed catheter image 51 and determines the position where the medical device is visualized. Generally, the area where the medical device is visualized is very small with respect to the total area of the catheter image 51.
- the labeler moves the cursor 781 to substantially the center of the area where the medical device is drawn, and marks by a click operation or the like.
- the display device 31 is a touch panel
- the labeler may perform marking by a tap operation using a finger, a stylus pen, or the like.
- the labeler may perform marking by a so-called flick operation.
- the labeler may mark the catheter image 51 of either the RT format catheter image 518 or the XY format catheter image 519.
- the control unit 21 may display a mark at the corresponding position of the other catheter image 51.
- the control unit 21 creates a new record in the medical device position training data DB 71, and records the catheter image 51 and the position marked by the labeler in association with each other.
- the control unit 21 displays the next catheter image 51 on the display device 31.
- the labeler can sequentially mark a plurality of catheter images 51 by simply performing a click operation or the like on the catheter image 51 without operating each button of the control button area 782.
- the labeler performs only one click operation or the like on one catheter image 51 in which one medical device is depicted.
- a plurality of medical devices may be visualized on the catheter image 51.
- the labeler can mark each medical device with a single click operation or the like.
- a case where one medical device is depicted on one catheter image 51 will be described as an example.
- FIG. 11 is a flowchart illustrating the flow of processing of the program for creating the medical device position training data DB 71.
- the case where the medical device position training data DB 71 is created by using the information processing device 20 will be described as an example.
- the program of FIG. 11 may be executed by hardware other than the information processing apparatus 20.
- a large number of catheter images 51 are recorded in the auxiliary storage device 23 or an external large-capacity storage device.
- A case where the catheter images 51 are recorded in the auxiliary storage device 23 in the form of moving image data including a plurality of RT format catheter images 518 taken in time series will be described as an example.
- the control unit 21 acquires a 1-frame RT format catheter image 518 from the auxiliary storage device 23 (step S671).
- the control unit 21 converts the RT format catheter image 518 to generate the XY format catheter image 519 (step S672).
- the control unit 21 displays the screen described with reference to FIG. 10 on the display device 31 (step S673).
- the control unit 21 accepts a position information input operation by the labeler via the input device 32 (step S674).
- the input operation is a click operation or a tap operation on the RT format catheter image 518 or the XY format catheter image 519.
- the control unit 21 displays a mark such as a small circle at the position where the input operation is received (step S675). Since the reception of an input operation on an image displayed on the display device 31 via the input device 32 and the display of a mark on the display device 31 are conventionally used user interfaces, the details thereof will be omitted.
- the control unit 21 determines whether or not the image for which the input operation was received in step S674 is the RT format catheter image 518 (step S676). When it is determined that the image is the RT format catheter image 518 (YES in step S676), the control unit 21 also displays a mark at the corresponding position of the XY format catheter image 519 (step S677). When it is determined that the image is not the RT format catheter image 518 (NO in step S676), the control unit 21 also displays a mark at the corresponding position of the RT format catheter image 518 (step S678).
- the control unit 21 creates a new record in the medical device position training data DB 71.
- the control unit 21 associates the catheter image 51 with the position information input by the labeler and records it in the medical device position training data DB 71 (step S679).
- the catheter image 51 recorded in step S679 may be only the RT format catheter image 518 acquired in step S671, or both the RT format catheter image 518 and the XY format catheter image 519 generated in step S672.
- the catheter image 51 recorded in step S679 may be the sound line data for one rotation received by the sensor 42 or the scan line data generated by signal processing the sound line data.
- the position information recorded in step S679 is information indicating the position of one pixel on the RT format catheter image 518, which corresponds to the position where the labeler performs a click operation or the like using the input device 32, for example.
- the position information may be information indicating the position where the labeler has performed a click operation or the like and the range around the position.
- the control unit 21 determines whether or not to end the process (step S680). For example, when the processing of the catheter image 51 recorded in the auxiliary storage device 23 is completed, the control unit 21 determines that the processing is completed. If it is determined to end (YES in step S680), the control unit 21 ends the process.
- If it is determined that the process is not to be ended (NO in step S680), the control unit 21 returns to step S671.
- the control unit 21 acquires the next RT format catheter image 518 in step S671 and executes the processing of step S672 or less. That is, the control unit 21 automatically acquires and displays the next RT format catheter image 518 without waiting for the operation of the button displayed in the control button area 782.
- In this way, the control unit 21 records training data based on the large number of RT format catheter images 518 recorded in the auxiliary storage device 23 into the medical device position training data DB 71.
- control unit 21 may display, for example, a "save button” on the screen described with reference to FIG. 10, and execute step S679 when the selection of the "save button" is accepted. Further, the control unit 21 displays, for example, an "AUTO button” on the screen described using FIG. 10, and automatically automatically without waiting for the selection of the "save button” while accepting the selection of the "AUTO button”. Step S679 may be executed.
- In the following description, a case where the catheter image 51 recorded in the medical device position training data DB 71 in step S679 is the RT format catheter image 518 and the position information is the position of one pixel on the RT format catheter image 518 will be described as an example.
- FIG. 12 is a flowchart illustrating the processing flow of the medical device learned model 611 generation program.
- an unlearned model combining, for example, a convolutional layer, a pooling layer, and a fully connected layer is prepared.
- the unlearned model is, for example, a CNN model.
- Examples of CNNs that can be used to generate the medical device trained model 611 include R-CNN, YOLO, U-Net, GAN and the like.
- the medical device trained model 611 may be generated using a neural network structure other than CNN.
- the control unit 21 acquires a training record used for training of one epoch from the medical device position training data DB 71 (step S571).
- the training record recorded in the medical device position training data DB 71 is a combination of the RT format catheter image 518 and the coordinates indicating the position of the medical device depicted in the RT format catheter image 518.
- the control unit 21 adjusts the model parameters so that, when the RT format catheter image 518 is input to the input layer of the model, the position of the pixel corresponding to the position information is output from the output layer (step S572).
- the program may appropriately cause the control unit 21 to execute functions such as accepting corrections by the user, presenting the basis for a judgment, and additional learning.
- the control unit 21 determines whether or not to end the process (step S573). For example, the control unit 21 determines that the process is completed when the learning of a predetermined number of epochs is completed.
- the control unit 21 may acquire test data from the medical device position training data DB 71 and input it to the model being machine-learned, and may determine that the process ends when an output with a predetermined accuracy is obtained.
- If it is determined that the process is not to be ended (NO in step S573), the control unit 21 returns to step S571.
- the control unit 21 records the parameters of the trained medical device learned model 611 in the auxiliary storage device 23 (step S574). After that, the control unit 21 ends the process.
- the medical device learned model 611 that receives the catheter image 51 and outputs the first position information is generated.
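- A hedged sketch of the training loop in steps S571 to S574 follows, written with PyTorch; the dataset object, loss function, optimizer, and file name are assumptions, since the publication does not specify them.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model, device_position_dataset, n_epochs=50, lr=1e-3):
    """Train a per-pixel device detector from (rt_image, target_map) pairs
    built from the medical device position training data DB 71."""
    loader = DataLoader(device_position_dataset, batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()          # per-pixel "device here?" target
    for epoch in range(n_epochs):             # step S573: end after fixed epochs
        for rt_image, target_map in loader:   # step S571: one epoch of records
            optimizer.zero_grad()
            pred = model(rt_image)            # step S572: adjust parameters so
            loss = loss_fn(pred, target_map)  # the marked position is output
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), "medical_device_model.pt")  # step S574
    return model
```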
- a model that accepts time-series input such as RNN may be prepared.
- the RNN is, for example, an LSTM.
- In step S572, when a plurality of RT format catheter images 518 taken in time series are input to the input layer of the model, the control unit 21 adjusts the model parameters so that the position of the pixel corresponding to the position information associated with the last RT format catheter image 518 in the time series is output from the output layer.
- FIG. 13 is a flowchart illustrating a processing flow of a program for adding data to the medical device position training data DB 71.
- the program of FIG. 13 is a program for adding training data to the medical device position training data DB 71 after creating the medical device learned model 611.
- the added training data is used for additional training of the medical device trained model 611.
- a large number of catheter images 51 that have not yet been used for creating the medical device position training data DB 71 are recorded in the auxiliary storage device 23 or an external large-capacity storage device.
- A case where the catheter images 51 are recorded in the auxiliary storage device 23 in the form of moving image data including a plurality of RT format catheter images 518 taken in time series will be described as an example.
- the control unit 21 acquires a 1-frame RT format catheter image 518 from the auxiliary storage device 23 (step S701).
- the control unit 21 inputs the RT format catheter image 518 into the medical device learned model 611 and acquires the first position information (step S702).
- the control unit 21 converts the RT format catheter image 518 to generate the XY format catheter image 519 (step S703).
- the control unit 21 displays the screen described with reference to FIG. 10 on the display device 31 in a state where a mark indicating the first position information acquired in step S702 is superimposed on each of the RT format catheter image 518 and the XY format catheter image 519 (step S704).
- When the labeler determines that the position of the automatically displayed mark is inappropriate, the labeler inputs the correct position of the medical device by one click operation or the like. That is, the labeler inputs a correction instruction for the automatically displayed mark.
- the control unit 21 determines whether or not the input operation via the input device 32 by the labeler has been accepted within a predetermined time (step S705). It is desirable that the labeler can appropriately set the predetermined time.
- the input operation is a click operation or a tap operation on the RT format catheter image 518 or the XY format catheter image 519.
- the control unit 21 displays a mark such as a small circle at the position where the input operation is accepted (step S706). It is desirable that the mark displayed in step S706 has a different color, a different shape, or the like from the mark indicating the position information acquired in step S702.
- the control unit 21 may erase the mark indicating the position information acquired in step S702.
- the control unit 21 determines whether or not the image for which the input operation was received in step S705 is the RT format catheter image 518 (step S707). When it is determined that the image is the RT format catheter image 518 (YES in step S707), the control unit 21 also displays a mark at the corresponding position of the XY format catheter image 519 (step S708). When it is determined that the image is not the RT format catheter image 518 (NO in step S707), the control unit 21 also displays a mark at the corresponding position of the RT format catheter image 518 (step S709).
- the control unit 21 creates a new record in the medical device position training data DB 71.
- the control unit 21 records the correction data in which the catheter image 51 and the position information input by the labeler are associated with each other in the medical device position training data DB 71 (step S710).
- If it is determined that no input operation has been accepted (NO in step S705), the control unit 21 creates a new record in the medical device position training data DB 71.
- the control unit 21 records uncorrected data in which the catheter image 51 and the first position information acquired in step S702 are associated with each other in the medical device position training data DB 71 (step S711).
- the control unit 21 determines whether or not to end the process (step S712). For example, when the processing of the catheter images 51 recorded in the auxiliary storage device 23 is completed, the control unit 21 determines that the process is to be ended. If it is determined to end (YES in step S712), the control unit 21 ends the process.
- If it is determined that the process is not to be ended (NO in step S712), the control unit 21 returns to step S701.
- the control unit 21 acquires the next RT format catheter image 518 in step S701 and executes the processing of step S702 or less.
- the control unit 21 adds training data based on a large number of RT format catheter images 518 recorded in the auxiliary storage device 23 to the medical device position training data DB 71.
- control unit 21 may display, for example, an "OK button” for approving the output by the medical device learned model 611 on the screen described using FIG.
- the control unit 21 determines in step S705 that the instruction to the effect of "NO” has been accepted, and executes step S711.
- the labeler can mark one medical device drawn on the catheter image 51 with only one operation such as one click operation or one tap operation.
- the control unit 21 may accept an operation of marking one medical device by a so-called double-click operation or double-tap operation. Compared to the case of marking the boundary line of a medical device, the marking work can be significantly reduced, so that the burden on the labeler can be reduced. According to this embodiment, a lot of training data can be created in a short time.
- the labeler when a plurality of medical devices are drawn on the catheter image 51, the labeler can mark each medical device with a single click operation or the like.
- control unit 21 may display, for example, an "OK button” on the screen described with reference to FIG. 10, and execute step S679 when the selection of the "OK button" is accepted.
- the medical device position training data DB 71 may have a field for recording the type of medical device.
- In that case, the control unit 21 receives an input of the type of the medical device, such as a "Brockenbrough needle", a "guide wire", or a "balloon catheter".
- a medical device learned model 611 that outputs the type of the medical device in addition to the position of the medical device is generated.
- This embodiment relates to a catheter system 10 that uses two trained models to acquire second position information about the position of a medical device from a catheter image 51.
- the description of the parts common to the second embodiment will be omitted.
- FIG. 14 is an explanatory diagram illustrating the depiction of the medical device.
- the medical devices depicted in the RT format catheter image 518 and the XY format catheter image 519 are highlighted.
- In the RT format catheter image 518, the acoustic shadow is depicted as a straight band extending in the horizontal direction.
- In the XY format catheter image 519, the acoustic shadow is depicted in a fan shape.
- a high-luminance region is visualized in a portion closer to the image acquisition catheter 40 than the acoustic shadow.
- the high-luminance region may be visualized in the form of so-called multiple echo, which repeats regularly along the scanning line direction.
- Therefore, the scanning angle at which the medical device is depicted can be determined based on the features of the RT format catheter image 518 in the scanning angle direction, that is, the features visible along the lateral direction in FIG. 14.
- FIG. 15 is an explanatory diagram illustrating the configuration of the angle-learned model 612.
- the angle-learned model 612 is a model that accepts the catheter image 51 and outputs scanning angle information regarding the scanning angle on which the medical device is drawn.
- FIG. 15 schematically shows an angle learned model 612 that accepts the RT format catheter image 518 and outputs scanning angle information indicating the probability that the medical device is depicted at each scanning angle, that is, at each position in the vertical direction of the RT format catheter image 518. Since the medical device is depicted over a plurality of scanning angles, the probabilities in the output scanning angle information total more than 100 percent.
- the angle-learned model 612 may extract and output an angle at which the probability that the medical device is drawn is high.
- the angle learned model 612 is generated by machine learning. Training data for generating the angle learned model 612 can be obtained by extracting the scanning angle component of the position information from the position information field of the medical device position training data DB 71 described with reference to FIG. 9.
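- One way to derive such training targets is sketched below; the soft (Gaussian) labeling around the marked angle is an assumed design choice, not something stated in this publication.

```python
import numpy as np

def angle_target(position, n_angles=360, sigma=5.0):
    """Per-angle training target for the angle learned model 612.

    position: (angle index, radius index) taken from the position
    information field; only the angle component is used.
    """
    theta_idx, _ = position
    angles = np.arange(n_angles)
    # Angular distance that wraps around the 360-degree scanning range.
    d = np.minimum(np.abs(angles - theta_idx), n_angles - np.abs(angles - theta_idx))
    return np.exp(-0.5 * (d / sigma) ** 2)
```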
- an unlearned model such as a CNN that combines a convolutional layer, a pooling layer, and a fully connected layer is prepared.
- the program of FIG. 12 adjusts each parameter of the prepared model and performs machine learning.
- the control unit 21 acquires a training record used for training of one epoch from the medical device position training data DB 71 (step S571).
- the training record recorded in the medical device position training data DB 71 is a combination of the RT format catheter image 518 and the coordinates indicating the position of the medical device depicted in the RT format catheter image 518.
- the control unit 21 adjusts the model parameters so that when the RT format catheter image 518 is input to the input layer of the model, the scanning angle corresponding to the position information is output from the output layer (step S572).
- the program may appropriately cause the control unit 21 to execute functions such as accepting corrections by the user, presenting the basis for a judgment, and additional learning.
- the control unit 21 determines whether or not to end the process (step S573). For example, the control unit 21 determines that the process is completed when the learning of a predetermined number of epochs is completed.
- the control unit 21 may acquire test data from the medical device position training data DB 71 and input it to the model being machine-learned, and may determine that the process ends when an output with a predetermined accuracy is obtained.
- If it is determined that the process is not to be ended (NO in step S573), the control unit 21 returns to step S571.
- the control unit 21 records the parameters of the trained angle learned model 612 in the auxiliary storage device 23 (step S574). After that, the control unit 21 ends the process.
- an angle-learned model 612 that receives the catheter image 51 and outputs information regarding the scanning angle is generated.
- a model that accepts time-series input such as RNN may be prepared.
- the RNN is, for example, an LSTM.
- In step S572, when a plurality of RT format catheter images 518 taken in time series are input to the input layer of the model, the control unit 21 adjusts the model parameters so that the scanning angle information associated with the last RT format catheter image 518 in the time series is output from the output layer.
- the control unit 21 may instead determine the scanning angle at which the medical device is depicted by pattern matching.
- FIG. 16 is an explanatory diagram illustrating the position information model 619.
- the position information model 619 is a model that accepts the RT format catheter image 518 and outputs the second position information indicating the position of the drawn medical device.
- the position information model 619 includes a medical device learned model 611, an angle learned model 612, and a position information synthesis unit 615.
- the same RT format catheter image 518 is input to both the medical device trained model 611 and the angle trained model 612.
- the first position information is output from the medical device learned model 611.
- the first position information is the probability that the medical device is visualized at each site on the RT format catheter image 518.
- In the following description, the probability that the medical device is visualized at the position where the distance from the center of the image acquisition catheter 40 is r and the scanning angle is θ is denoted by P1(r, θ).
- Scanning angle information is output from the angle-learned model 612.
- the scanning angle information is the probability that the medical device is depicted at each scanning angle. In the following description, the probability that the medical device is depicted in the direction of the scanning angle θ is denoted by Pt(θ).
- the first position information and the scanning angle information are combined by the position information synthesizing unit 615 to generate the second position information.
- the second position information is the probability that the medical device is visualized at each site on the RT format catheter image 518, similarly to the first position information.
- the input end of the position information synthesis unit 615 fulfills the functions of the first position information acquisition unit and the scanning angle information acquisition unit.
- the second position information P2(r, θ) at the position where the distance from the center of the image acquisition catheter 40 is r and the scanning angle is θ is calculated by, for example, Eq. (1-1).
- k is a coefficient relating to the weighting between the first position information and the scanning angle information.
- the second position information P2(r, θ) may be calculated by Eq. (1-2).
- the second position information P2(r, θ) may be calculated by Eq. (1-3).
- Equation (1-3) is an equation for calculating the average value of the first position information and the scanning angle information.
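- The equation images are not reproduced in this text. Based on the surrounding description, Eq. (1-3) is the average of the two quantities; the LaTeX block below shows that equation together with plausible weighted and multiplicative forms for Eqs. (1-1) and (1-2), which are assumptions rather than the publication's exact formulas.

```latex
% (1-3) is stated to be the average; (1-1) and (1-2) are assumed forms.
\begin{align}
P2(r,\theta) &= k\,P1(r,\theta) + (1-k)\,Pt(\theta) \tag{1-1, assumed} \\
P2(r,\theta) &= P1(r,\theta)\,Pt(\theta)             \tag{1-2, assumed} \\
P2(r,\theta) &= \frac{P1(r,\theta) + Pt(\theta)}{2}  \tag{1-3}
\end{align}
```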
- the second position information P2(r, θ) in Eqs. (1-1) to (1-3) is not a probability but a numerical value that relatively indicates how likely it is that the medical device is depicted. By synthesizing the first position information and the scanning angle information, the accuracy in the scanning angle direction is improved.
- the second position information may be information about the position where the value of P2(r, θ) is the largest.
- the second position information may be determined by a function other than the equations exemplified by the equations (1-1) to (1-3).
- the second position information is an example of the position information of the medical device acquired in step S502 described with reference to FIG. 4.
- the medical device learned model 611, the angle learned model 612, and the position information synthesis unit 615 cooperate to realize step S502 described with reference to FIG. 4.
- the output end of the position information synthesis unit 615 functions as a second position information output unit that outputs the second position information based on the first position information and the scanning angle information.
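- A hedged sketch of the position information synthesis unit 615 follows, using the weighted form assumed above; the weight k and the argmax read-out are illustrative choices.

```python
import numpy as np

def synthesize_position(p1, pt, k=0.5):
    """Combine first position information and scanning angle information.

    p1: first position information, shape (n_angles, n_radii), RT format.
    pt: scanning angle information, shape (n_angles,).
    Returns the second position information map and the most likely
    (angle index, radius index) of the medical device.
    """
    p2 = k * p1 + (1 - k) * pt[:, np.newaxis]  # broadcast Pt over the radius axis
    row, col = np.unravel_index(np.argmax(p2), p2.shape)
    return p2, (row, col)
```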
- FIG. 17 is a flowchart illustrating a processing flow of the program of the third embodiment.
- the flowchart described with reference to FIG. 17 shows the details of the process of step S502 described with reference to FIG. 4.
- the control unit 21 acquires a 1-frame RT format catheter image 518 (step S541).
- the control unit 21 inputs the RT format catheter image 518 into the medical device learned model 611 and acquires the first position information (step S542).
- the control unit 21 inputs the RT format catheter image 518 into the angle-learned model 612 and acquires scanning angle information (step S543).
- the control unit 21 calculates the second position information based on, for example, Eq. (1-1) or Eq. (1-2) (step S544), after which it ends the process. The second position information calculated in step S544 is used as the position information in step S502.
- According to the present embodiment, it is possible to provide a catheter system 10 that accurately calculates the position information of the medical device depicted in the catheter image 51.
- the present embodiment relates to a specific example of the classification model 62 described with reference to FIG.
- FIG. 18 is an explanatory diagram illustrating the configuration of the classification model 62.
- the classification model 62 includes a first classification trained model 621 and a classification data conversion unit 629.
- the first classification trained model 621 is a model that accepts the RT format catheter image 518, classifies each portion constituting the RT format catheter image 518 into a "living tissue region", a "non-living tissue region", and a "medical device region", and outputs the first classification data 521.
- the first classification trained model 621 further outputs the reliability of the classification result for each part, that is, the probability that the classification result is correct.
- the output layer of the first classification trained model 621 fulfills the function of the first classification data output unit that outputs the first classification data 521.
- the upper right figure of FIG. 18 schematically shows the first classification data 521 in RT format.
- Thick, downward-sloping hatches indicate living tissue regions such as the atrial and ventricular walls.
- Black paint indicates the medical device region where medical devices such as Brockenbrough needles are depicted.
- Lattice hatches indicate non-living tissue areas that are neither medical device areas nor living tissue areas.
- the first classification data 521 is converted into classification data 52 by the classification data conversion unit 629.
- the lower right figure of FIG. 18 schematically shows RT format classification data 528.
- the non-living tissue region is classified into three types: a first lumen region, a second lumen region, and a non-luminal region. Similar to FIG. 5C, the narrow left-sloping hatch indicates the first lumen region. A narrow downward-sloping hatch indicates the second luminal region. Thick, downward-sloping hatches indicate non-luminal areas.
- The outline of the processing performed by the classification data conversion unit 629 will be described.
- the region in contact with the image acquisition catheter 40, that is, the region at the left end of the first classification data 521 in RT format, is classified into the first lumen region.
- the region surrounded by the living tissue region is classified into the second lumen region. It is desirable that the classification of the second lumen region is determined in a state where the upper end and the lower end of the RT type catheter image 518 are connected to form a cylindrical shape.
- a region of the non-living tissue region that is neither the first lumen region nor the second lumen region is classified as a non-luminal region.
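- A minimal Python sketch of this conversion logic follows. It treats edge contact as a proxy for the rules above (a region touching the left end is in contact with the image acquisition catheter 40; a region reaching neither end is taken to be surrounded by the living tissue region) and wraps the scanning-angle direction so the image is handled cylindrically. The label constants are hypothetical and the code is an illustration, not the patented implementation.

```python
from collections import deque
import numpy as np

TISSUE, NON_TISSUE, DEVICE = 0, 1, 2              # labels in first classification data
FIRST_LUMEN, SECOND_LUMEN, NON_LUMEN = 3, 4, 5    # labels after conversion

def convert_classification(first_data):
    """Convert 3-class first classification data (theta x r) into data in
    which the non-living tissue region is subdivided into lumen regions."""
    n_theta, n_r = first_data.shape
    out = first_data.copy()
    visited = np.zeros(first_data.shape, dtype=bool)
    for ti in range(n_theta):
        for ri in range(n_r):
            if first_data[ti, ri] != NON_TISSUE or visited[ti, ri]:
                continue
            # Flood-fill one continuous non-living tissue region.
            queue, region = deque([(ti, ri)]), []
            visited[ti, ri] = True
            touches_left = touches_right = False
            while queue:
                t, r = queue.popleft()
                region.append((t, r))
                touches_left |= (r == 0)            # side touching the catheter
                touches_right |= (r == n_r - 1)     # reaches the outer edge
                for dt, dr in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    t2, r2 = (t + dt) % n_theta, r + dr   # wrap theta only
                    if 0 <= r2 < n_r and not visited[t2, r2] \
                            and first_data[t2, r2] == NON_TISSUE:
                        visited[t2, r2] = True
                        queue.append((t2, r2))
            if touches_left:
                label = FIRST_LUMEN
            elif touches_right:
                label = NON_LUMEN      # open to the edge: not enclosed by tissue
            else:
                label = SECOND_LUMEN   # enclosed on all sides by living tissue
            for t, r in region:
                out[t, r] = label
    return out
```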
- FIG. 19 is an explanatory diagram illustrating the first training data.
- the first training data is used when the first classification trained model 621 is generated by machine learning.
- the first training data may be created by using a computer or the like different from the information processing apparatus 20.
- the control unit 21 displays two types of catheter images 51, an RT format catheter image 518 and an XY format catheter image 519, on the display device 31.
- the labeler observes the displayed catheter image 51 and marks four types of boundary line data: "the boundary line between the first lumen region and the living tissue region", "the boundary line between the second lumen region and the living tissue region", "the boundary line between the non-luminal region and the living tissue region", and "the outline of the medical device region".
- the labeler may mark the catheter image 51 of either the RT format catheter image 518 or the XY format catheter image 519.
- the control unit 21 displays a boundary line corresponding to the marking at the corresponding position of the other catheter image 51. From the above, the labeler can confirm both the RT format catheter image 518 and the XY format catheter image 519 and perform appropriate marking.
- the labeler inputs whether each area separated by the four types of marked boundary line data is a "living tissue area”, a "non-living tissue area”, or a “medical instrument area”.
- the control unit 21 may automatically determine the area, and the labeler may give a correction instruction as necessary.
- the first classification data 521 which clearly indicates whether each region of the catheter image 51 is classified into the "living tissue region", the "non-living tissue region", or the “medical device region” is created.
- the first classification data 521 will be described with a specific example.
- A "living tissue area label" is recorded for each pixel classified into the "living tissue region", a "first lumen area label" for each pixel classified into the "first lumen region", a "second lumen area label" for each pixel classified into the "second lumen region", a "non-luminal area label" for each pixel classified into the "non-luminal region", and a "medical device area label" for each pixel classified into the "medical device region".
- Each label is indicated by, for example, an integer.
- the first classification data 521 is an example of label data in which a pixel position and a label are associated with each other.
- the control unit 21 records the catheter image 51 and the first classification data 521 in association with each other.
- the first training data DB is created.
- In the following description, a first training data DB in which the RT format catheter image 518 and the RT format first classification data 521 are recorded in association with each other will be used as an example.
- the control unit 21 may generate XY format classification data 529 based on the XY format catheter image 519.
- the control unit 21 may generate RT format classification data 528 based on the XY format classification data 529.
- the U-Net structure includes a multi-layer encoder and a multi-layer decoder connected behind the encoder.
- Each encoder layer includes a pooling layer and a convolutional layer. Semantic segmentation assigns a label to each pixel that makes up the input image.
- the unlearned model may be a Mask R-CNN model or a model that realizes segmentation of any other image.
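- As a rough illustration only, a minimal encoder-decoder segmentation network of the kind described here might look as follows in PyTorch. Channel counts and depth are placeholders, not the embodiment's actual architecture.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Minimal U-Net-style model: encoder layers (convolution + pooling),
    a decoder connected behind them, and per-pixel class scores."""
    def __init__(self, in_ch=1, n_classes=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                         # encoder pooling layer
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)   # decoder upsampling
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)             # label per pixel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)  # (batch, n_classes, H, W); softmax gives reliability
```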
- the control unit 21 acquires a training record used for training of one epoch from the first training data DB (step S571).
- the control unit 21 adjusts the model parameters so that when the RT format catheter image 518 is input to the input layer of the model, the RT format first classification data 521 is output from the output layer (step S572).
- the program may appropriately have functions, executed by the control unit 21, for accepting corrections by the user, presenting the basis for judgment, performing additional learning, and the like.
- the control unit 21 determines whether or not to end the process (step S573). For example, the control unit 21 determines that the process is completed when the learning of a predetermined number of epochs is completed.
- the control unit 21 may acquire test data from the first training data DB, input it to the model being machine-learned, and determine that the process ends when an output with a predetermined accuracy is obtained.
- If it is determined that the process is not to be ended (NO in step S573), the control unit 21 returns to step S571.
- If it is determined that the process is to be ended (YES in step S573), the control unit 21 records the parameters of the trained first classification trained model 621 in the auxiliary storage device 23 (step S574). After that, the control unit 21 ends the process.
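- A compact sketch of this training loop (steps S571 to S574), assuming a PyTorch model and a data loader that yields pairs of an RT format catheter image and its first classification data as a per-pixel label map:

```python
import torch
import torch.nn as nn

def train_first_classification_model(model, loader, n_epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()              # per-pixel label loss
    for epoch in range(n_epochs):                # step S573: stop after set epochs
        for image, label_map in loader:          # step S571: one training record
            opt.zero_grad()
            loss = loss_fn(model(image), label_map)  # step S572: adjust parameters
            loss.backward()
            opt.step()
    torch.save(model.state_dict(), "first_classification_model.pt")  # step S574
    return model
```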
- the first classification trained model 621 that accepts the catheter image 51 and outputs the first classification data 521 is generated.
- the model that accepts the time-series input includes, for example, a memory unit that holds information about the RT format catheter image 518 input in the past.
- the model that accepts the time-series input may include a recursive input unit that inputs the output for the RT format catheter image 518 input in the past together with the next RT format catheter image 518.
- By using the catheter images 51 acquired in time series, it is possible to realize a first classification trained model 621 that is less susceptible to image noise and outputs the first classification data 521 with high accuracy.
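- One simple way to realize such a time-series model is to feed the output for the previous frame back in with the next frame, as in the hypothetical sketch below; the backbone could be any per-pixel classifier whose input accommodates the extra channels (for example, the MiniUNet sketch above with in_ch=1+n_classes).

```python
import torch
import torch.nn as nn

class RecursiveSegmenter(nn.Module):
    """Recursive-input variant: the classification output for the previously
    input RT format catheter image is input together with the next image."""
    def __init__(self, backbone, n_classes=3):
        super().__init__()
        self.backbone = backbone       # expects (1 + n_classes) input channels
        self.n_classes = n_classes

    def forward(self, frames):
        # frames: (time, batch, 1, H, W), oldest frame first
        t_len, b, _, h, w = frames.shape
        prev = torch.zeros(b, self.n_classes, h, w, device=frames.device)
        out = None
        for t in range(t_len):
            x = torch.cat([frames[t], prev], dim=1)   # image + previous output
            out = self.backbone(x)
            prev = out.softmax(dim=1).detach()        # recursive input
        return out    # classification data for the latest catheter image
```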
- the first classification trained model 621 may be created by using a computer or the like different from the information processing apparatus 20.
- the first classification trained model 621 for which machine learning has been completed may be copied to the auxiliary storage device 23 via the network.
- the first classification trained model 621 trained by one hardware can be used by a plurality of information processing devices 20.
- FIG. 20 is a flowchart illustrating a processing flow of the program of the fourth embodiment.
- the flowchart described with reference to FIG. 20 shows the details of the processing performed by the classification model 62 described with reference to FIG.
- the control unit 21 acquires a 1-frame RT format catheter image 518 (step S551).
- the control unit 21 inputs the RT format catheter image 518 into the first classification learned model 621 and acquires the first classification data 521 (step S552).
- the control unit 21 extracts one continuous non-living tissue region from the first classification data 521 (step S553). It is desirable that the processing after the extraction of the non-living tissue region is performed in a cylindrical shape by connecting the upper end and the lower end of the RT format catheter image 518.
- the control unit 21 determines whether or not the non-living tissue region extracted in step S553 is on the side in contact with the image acquisition catheter 40, that is, the portion in contact with the left end of the RT format catheter image 518 (step S554). When it is determined that the region is on the side in contact with the image acquisition catheter 40 (YES in step S554), the control unit 21 determines that the non-living tissue region extracted in step S553 is the first lumen region (step S555).
- When it is determined that the region is not in contact with the image acquisition catheter 40 (NO in step S554), the control unit 21 determines whether or not the non-living tissue region extracted in step S553 is surrounded by the living tissue region (step S556). When it is determined that the region is surrounded by the living tissue region (YES in step S556), the control unit 21 determines that the non-living tissue region extracted in step S553 is the second lumen region (step S557). Through step S555 and step S557, the control unit 21 realizes the function of the lumen region extraction unit.
- When it is determined that the region is not surrounded by the living tissue region (NO in step S556), the control unit 21 determines that the non-living tissue region extracted in step S553 is a non-luminal region (step S558).
- After step S555, step S557, or step S558, the control unit 21 determines whether or not the processing of all non-living tissue regions has been completed (step S559). If it is determined that the process has not been completed (NO in step S559), the control unit 21 returns to step S553. When it is determined that the process is completed (YES in step S559), the control unit 21 ends the process.
- the control unit 21 realizes the function of the classification data conversion unit 629 by the processing from step S553 to step S559.
- the first classification learned model 621 may be a model that classifies the XY format catheter image 519 into a living tissue region, a non-living tissue region, and a medical instrument region.
- the first classification trained model 621 may be a model that classifies the RT format catheter image 518 into a living tissue region and a non-living tissue region. In doing so, the labeler does not have to make markings for the medical device area.
- the generated first classification trained model 621 can be used to provide a catheter system 10 that generates classification data 52.
- The labeler may input which of the "living tissue region", the "first lumen region", the "second lumen region", the "non-luminal region", and the "medical device region" each region separated by the four types of marked boundary line data belongs to.
- In this case, it is possible to generate a first classification trained model 621 that classifies the catheter image 51 into the "living tissue region", the "first lumen region", the "second lumen region", the "non-luminal region", and the "medical device region".
- This realizes a classification model 62 that classifies the catheter image 51 into these five regions without using the classification data conversion unit 629.
- the present embodiment relates to a catheter system 10 using a synthetic classification model 626 that synthesizes classification data 52 output from each of the two classification-learned models.
- the description of the parts common to the fourth embodiment will be omitted.
- FIG. 21 is an explanatory diagram illustrating the configuration of the classification model 62 of the fifth embodiment.
- the classification model 62 includes a synthetic classification model 626 and a classification data conversion unit 629.
- the synthetic classification model 626 includes a first classification trained model 621, a second classification trained model 622, and a classification data synthesis unit 628. Since the first classification trained model 621 is the same as that of the fourth embodiment, the description thereof will be omitted.
- the second classification trained model 622 accepts the RT format catheter image 518 and classifies each part constituting the RT format catheter image 518 into a "living tissue region", a "non-living tissue region", and a "medical device region”. This is a model that outputs the second classification data 522.
- the second classification trained model 622 further outputs the reliability of the classification result for each part, that is, the probability that the classification result is correct. The details of the second classification trained model 622 will be described later.
- the classification data synthesis unit 628 synthesizes the first classification data 521 and the second classification data 522 to generate synthetic classification data 526. That is, the input end of the classification data synthesis unit 628 realizes the functions of the first classification data acquisition unit and the second classification data acquisition unit. The output end of the classification data synthesis unit 628 realizes the function of the synthetic classification data output unit.
- the details of the synthetic classification data 526 will be described later.
- the synthetic classification data 526 is converted into classification data 52 by the classification data conversion unit 629. Since the processing performed by the classification data conversion unit 629 is the same as that of the fourth embodiment, the description thereof will be omitted.
- FIG. 22 is an explanatory diagram illustrating the second training data.
- the second training data is used when generating the second classification trained model 622 by machine learning.
- the second training data may be created by using a computer or the like different from the information processing apparatus 20.
- the control unit 21 displays two types of catheter images 51, an RT format catheter image 518 and an XY format catheter image 519, on the display device 31.
- the labeler observes the displayed catheter image 51 and marks two types of boundary line data, "the boundary line between the first lumen region and the biological tissue region" and "the outline of the medical device region".
- the labeler may mark the catheter image 51 of either the RT format catheter image 518 or the XY format catheter image 519.
- the control unit 21 displays a boundary line corresponding to the marking at the corresponding position of the other catheter image 51. From the above, the labeler can confirm both the RT format catheter image 518 and the XY format catheter image 519 and perform appropriate marking.
- the labeler inputs whether each area separated by the two types of marked boundary line data is a "living tissue area”, a "non-living tissue area”, or a “medical instrument area”.
- the control unit 21 may automatically determine the area, and the labeler may give a correction instruction as necessary.
- the second classification data 522 which clearly indicates whether each part of the catheter image 51 is classified into the "living tissue region", the "non-living tissue region", or the “medical instrument region” is created.
- the second classification data 522 will be described with a specific example. A "living tissue area label" is recorded for each pixel classified into the "living tissue region", a "non-living tissue area label" for each pixel classified into the "non-living tissue region", and a "medical device area label" for each pixel classified into the "medical device region". Each label is indicated by, for example, an integer.
- the second classification data 522 is an example of label data in which pixel positions and labels are associated with each other.
- the control unit 21 records the catheter image 51 and the second classification data 522 in association with each other.
- the second training data DB is created by repeating the above processing and recording a large number of sets of data.
- the second classification trained model 622 can be generated by performing the same processing as the machine learning described in the fourth embodiment using the second training data DB.
- the second classification learned model 622 may be a model that classifies the XY format catheter image 519 into a living tissue region, a non-living tissue region, and a medical instrument region.
- the second classification trained model 622 may be a model that classifies the RT format catheter image 518 into a living tissue region and a non-living tissue region. In doing so, the labeler does not have to make markings for the medical device area.
- the creation of the second classification data 522 can be performed in a shorter time than the creation of the first classification data 521.
- the training of the labeler for creating the second classification data 522 can be performed in a shorter time than the training of the labeler for creating the first classification data 521.
- Therefore, a larger amount of training data can be registered in the second training data DB than in the first training data DB.
- As a result, it is possible to generate a second classification trained model 622 that can identify the boundary between the first lumen region and the living tissue region and the outline of the medical device region with higher accuracy than the first classification trained model 621.
- However, since the second classification trained model 622 does not learn about non-living tissue regions other than the first lumen region, such regions cannot be distinguished from the living tissue region.
- the processing performed by the classification data synthesis unit 628 will be described.
- the same RT format catheter image 518 is input to both the first classification trained model 621 and the second classification trained model 622.
- the first classification data 521 is output from the first classification trained model 621.
- the second classification data 522 is output from the second classification trained model 622.
- In the following description, a case where the classified label and the reliability of that label are output for each pixel of the RT format catheter image 518 will be described as an example.
- the first classification trained model 621 and the second classification trained model 622 may output labels and reliabilities for each range, for example, for each block of 3 vertical pixels by 3 horizontal pixels (9 pixels in total) of the RT format catheter image 518.
- the reliability with which the first classification trained model 621 classifies a pixel into the living tissue region is denoted by Q1t(r, θ), and the reliability with which the second classification trained model 622 classifies a pixel into the living tissue region is denoted by Q2t(r, θ).
- Q1t(r, θ) is set to 0 for pixels that the first classification trained model 621 classifies into a region other than the living tissue region.
- the classification data synthesis unit 628 calculates the combined value Qt(r, θ) based on, for example, Eq. (5-1), and classifies pixels having a Qt(r, θ) of 0.5 or more into the living tissue region.
- Similarly, the reliability with which the first classification trained model 621 classifies a pixel into the medical device region is denoted by Q1c(r, θ), and the reliability with which the second classification trained model 622 classifies a pixel into the medical device region is denoted by Q2c(r, θ).
- the classification data synthesis unit 628 calculates the combined value Qc(r, θ) based on, for example, Eq. (5-2), and classifies pixels having a Qc(r, θ) of 0.5 or more into the medical device region.
- the classification data synthesis unit 628 classifies the pixels that are not classified into the medical device area or the living tissue area into the non-living tissue area.
- the classification data synthesizing unit 628 generates the synthetic classification data 526 by synthesizing the first classification data 521 and the second classification data 522.
- the synthetic classification data 526 is converted into RT format classification data 528 by the classification data conversion unit 629.
- Eqs. (5-1) and (5-2) are examples.
- the threshold value when the classification data synthesis unit 628 performs classification is also an example.
- the classification data synthesis unit 628 may be a trained model that accepts the first classification data 521 and the second classification data 522 and outputs the synthetic classification data 526.
- the first classification data 521 may be input to the classification data synthesis unit 628 after being classified by the classification data conversion unit 629 described in the fourth embodiment into the "living tissue region", the "first lumen region", the "second lumen region", the "non-luminal region", and the "medical device region".
- As described in Modification 4-1, the first classification trained model 621 may be a model that classifies the catheter image 51 into the "living tissue region", the "first lumen region", the "second lumen region", the "non-luminal region", and the "medical device region".
- When data in which the non-living tissue region has already been classified into the "first lumen region", the "second lumen region", and the "non-luminal region" is input to the classification data synthesis unit 628, the classification data synthesis unit 628 can output synthetic classification data 526 classified into the "living tissue region", the "first lumen region", the "second lumen region", the "non-luminal region", and the "medical device region". In such a case, it is not necessary to input the synthetic classification data 526 into the classification data conversion unit 629 to convert it into the RT format classification data 528.
- FIG. 23 is a flowchart illustrating a processing flow of the program of the fifth embodiment.
- the flowchart described with reference to FIG. 23 shows the details of the processing performed by the classification model 62 described with reference to FIG.
- the control unit 21 acquires a 1-frame RT format catheter image 518 (step S581). By step S581, the control unit 21 realizes the function of the image acquisition unit.
- the control unit 21 inputs the RT format catheter image 518 into the first classification learned model 621 and acquires the first classification data 521 (step S582).
- the control unit 21 inputs the RT format catheter image 518 into the second classification trained model 622 and acquires the second classification data 522 (step S583).
- the control unit 21 activates a classification / synthesis subroutine (step S584).
- the classification / synthesis subroutine is a subroutine that synthesizes the first classification data 521 and the second classification data 522 to generate the synthesis classification data 526.
- the processing flow of the classification synthesis subroutine will be described later.
- the control unit 21 extracts one continuous non-living tissue region from the synthetic classification data 526 (step S585). It is desirable that the processing after the extraction of the non-living tissue region is performed in a cylindrical shape by connecting the upper end and the lower end of the RT format catheter image 518.
- the control unit 21 determines whether or not the non-living tissue region extracted in step S585 is on the side in contact with the image acquisition catheter 40 (step S554).
- Since the processing up to step S559 is the same as the processing flow of the program of the fourth embodiment described with reference to FIG. 20, the description thereof will be omitted.
- the control unit 21 determines whether or not the processing of all non-living tissue regions has been completed (step S559). If it is determined that the process has not been completed (NO in step S559), the control unit 21 returns to step S585. When it is determined that the process is completed (YES in step S559), the control unit 21 ends the process.
- FIG. 24 is a flowchart illustrating the processing flow of the classification / synthesis subroutine.
- the classification / synthesis subroutine is a subroutine that synthesizes the first classification data 521 and the second classification data 522 to generate the synthesis classification data 526.
- the control unit 21 selects a pixel to be processed (step S601).
- the control unit 21 acquires the reliability Q1t (r, ⁇ ) that the pixel being processed is a living tissue region from the first classification data 521 (step S602).
- the control unit 21 acquires the reliability Q2t (r, ⁇ ) that the pixel being processed is a living tissue region from the second classification data 522 (step S603).
- the control unit 21 calculates the combined value Qt (r, ⁇ ) based on the equation (5-1), for example (step S604).
- the control unit 21 determines whether or not the combined value Qt (r, ⁇ ) is equal to or greater than a predetermined threshold value (step S605).
- the predetermined threshold is, for example, 0.5.
- When it is determined that the combined value is equal to or greater than the threshold value (YES in step S605), the control unit 21 classifies the pixel being processed into the "living tissue region" (step S606).
- When it is determined that the combined value is less than the threshold value (NO in step S605), the control unit 21 acquires the reliability Q1c(r, θ) that the pixel being processed is in the medical device region from the first classification data 521 (step S611).
- the control unit 21 acquires the reliability Q2c (r, ⁇ ) that the pixel being processed is in the medical device region from the second classification data 522 (step S612).
- the control unit 21 calculates the combined value Qc (r, ⁇ ) based on the equation (5-2), for example (step S613).
- the control unit 21 determines whether or not the combined value Qc (r, ⁇ ) is equal to or greater than a predetermined threshold value (step S614).
- the predetermined threshold is, for example, 0.5.
- When it is determined that the combined value is equal to or greater than the threshold value (YES in step S614), the control unit 21 classifies the pixel being processed into the "medical device region" (step S615).
- When it is determined that the combined value is less than the threshold value (NO in step S614), the control unit 21 classifies the pixel being processed into the "non-living tissue region" (step S616).
- After step S606, step S615, or step S616, the control unit 21 determines whether or not the processing of all the pixels has been completed (step S607). If it is determined that the processing has not been completed (NO in step S607), the control unit 21 returns to step S601. When it is determined that the processing is completed (YES in step S607), the control unit 21 ends the process.
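- A vectorized sketch of this subroutine follows. A plain average stands in for Eqs. (5-1) and (5-2), which are not reproduced here, and the label constants are hypothetical.

```python
import numpy as np

TISSUE_LABEL, NON_TISSUE_LABEL, DEVICE_LABEL = 0, 1, 2   # hypothetical labels

def synthesize_classification(q1t, q2t, q1c, q2c, threshold=0.5):
    """q1t/q2t: living-tissue reliabilities from models 621 and 622;
    q1c/q2c: medical-device reliabilities from the same models.
    All arrays share one shape covering every (r, theta) pixel."""
    qt = (q1t + q2t) / 2.0                       # assumed stand-in for Eq. (5-1)
    qc = (q1c + q2c) / 2.0                       # assumed stand-in for Eq. (5-2)
    out = np.full(q1t.shape, NON_TISSUE_LABEL)   # default: step S616
    out[qc >= threshold] = DEVICE_LABEL          # steps S614 and S615
    # The flowchart checks tissue first (step S605), so tissue takes
    # precedence wherever both combined values clear the threshold.
    out[qt >= threshold] = TISSUE_LABEL          # steps S605 and S606
    return out
```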
- the control unit 21 realizes the function of the classification data synthesis unit 628 by the subroutine of the classification synthesis.
- According to the present embodiment, it is possible to provide a catheter system 10 that generates RT format classification data 528 using the synthetic classification data 526 obtained by synthesizing the classification data 52 output from each of the two classification trained models.
- The classification accuracy can be improved by combining the second classification trained model 622, for which a large amount of training data can be collected relatively easily, with the first classification trained model 621, for which the collection of training data takes time.
- the present embodiment relates to a catheter system 10 that classifies each portion constituting the catheter image 51 by using the position information of a medical device as a hint.
- the description of the parts common to the first embodiment will be omitted.
- FIG. 25 is an explanatory diagram illustrating the configuration of the hinted trained model 631.
- the hinted trained model 631 is used in step S504 described with reference to FIG. 4, instead of the classification model 62.
- the hinted trained model 631 is a model that receives the RT format catheter image 518 and the position information of the medical device depicted in the RT format catheter image 518, and outputs hinted classification data 561 in which each portion constituting the RT format catheter image 518 is classified into a "living tissue region", a "non-living tissue region", and a "medical device region". The hinted trained model 631 further outputs the reliability of the classification result for each portion, that is, the probability that the classification result is correct.
- FIG. 26 is an explanatory diagram illustrating the record layout of the training data DB 72 with hints.
- the training data DB 72 with hints includes the catheter image 51, the position information of the medical device depicted in the catheter image 51, and the classification data 52 in which each part constituting the catheter image 51 is classified according to the drawn subject. It is a database that records in association with.
- the classification data 52 is data created by the labeler based on the procedure described using, for example, FIG.
- the hinted trained model 631 can be generated by performing the same processing as the machine learning described in the fourth embodiment using the hinted training data DB 72.
- FIG. 27 is a flowchart illustrating a processing flow of the program of the sixth embodiment.
- the flowchart described with reference to FIG. 27 shows the details of the process performed in step S504 described with reference to FIG.
- the control unit 21 acquires a 1-frame RT format catheter image 518 (step S621).
- the control unit 21 inputs the RT format catheter image 518 into the medical device learned model 611 described using, for example, FIG. 6 to acquire the position information of the medical device (step S622).
- the control unit 21 inputs the RT format catheter image 518 and the position information into the hinted trained model 631 and acquires the hinted classification data 561 (step S623).
- the control unit 21 extracts one continuous non-living tissue region from the hinted classification data 561 (step S624). It is desirable that the processing after the extraction of the non-living tissue region is performed in a cylindrical shape by connecting the upper end and the lower end of the RT format catheter image 518.
- the control unit 21 determines whether or not the non-living tissue region extracted in step S624 is on the side in contact with the image acquisition catheter 40 (step S554).
- step S559 since the processing up to step S559 is the same as the processing flow of the program of the fourth embodiment described with reference to FIG. 20, the description thereof will be omitted.
- the control unit 21 determines whether or not the processing of all non-living tissue regions has been completed (step S559). If it is determined that the process has not been completed (NO in step S559), the control unit 21 returns to step S624. When it is determined that the process is completed (YES in step S559), the control unit 21 ends the process.
- the catheter system 10 that accurately generates the classification data 52 can be provided by inputting the position information of the medical device as a hint.
- FIG. 28 is a flowchart illustrating a processing flow of the program of the modified example. The process described with reference to FIG. 28 is performed in place of the process described with reference to FIG. 27.
- the control unit 21 acquires a 1-frame RT format catheter image 518 (step S621).
- the control unit 21 acquires the position information of the medical device (step S622).
- the control unit 21 determines whether or not the acquisition of the position information of the medical device is successful (step S631). For example, when the reliability output from the medical device learned model 611 is higher than the threshold value, the control unit 21 determines that the acquisition of the position information is successful.
- A successful acquisition in step S631 means that the medical device is visualized on the RT format catheter image 518 and that the control unit 21 can acquire the position information of the medical device with a reliability higher than the threshold value.
- the unsuccessful case includes, for example, the absence of a medical device in the imaging range of the RT format catheter image 518, and the case where the medical device is in close contact with the surface of the biological tissue area and is not clearly visualized.
- When it is determined that the acquisition of the position information is successful (YES in step S631), the control unit 21 inputs the RT format catheter image 518 and the position information into the hinted trained model 631 and acquires the hinted classification data 561 (step S623). When it is determined that the acquisition of the position information is not successful (NO in step S631), the control unit 21 inputs the RT format catheter image 518 into the hint unlearned model 632 and acquires hint unclassified data (step S632).
- the hint unlearned model 632 is a classification model 62 described using, for example, FIG. 7, FIG. 18 or FIG. 21.
- the hint unclassified data is the classification data 52 output from the classification model 62.
- the control unit 21 extracts one continuous non-living tissue region from the hinted classification data 561 or the hint unclassified data (step S624). Since the subsequent processing is the same as the processing flow described with reference to FIG. 27, the description thereof will be omitted.
- the hinted classification data 561 is an example of the first data.
- the hinted trained model 631 is an example of the first trained model that outputs the first data when the catheter image 51 and the position information of the medical device are input.
- the output layer of the hint trained model 631 is an example of a first data output unit that outputs the first data.
- Hint uncategorized data is an example of the second data.
- the hint unlearned model 632 is an example of the second trained model and the second model that output the second data when the catheter image 51 is input.
- the output layer of the hint unlearned model 632 is an example of the second data output unit.
- When the acquisition of the position information is not successful, the classification model 62, which does not require the input of position information, is used. This makes it possible to provide a catheter system 10 that prevents malfunctions caused by inputting an erroneous hint into the hinted trained model 631.
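- The branching of this modification can be summarized in a few lines of Python. The function names and the reliability threshold are hypothetical and illustrative only.

```python
def classify_with_optional_hint(image, position_model, hinted_model,
                                no_hint_model, reliability_threshold=0.9):
    """Use the hinted model only when the medical device position is
    acquired with sufficient reliability (step S631)."""
    position, reliability = position_model(image)      # step S622
    if reliability > reliability_threshold:            # YES in step S631
        return hinted_model(image, position)           # step S623
    return no_hint_model(image)                        # step S632
```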
- the present embodiment relates to a catheter system 10 that synthesizes the output of the hinted trained model 631 and the output of the hintless trained model 632 to generate synthetic data 536.
- the description of the parts common to the sixth embodiment will be omitted.
- the synthetic data 536 is data used in place of the classification data 52, which is the output of step S504 described with reference to FIG.
- FIG. 29 is an explanatory diagram illustrating the configuration of the classification model 62 of the seventh embodiment.
- the classification model 62 includes a position classification analysis unit 66 and a third synthesis unit 543.
- the position classification analysis unit 66 includes a position information acquisition unit 65, the hinted trained model 631, the hint unlearned model 632, a first synthesis unit 541, and a second synthesis unit 542.
- the position information acquisition unit 65 acquires position information indicating the position where the medical device is depicted from, for example, the medical device trained model 611 described with reference to FIG. 6 or the position information model 619 described above. Since the hinted trained model 631 is the same as that of the sixth embodiment, the description thereof will be omitted.
- the hint unlearned model 632 is a classification model 62 described using, for example, FIG. 7, FIG. 18 or FIG. 21.
- the operation of the first synthesis unit 541 will be described.
- the first synthesis unit 541 creates classification information by synthesizing the hinted classification data 561 output from the hinted trained model 631 and the hint unclassified data output from the hint unlearned model 632.
- the input end of the first synthesis unit 541 functions as a first data acquisition unit for acquiring hinted classification data 561 and a second data acquisition unit for acquiring hint unclassified data.
- the output end of the first synthesis unit 541 functions as a first composition data output unit that outputs the first composition data obtained by combining the hinted classification data 561 and the hint unclassification data.
- the first synthesis unit 541 also fulfills the function of the classification data conversion unit 629 and classifies the non-living tissue region.
- For example, the first synthesis unit 541 sets the weight of the hinted trained model 631 larger than the weight of the hint unlearned model 632 and synthesizes the two. Since methods of weighting and compositing images are known, the description thereof will be omitted.
- the first synthesis unit 541 may synthesize the hinted classification data 561 and the hint unclassified data by determining their weighting based on the reliability of the position information acquired by the position information acquisition unit 65.
- the first synthesis unit 541 may synthesize the hinted classification data 561 and the hint unclassified data based on the reliability of each region of the hinted classification data 561 and the hint unclassified data.
- the synthesis based on the reliability of the classification data 52 can be executed, for example, by the same processing as the classification data synthesis unit 628 described in the fifth embodiment.
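- For example, a per-pixel weighted composition of the two models' class probability maps could be sketched as follows; the weight value is illustrative, and in practice it would be derived from the reliabilities as described above.

```python
import numpy as np

def weighted_merge(hinted_probs, no_hint_probs, w_hint=0.7):
    """hinted_probs, no_hint_probs: arrays of shape (n_classes, H, W) holding
    per-class probabilities from the hinted and hint-unlearned models."""
    merged = w_hint * hinted_probs + (1.0 - w_hint) * no_hint_probs
    return merged.argmax(axis=0)   # per-pixel class with the largest blended score
```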
- the first synthesis unit 541 treats the medical device region output from the hinted trained model 631 and the hintless trained model 632 in the same manner as the adjacent non-living tissue region. For example, when the medical instrument region exists in the first lumen region, the first synthesis unit 541 treats the medical instrument region in the same manner as the first lumen region. Similarly, when the medical instrument region exists in the second lumen region, the first synthesis unit 541 treats the medical instrument region in the same manner as the second lumen region.
- a trained model that does not output the medical device area may be used for either the hinted trained model 631 or the hintless trained model 632. Therefore, as shown in the central portion of FIG. 29, the classification information output from the first synthesis unit 541 does not include information regarding the medical device region.
- the first synthesis unit 541 may function as a switch for switching between hinted classification data 561 and hintless classification data based on whether or not the position information acquisition unit 65 succeeds in acquiring the position information.
- the first synthesis unit 541 may further function as the classification data conversion unit 629.
- When the position information acquisition unit 65 succeeds in acquiring the position information, the first synthesis unit 541 outputs the classification information based on the hinted classification data 561 output from the hinted trained model 631.
- When the position information acquisition unit 65 does not succeed in acquiring the position information, the first synthesis unit 541 outputs the classification information based on the hint unclassified data output from the hint unlearned model 632.
- The operation of the second synthesis unit 542 will be described. When the position information acquisition unit 65 succeeds in acquiring the position information, the second synthesis unit 542 outputs the medical device region output from the hinted trained model 631. When the position information acquisition unit 65 does not succeed in acquiring the position information, the second synthesis unit 542 outputs the medical device region included in the hint unclassified data.
- the second synthesis unit 542 may synthesize and output the medical device region included in the hinted classification data 561 and the medical device region included in the hint unclassified data.
- the synthesis of the hinted classification data 561 and the hint unclassified data can be executed, for example, by the same processing as the classification data synthesis unit 628 described in the fifth embodiment.
- the output end of the second synthesis unit 542 fulfills the function of the second synthetic data output unit that outputs the second synthetic data obtained by combining the medical device region of the hinted classification data 561 and the medical device region of the hint unclassified data.
- the operation of the third synthesis unit 543 will be described.
- the third synthesis unit 543 outputs synthetic data 536 in which the medical device region output from the second synthesis unit 542 is superimposed on the classification information output from the first synthesis unit 541. In FIG. 29, the superimposed medical device area is shown in black.
- the third synthesis unit 543 may perform the function of the classification data conversion unit 629 that classifies the non-living tissue region into the first lumen region, the second lumen region, and the non-luminal region.
- Some or all of the plurality of trained models constituting the position classification analysis unit 66 may be models that accept a plurality of catheter images 51 acquired in time series and output information for the latest catheter image 51.
- According to the present embodiment, it is possible to provide a catheter system 10 that acquires the position information of a medical device with high accuracy and outputs it in combination with the classification information.
- the control unit 21 may generate synthetic data 536 based on each of a plurality of catheter images 51 continuously captured along the longitudinal direction of the image acquisition catheter 40, and then stack the synthetic data 536 to construct and display three-dimensional data of the living tissue and the medical device.
- FIG. 30 is an explanatory diagram illustrating the configuration of the classification model 62 of the modified example.
- An X% hint trained model 639 has been added to the position classification analysis unit 66.
- the X% hint trained model 639 is a model trained under the condition that the position information is input for X% of the training data and is not input for the remaining (100-X)%.
- the data output from the X% hint trained model 639 will be referred to as X% hint classification data.
- the X% hint trained model 639 is the same as the hinted trained model 631 when X is "100", and is the same as the hint unlearned model 632 when X is "0". X is, for example, "50".
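- Preparing training records for such a model amounts to supplying the hint for only X% of the inputs, for example as in this hypothetical helper:

```python
import random

def hint_for_training(position_info, blank_hint, x_percent=50):
    """Return the position hint for X% of training records and a blank
    hint for the remaining (100 - X)% (names are illustrative)."""
    if random.uniform(0, 100) < x_percent:
        return position_info    # position information is input
    return blank_hint           # position information is not input
```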
- the first synthesis unit 541 synthesizes the classification data 52 acquired from each of the hinted trained model 631, the hint unlearned model 632, and the X% hint trained model 639 based on predetermined weighting, and outputs the resulting data.
- the weighting changes depending on whether or not the position information acquisition unit 65 succeeds in acquiring the position information.
- When the acquisition succeeds, the output of the hinted trained model 631 and the output of the X% hint trained model 639 are combined.
- When the acquisition does not succeed, the output of the hint unlearned model 632 and the output of the X% hint trained model 639 are combined.
- the weighting at the time of synthesis may be changed based on the reliability of the position information acquired by the position information acquisition unit 65.
- the position classification analysis unit 66 may include a plurality of X% hint trained models 639.
- For example, an X% hint trained model 639 in which X is "20" and an X% hint trained model 639 in which X is "50" can be used in combination.
- FIG. 31 is an explanatory diagram illustrating an outline of the process of the eighth embodiment.
- In the present embodiment, a plurality of RT format catheter images 518 continuously captured along the longitudinal direction of the image acquisition catheter 40 are used.
- the control unit 21 inputs each of the plurality of RT format catheter images 518 to the position classification analysis unit 66 described in the seventh embodiment.
- the position classification analysis unit 66 outputs classification information and a medical device area corresponding to each RT format catheter image 518.
- the control unit 21 inputs the classification information and the medical device region into the third synthesis unit 543 to generate the synthetic data 536.
- the control unit 21 creates biological three-dimensional data 551 showing the three-dimensional structure of biological tissue based on a plurality of synthetic data 536.
- the biological three-dimensional data 551 is, for example, voxel data in which values indicating a living tissue label, a first lumen region label, a second lumen region label, a non-luminal region label, and the like are recorded for each volume lattice in a three-dimensional space.
- the biological three-dimensional data 551 may be polygon data composed of a plurality of polygons indicating boundaries of each region. Since a method of creating three-dimensional data 55 based on a plurality of RT format data is known, the description thereof will be omitted.
- the control unit 21 acquires position information indicating the position of the medical device depicted in each RT format catheter image 518 from the position information acquisition unit 65 included in the position classification analysis unit 66.
- the control unit 21 creates medical device three-dimensional data 552 showing the three-dimensional shape of the medical device based on a plurality of position information. The details of the medical device three-dimensional data 552 will be described later.
- the control unit 21 synthesizes the biological three-dimensional data 551 and the medical device three-dimensional data 552 to generate the three-dimensional data 55.
- the three-dimensional data 55 is used for the "3D display" of step S513 described with reference to FIG.
- the control unit 21 replaces the medical device region included in the synthetic data 536 with a blank area or a non-biological area, and then synthesizes the medical device three-dimensional data 552.
- the control unit 21 may generate biological three-dimensional data 551 using the classification information output from the first synthesis unit 541 included in the position classification analysis unit 66.
- FIGS. 32A to 32D are explanatory views for explaining the outline of the process of correcting the position information.
- FIGS. 32A to 32D are schematic views showing, in chronological order, a state in which catheter images 51 are captured while the image acquisition catheter 40 is pulled to the right in the figure.
- the thick cylinder schematically shows the inner surface of the first lumen.
- In FIG. 32A, three catheter images 51 have already been captured.
- the position information of the medical device extracted from each catheter image 51 is indicated by a white circle.
- FIG. 32B shows a state in which the fourth catheter image 51 is taken.
- the position information of the medical device extracted from the fourth catheter image 51 is shown by a black circle.
- the medical device was detected in a place clearly different from the three catheter images 51 taken earlier.
- medical instruments used in IVR have a certain degree of rigidity and it is unlikely that they will bend sharply. Therefore, the position information indicated by the black circle is likely to be an erroneous detection.
- In FIG. 32C, two more catheter images 51 have been captured.
- the position information of the medical device extracted from each catheter image 51 is indicated by a white circle.
- the five white circles are lined up substantially in a row along the longitudinal direction of the image acquisition catheter 40, whereas the black circle is far apart from them, making it clear that the detection was false.
- the position information complemented based on the five white circles is indicated by a cross.
- By using the complemented position information, the shape of the medical device in the first lumen can be correctly displayed on the three-dimensional image.
- the control unit 21 may use a representative point of the medical device region acquired from the second synthesis unit 542 included in the position classification analysis unit 66 as the position information. For example, the center of gravity of the medical device region can be used as the representative point.
- FIG. 33 is a flowchart illustrating a processing flow of the program of the eighth embodiment.
- the program described with reference to FIG. 33 is a program executed when it is determined in step S505 described with reference to FIG. 4 that the user has specified the three-dimensional display (3D in step S505).
- the program of FIG. 33 can be executed while a plurality of catheter images 51 are being imaged along the longitudinal direction of the image acquisition catheter 40.
- In the following description, a case where classification information and position information have already been generated for each captured catheter image 51 and stored in the auxiliary storage device 23 or an external large-capacity storage device will be described as an example.
- the control unit 21 acquires the position information corresponding to one catheter image 51 and records it in the main storage device 22 or the auxiliary storage device 23 (step S641).
- the control unit 21 processes the catheter image 51 stored earlier in the series of catheter images 51 in order.
- the control unit 21 may acquire and record position information from the first few catheter images 51 in the series of catheter images 51.
- the control unit 21 acquires the position information corresponding to the next one catheter image 51 (step S642).
- the position information being processed is described as the first position information.
- the control unit 21 extracts, from the position information recorded in step S641 and in past executions of step S645, the position information closest to the first position information (step S643).
- the position information extracted in step S643 will be referred to as a second position information.
- In step S643, the distances between pieces of position information are compared in a state where the plurality of catheter images 51 are projected onto one plane orthogonal to the image acquisition catheter 40. That is, when extracting the second position information, the distance in the longitudinal direction of the image acquisition catheter 40 is not taken into consideration.
- the control unit 21 determines whether or not the distance between the first position information and the second position information is equal to or less than a predetermined threshold value (step S644).
- the threshold is, for example, 3 millimeters.
- When it is determined that the distance is equal to or less than the threshold value (YES in step S644), the control unit 21 records the first position information (step S645). The control unit 21 then determines whether or not the processing of the recorded position information has been completed (step S646). If it is determined that the processing has not been completed (NO in step S646), the control unit 21 returns to step S642.
- the position information indicated by the black circle in FIG. 32 is an example of the position information determined to exceed the threshold value in step S644.
- the control unit 21 ignores such position information without recording it in step S645.
- Through the processing performed when NO is determined in step S644, the control unit 21 realizes the function of the exclusion unit that excludes position information that does not satisfy a predetermined condition.
- the control unit 21 may add a flag indicating an "error" to the position information determined to exceed the threshold value in step S644 and record it.
- When it is determined that the processing has been completed (YES in step S646), the control unit 21 determines whether or not the position information can be complemented based on the position information recorded in steps S641 and S645 (step S647).
- When it is determined that the position information can be complemented (YES in step S647), the control unit 21 complements the position information (step S648).
- In step S648, the control unit 21 complements, for example, position information that substitutes for the position information determined in step S644 to exceed the threshold value.
- the control unit 21 may also complement the position information between the catheter images 51. Complementation can be performed using any method such as linear interpolation, spline interpolation, Lagrange interpolation, or Newton interpolation.
- the control unit 21 realizes the function of the complement unit that adds the complement information to the position information in step S648.
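- The exclusion and complementation of steps S643 to S648 might be sketched as below, here using cubic spline interpolation from SciPy; the 3 mm threshold follows the example above, and the code assumes enough positions survive the exclusion for a spline to be fitted.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def filter_and_complement(positions, threshold=3.0):
    """positions: array of shape (n_frames, 2), the device position per
    catheter image projected onto a plane orthogonal to the catheter."""
    kept_idx, kept_pts = [0], [positions[0]]
    for i in range(1, len(positions)):
        dists = np.linalg.norm(np.asarray(kept_pts) - positions[i], axis=1)
        if dists.min() <= threshold:      # YES in step S644: record (step S645)
            kept_idx.append(i)
            kept_pts.append(positions[i])
        # NO in step S644: the position is ignored as a false detection
    spline = CubicSpline(kept_idx, np.asarray(kept_pts), axis=0)
    return spline(np.arange(len(positions)))   # complemented series (step S648)
```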
- When it is determined that the position information cannot be complemented (NO in step S647), or after the end of step S648, the control unit 21 activates the three-dimensional display subroutine (step S649).
- the three-dimensional display subroutine is a subroutine that performs three-dimensional display based on a series of catheter images 51. The processing flow of the three-dimensional display subroutine will be described later.
- the control unit 21 determines whether or not to end the process (step S650). For example, when the MDU 33 starts a new pullback operation, that is, the imaging of the catheter image 51 used for generating the three-dimensional image, the control unit 21 determines that the process is completed.
- If it is determined that the process is not to be ended (NO in step S650), the control unit 21 returns to step S642. When it is determined to end the process (YES in step S650), the control unit 21 ends the process.
- the control unit 21 generates and records classification information and position information based on newly captured catheter images 51 in parallel with the execution of the program of FIG. 33. That is, when it is determined in step S646 that the processing is completed, step S647 and subsequent steps are executed, but new position information and classification information may be generated while steps S647 to S650 are being executed.
- FIG. 34 is a flowchart illustrating the processing flow of the subroutine of the three-dimensional display.
- the three-dimensional display subroutine is a subroutine that performs three-dimensional display based on a series of catheter images 51.
- the control unit 21 realizes the function of the three-dimensional output unit by the subroutine of the three-dimensional display.
- the control unit 21 acquires synthetic data 536 corresponding to a series of catheter images 51 (step S661).
- the control unit 21 creates biological three-dimensional data 551 showing the three-dimensional structure of biological tissue based on a series of synthetic data 536 (step S662).
- When synthesizing the three-dimensional data 55, the control unit 21 replaces the medical device region included in the synthetic data 536 with a blank region or a non-living tissue region, and then synthesizes the medical device three-dimensional data 552.
- the control unit 21 may generate biological three-dimensional data 551 using the classification information output from the first synthesis unit 541 included in the position classification analysis unit 66.
- the control unit 21 may generate the biological three-dimensional data 551 based on the first classification data 521 described with reference to FIG. That is, the control unit 21 can generate the biological three-dimensional data 551 directly based on the plurality of first classification data 521.
- the control unit 21 may generate the biological three-dimensional data 551 indirectly based on the plurality of first classification data 521. "Indirectly based" means generating the biological three-dimensional data 551 based on a plurality of synthetic data 536 generated using the plurality of first classification data 521, as described with reference to, for example, FIG. 31. The control unit 21 may also generate the biological three-dimensional data 551 based on a plurality of data different from the synthetic data 536 generated using the plurality of first classification data 521.
- The control unit 21 adds thickness information to the curve defined by the series of position information recorded in steps S641 and S645 of the program described with reference to FIG. 33 and by the complementary information added in step S648 (step S663).
- the thickness information is preferably the thickness of a medical device commonly used in IVR procedures.
- the control unit 21 may receive information about the medical device in use and add thickness information corresponding to the medical device. By adding the thickness information, the three-dimensional shape of the medical device is reproduced.
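A minimal sketch of adding thickness information to the device curve in a voxel volume is shown below; the voxel representation, the constant radius, and the function name are assumptions made for illustration, not the prescribed implementation.

```python
import numpy as np

def device_volume(centerline, shape, radius):
    """centerline: (N, 3) array of device positions (voxel coordinates),
    one per catheter image. Marks every voxel within `radius` of any
    centre point, reproducing the device as a tube of constant
    thickness; the result can then be synthesized with the biological
    three-dimensional data."""
    vol = np.zeros(shape, dtype=bool)
    zz, yy, xx = np.indices(shape)
    for cz, cy, cx in centerline:
        vol |= (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return vol
```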
- The control unit 21 synthesizes the three-dimensional shape of the medical device generated in step S663 with the biological three-dimensional data 551 generated in step S662 (step S664).
- the control unit 21 displays the synthesized three-dimensional data 55 on the display device 31 (step S665).
- The control unit 21 receives instructions from the user, such as rotation, change of cross section, enlargement, and reduction, for the three-dimensionally displayed image, and changes the display accordingly. Since receiving instructions for and changing the display of a three-dimensionally displayed image have been performed conventionally, description thereof is omitted. The control unit 21 ends the process.
- According to the present embodiment, it is possible to provide a catheter system 10 that eliminates the influence of erroneously detected position information and displays a medical device having a proper shape.
- The user can easily grasp the positional relationship between, for example, the Brockenbrough needle and the fossa ovalis, and can therefore perform the IVR procedure smoothly.
- [Modification 8-1] The present modification relates to a catheter system 10 that performs three-dimensional display based on the medical device region detected from the catheter image 51 when the medical device is not erroneously detected. The description of the parts common to the eighth embodiment will be omitted.
- In step S663 of the subroutine described with reference to FIG. 34, the control unit 21 determines the thickness of the medical device based on the medical device region output from, for example, the hint trained model 631 or the hint-less trained model 632. However, for a catheter image 51 whose position information is determined to be erroneous, the thickness information is complemented based on the medical device regions of the preceding and following catheter images 51.
- According to the present modification, it is possible to provide a catheter system 10 that appropriately displays, in a three-dimensional image, a medical device whose thickness changes midway, such as a medical device in which a needle projects from a sheath.
- [Embodiment 9] The present embodiment relates to a padding process suitable for a trained model that processes RT format catheter images 518 acquired using the radial scanning type image acquisition catheter 40.
- the description of the parts common to the first embodiment will be omitted.
- the padding process is a process of adding data around the input data before performing the convolution process.
- For the first convolutional layer, the input data is the input image. For the subsequent convolutional layers, the input data is the feature map extracted in the preceding stage.
- Generally, a so-called zero padding process is performed, in which "0" data is added around the input data input to the convolutional layer.
- FIG. 35 is an explanatory diagram illustrating the padding process of the ninth embodiment.
- The left end of FIG. 35 is a schematic diagram of the input data input to the convolutional layer.
- The convolutional layer is, for example, the first convolutional layer included in the medical device trained model 611 or the second convolutional layer included in the angle trained model 612.
- the convolutional layer may be the convolutional layer included in any trained model used to process the catheter image 51 taken with the radial scanning image acquisition catheter 40.
- the input data is in RT format, the horizontal direction corresponds to the distance from the sensor 42, and the vertical direction corresponds to the scanning angle.
- An enlarged schematic diagram of the upper right end portion and the lower left end portion of the input data is shown in the center of FIG. 35.
- Each frame corresponds to a pixel, and the numerical value in the frame corresponds to a pixel value.
- The right end of FIG. 35 is a schematic diagram of the data after the padding process of the present embodiment is performed.
- the numbers shown in italics indicate the data added by the padding process.
- "0" data is added to the left and right ends of the input data.
- the data indicated by "A” at the lower end of the data is copied to the upper end of the input data before the padding process is performed.
- the data indicated by "B” at the upper end of the data is copied to the lower end of the input data before the padding process is performed.
- the same data as the side with a large scanning angle is added to the outside of the side with a small scanning angle, and the same data as the side with a small scanning angle is added to the outside of the side with a large scanning angle.
- the padding process described with reference to FIG. 35 will be referred to as a polar padding process.
- In the RT format catheter image 518, the upper end and the lower end correspond to substantially the same site.
- A single medical device or lesion may be split between the upper and lower parts of the RT format catheter image 518.
- The polar padding process is a process that takes advantage of these characteristics.
- the polar padding process may be performed on all the convolutional layers included in the trained model, or the polar padding process may be performed on some convolutional layers.
- FIG. 35 shows an example of a padding process in which one row or column of data is added to each of the four sides of the input data, but the padding process may add a plurality of rows or columns.
- The amount of data added in the polar padding process is selected according to the size of the filter used in the convolution process and the stride.
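A minimal numpy sketch of the polar padding process is given below, assuming the rows of the input data correspond to the scanning angle and the columns to the distance from the sensor 42; the function name and the single-row padding amount are illustrative only.

```python
import numpy as np

def polar_pad(x, pad=1):
    """x: RT-format input data (rows = scanning angle, cols = distance).

    The rows wrap around: rows from the large-scanning-angle side are
    added above the small-scanning-angle side and vice versa, while the
    distance direction receives ordinary zero padding."""
    top = x[-pad:, :]      # rows from the large-scanning-angle side
    bottom = x[:pad, :]    # rows from the small-scanning-angle side
    wrapped = np.concatenate([top, x, bottom], axis=0)
    return np.pad(wrapped, ((0, 0), (pad, pad)), mode="constant")
```

In a deep learning framework, a comparable effect can be obtained by applying circular padding along the angle axis only and zero padding along the distance axis; the sketch above keeps the two directions explicit.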
- FIG. 36 is an explanatory diagram illustrating a polar padding process of a modified example.
- the polar padding process of this variant is effective for the convolutional layer at the stage of first processing the RT format catheter image 518.
- FIG. 36 schematically shows a state in which radial scanning is performed while pulling the sensor 42 to the right. Based on the scan line data acquired while the sensor 42 makes one rotation, one RT-type catheter image 518 schematically shown in the lower left of FIG. 36 is generated. The RT format catheter image 518 is formed from the upper side to the lower side according to the rotation of the sensor 42.
- the lower right of FIG. 36 schematically shows a state in which the RT format catheter image 518 is padded.
- Below the RT format catheter image 518, the data at the start of the RT format catheter image 518 one rotation later, shown by hatching sloping down to the right, is added. "0" data is added to the left and right of the RT format catheter image 518.
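The variant for the first convolutional layer might be sketched as follows; only the lower edge receives data from the next rotation, matching the figure as described, and the function and argument names are assumptions for illustration.

```python
import numpy as np

def first_layer_polar_pad(current_rt, next_rt, pad=1):
    """current_rt: the RT format catheter image 518 being processed.
    next_rt: the RT format catheter image of the following rotation.

    Appends the first scan lines of the next rotation below the current
    image, then zero-pads the distance direction (left and right)."""
    extended = np.concatenate([current_rt, next_rt[:pad, :]], axis=0)
    return np.pad(extended, ((0, 0), (pad, pad)), mode="constant")
```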
- FIG. 37 is an explanatory diagram illustrating the configuration of the catheter system 10 of the tenth embodiment.
- the catheter system 10 of the present embodiment is realized by operating the catheter control device 27, the MDU 33, the image acquisition catheter 40, the general-purpose computer 90, and the program 97 in combination.
- The description of the parts common to the first embodiment will be omitted.
- The catheter control device 27 is an ultrasonic diagnostic apparatus for IVUS that controls the MDU 33, controls the sensor 42, and generates transverse tomographic images and longitudinal tomographic images based on the signals received from the sensor 42. Since the function and configuration of the catheter control device 27 are the same as those of a conventionally used ultrasonic diagnostic apparatus, description thereof is omitted.
- the catheter system 10 of the present embodiment includes a computer 90.
- the computer 90 includes a control unit 21, a main storage device 22, an auxiliary storage device 23, a communication unit 24, a display unit 25, an input unit 26, a reading unit 29, and a bus.
- the computer 90 is an information device such as a general-purpose personal computer, a tablet, a smartphone, or a server computer.
- The program 97 is recorded on a portable recording medium 96.
- The control unit 21 reads the program 97 via the reading unit 29 and stores it in the auxiliary storage device 23. The control unit 21 may also read the program 97 stored in a semiconductor memory 98, such as a flash memory, mounted in the computer 90. Further, the control unit 21 may download the program 97 via the communication unit 24 from another server computer (not shown) connected via a network (not shown) and store it in the auxiliary storage device 23.
- the program 97 is installed as a control program of the computer 90, loaded into the main storage device 22, and executed. As a result, the computer 90 functions as the information processing device 20 described above.
- The computer 90 is a general-purpose personal computer, a tablet, a smartphone, a large computer, a virtual machine operating on a large computer, a cloud computing system, or a quantum computer.
- the computer 90 may be a plurality of personal computers or the like that perform distributed processing.
- FIG. 38 is a functional block diagram of the information processing apparatus 20 according to the eleventh embodiment.
- the information processing device 20 includes an image acquisition unit 81 and a first position information output unit 83.
- the image acquisition unit 81 acquires the catheter image 51 obtained by the radial scanning type image acquisition catheter 40.
- The first position information output unit 83 inputs the acquired catheter image 51 into the medical device learned model 611, which, when the catheter image 51 is input, outputs the first position information regarding the position of the medical device included in the catheter image 51, and outputs the first position information.
- (Appendix A1) An information processing device comprising: an image acquisition unit that acquires a catheter image obtained by an image acquisition catheter inserted into a first cavity; and a first classification data output unit that inputs the acquired catheter image into a first classification trained model, which, when the catheter image is input, outputs first classification data in which a non-biological tissue region, including a first lumen region inside the first cavity and a second lumen region inside a second cavity into which the image acquisition catheter is not inserted, and a biological tissue region are classified as different regions, and outputs the first classification data, wherein the first classification trained model is generated using first training data in which at least the first lumen region, the non-biological tissue region including the second lumen region, and the biological tissue region are specified.
- (Appendix A2) The information processing device according to Appendix A1, comprising: a lumen region extraction unit that extracts the first lumen region and the second lumen region, respectively, from the non-biological tissue region in the first classification data; and a first mode output unit that outputs the first classification data in a mode changed so that the first lumen region, the second lumen region, and the biological tissue region can be distinguished from each other.
- (Appendix A4) The information processing device according to Appendix A3, wherein, when the catheter image is input, the first classification trained model outputs the first classification data in which the biological tissue region, the first lumen region, the second lumen region, and a non-lumen region are classified as different regions.
- (Appendix A5) The information processing device according to any one of Appendix A1 to Appendix A4, wherein the image acquisition catheter is a radial scanning type tomographic image acquisition catheter, the catheter image is an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in the order of scanning angles, and the first classification data is a classification result for each pixel of the RT format image.
- The first classification trained model contains a plurality of convolutional layers, and at least one of the plurality of convolutional layers performs a padding process that adds the same data as the side having a large scanning angle to the outside of the side having a small scanning angle, and the same data as the side having a small scanning angle to the outside of the side having a large scanning angle.
- (Appendix A8) The information processing device according to Appendix A7, wherein the first classification trained model is equipped with a memory unit that holds information about catheter images input in the past, and outputs the first classification data based on the information held in the memory unit and the latest catheter image among the plurality of catheter images.
- (Appendix A9) The information processing device according to any one of Appendix A1 to Appendix A8, wherein, when the catheter image is input, the first classification trained model outputs the first classification data in which the biological tissue region, the non-biological tissue region, and a medical device region showing a medical device inserted into the first cavity or the second cavity are classified as different regions.
- (Appendix A10) The information processing device according to any one of Appendix A1 to Appendix A9, comprising: a second classification data acquisition unit that inputs the acquired catheter image into a second classification trained model, which outputs second classification data in which the non-biological tissue region including the first lumen region and the biological tissue region are classified as different regions, and acquires the output second classification data; and a synthetic classification data output unit that outputs synthetic classification data obtained by synthesizing the second classification data with the first classification data, wherein the second classification trained model is generated using second training data in which only the first lumen region of the non-biological tissue region is specified.
- (Appendix A11) The information processing device according to Appendix A10, wherein, when the catheter image is input, the second classification trained model outputs the second classification data in which the biological tissue region, the non-biological tissue region, and a medical device region showing a medical device inserted into the first cavity or the second cavity are classified as different regions.
- (Appendix A12) The information processing device according to Appendix A10 or Appendix A11, wherein the first classification trained model further outputs, for each portion of the catheter image, the probability of being the biological tissue region or the probability of being the non-biological tissue region, the second classification trained model further outputs, for each portion of the catheter image, the probability of being the biological tissue region or the probability of being the non-biological tissue region, and the synthetic classification data output unit outputs synthetic classification data obtained by synthesizing the second classification data with the first classification data based on a result of calculating, for each portion of the catheter image, the probability of being the biological tissue region or the probability of being the non-biological tissue region.
- (Appendix A14) The information processing device according to Appendix A13, comprising a three-dimensional output unit that outputs a three-dimensional image generated based on the plurality of first classification data generated from each of the plurality of acquired catheter images.
- (Appendix A15) A catheter image obtained by an image acquisition catheter inserted into a first cavity is acquired, the acquired catheter image is input into a first classification trained model, which is generated using first training data in which at least a non-biological tissue region, including a first lumen region inside the first cavity and a second lumen region inside a second cavity into which the image acquisition catheter is not inserted, and a biological tissue region are specified, and which, when the catheter image is input, outputs first classification data in which the non-biological tissue region and the biological tissue region are classified as different regions, and the first classification data is output.
- (Appendix A17) A plurality of sets of training data, in which a catheter image is recorded in association with label data having a plurality of labels including a biological tissue region label and a non-biological tissue region label including a non-lumen region that is not a lumen region, are acquired, and the plurality of sets of training data are used to generate a trained model that outputs, for each portion of the catheter image, the biological tissue region label and the non-biological tissue region label.
- (Appendix A18) The method for generating a trained model according to Appendix A17, wherein the non-biological tissue region label of the plurality of sets of training data includes a first lumen region label indicating the first lumen region, a second lumen region label indicating the second lumen region, and a non-lumen region label indicating the non-lumen region, and the method generates a trained model that outputs, for each portion of the catheter image, the biological tissue region label, the first lumen region label, the second lumen region label, and the non-lumen region label.
- (Appendix A20) The method for generating a trained model according to any one of Appendix A17 to Appendix A19, wherein the catheter image is an RT format image in which scanning line data for one rotation obtained by a radial scanning type image acquisition catheter are arranged in parallel in the order of scanning angles, the trained model contains a plurality of convolutional layers, and at least one of the convolutional layers performs a padding process that adds the same data as the side having a large scanning angle to the outside of the side having a small scanning angle, and the same data as the side having a small scanning angle to the outside of the side having a large scanning angle.
- (Appendix B1) An information processing device comprising: an image acquisition unit that acquires a catheter image obtained by a radial scanning type image acquisition catheter; and a first position information output unit that inputs the acquired catheter image into a medical device trained model, which, when the catheter image is input, outputs first position information regarding the position of a medical device included in the catheter image, and outputs the first position information.
- (Appendix B2) The information processing device according to Appendix B1, wherein the first position information output unit outputs the first position information using the position of one pixel included in the catheter image.
- (Appendix B3) The information processing device according to Appendix B1 or Appendix B2, wherein the first position information output unit comprises: a time-series first position information acquisition unit that acquires time-series first position information corresponding to each of a plurality of catheter images obtained in time series; an exclusion unit that excludes, from the time-series first position information, first position information that does not satisfy a predetermined condition; and a complement unit that adds complementary information satisfying the predetermined condition to the time-series first position information.
- (Appendix B4) The information processing device according to any one of Appendix B1 to Appendix B3, wherein, when a plurality of catheter images acquired in time series are input, the medical device trained model outputs the first position information regarding the latest catheter image among the plurality of catheter images.
- (Appendix B5) The information processing device, wherein the medical device trained model is equipped with a memory unit that holds information about catheter images input in the past, and outputs the first position information based on the information held in the memory unit and the latest catheter image among the plurality of catheter images.
- (Appendix B6) The information processing device according to any one of Appendix B1 to Appendix B5, wherein the medical device trained model accepts input of the catheter image as an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in the order of scanning angles, contains a plurality of first convolutional layers, and has been trained with at least one of the plurality of first convolutional layers performing a padding process that adds the same data as the side having a large scanning angle to the outside of the side having a small scanning angle, and the same data as the side having a small scanning angle to the outside of the side having a large scanning angle.
- (Appendix B8) The information processing device according to Appendix B7, wherein the angle trained model accepts input of the catheter image as an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in the order of scanning angles, contains a plurality of second convolutional layers, and has been trained with at least one of the plurality of second convolutional layers performing a padding process that adds the same data as the side having a large scanning angle to the outside of the side having a small scanning angle, and the same data as the side having a small scanning angle to the outside of the side having a large scanning angle.
- (Appendix B9) The information processing device according to any one of Appendix B1 to Appendix B8, wherein the medical device trained model is generated using a plurality of sets of training data in which the catheter image and the position of the medical device included in the catheter image are recorded in association with each other.
- (Appendix B10) The information processing device according to Appendix B9, wherein the training data is generated by a process of displaying the catheter image obtained by the image acquisition catheter, receiving the position of the medical device included in the catheter image by one click operation or one tap operation on the catheter image, and storing the catheter image and the position of the medical device in association with each other.
- (Appendix B11) The information processing device according to Appendix B9, wherein the training data is generated by a process of inputting the catheter image into the medical device trained model, superimposing the first position information output from the medical device trained model on the input catheter image and displaying it, storing, when a correction instruction regarding the position of the medical device included in the catheter image is not received, uncorrected data in which the catheter image and the first position information are associated with each other as the training data, and storing, when the correction instruction is received, corrected data in which the catheter image and information regarding the position of the medical device based on the correction instruction are associated with each other as the training data.
- (Appendix B12) A plurality of sets of training data, in which the catheter image obtained by the image acquisition catheter and the first position information regarding the position of the medical device included in the catheter image are recorded in association with each other, are acquired.
- (Appendix B13) The first position information is information regarding the position of one pixel included in the catheter image.
- (Appendix B14) A training data generation method for causing a computer to execute a process of: displaying a catheter image including a lumen obtained by an image acquisition catheter; receiving first position information regarding the position of a medical device inserted into the lumen included in the catheter image; and storing training data in which the catheter image and the first position information are associated with each other.
- (Appendix B15) The training data generation method according to Appendix B14, wherein the first position information is information regarding the position of one pixel included in the catheter image.
- (Appendix B16) The image acquisition catheter is a radial scanning type tomographic image acquisition catheter.
- (Appendix B17) The training data generation method according to any one of Appendix B14 to Appendix B16, wherein the display of the catheter image displays two images side by side, namely an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in the order of scanning angles, and an XY format image in which data based on the scanning line data are arranged radially around the image acquisition catheter, and the first position information is received from either the RT format image or the XY format image.
- (Appendix B19) The training data generation method according to Appendix B18, wherein the uncorrected data and the corrected data are data regarding the position of one pixel included in the catheter image.
- (Appendix B20) A plurality of the catheter images obtained in time series are sequentially input into the medical device trained model.
- (Appendix B21) The image acquisition catheter is a radial scanning type tomographic image acquisition catheter.
- (Appendix B22) The training data generation method according to any one of Appendix B18 to Appendix B21, wherein the display of the catheter image displays two images side by side, namely an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in the order of scanning angles, and an XY format image in which data based on the scanning line data are arranged radially around the image acquisition catheter, and the position of the medical device is received from both the RT format image and the XY format image.
- (Appendix C1) An information processing device comprising: an image acquisition unit that acquires a catheter image including a lumen obtained by an image acquisition catheter; a position information acquisition unit that acquires position information regarding the position of a medical device inserted into the lumen included in the catheter image; and a first data output unit that inputs the acquired catheter image and the acquired position information into a first trained model, which outputs first data in which each region of the catheter image is classified into at least three regions of a biological tissue region, a medical device region in which the medical device exists, and a non-biological tissue region, and outputs the first data.
- (Appendix C2) The information processing device according to Appendix C1, wherein the position information acquisition unit inputs the acquired catheter image into a medical device trained model, which, when the catheter image is input, outputs the position information of the medical device included in the catheter image, and acquires the position information from the medical device trained model.
- (Appendix C3) The information processing device according to Appendix C2, comprising: a second data acquisition unit that inputs the acquired catheter image into a second model, which outputs second data in which each region of the catheter image is classified into at least three regions of a biological tissue region, a medical device region in which the medical device exists, and a non-biological tissue region, and acquires the second data; and a synthetic data output unit that outputs synthetic data obtained by synthesizing the first data and the second data.
- (Appendix C4) The synthetic data output unit comprises a first synthetic data output unit that outputs first synthetic data obtained by synthesizing, of the first data and the second data, the data related to a biological tissue-related region classified into the biological tissue region and the non-biological tissue region.
- (Appendix C5) The information processing device according to Appendix C4, wherein the second synthetic data output unit outputs the second synthetic data using the data related to the medical device region included in the first data when the position information can be acquired from the medical device trained model, and outputs the second synthetic data using the data related to the medical device region included in the second data when the position information cannot be acquired from the medical device trained model.
- (Appendix C6) The synthetic data output unit outputs second synthetic data obtained by synthesizing the data related to the medical device region based on weighting according to the reliability of the first data and the reliability of the second data.
- (Appendix C7) The information processing device according to Appendix C6, wherein the reliability is determined based on whether or not the position information can be acquired from the medical device trained model.
- (Appendix C8) The information processing device according to Appendix C6, wherein the synthetic data output unit sets the reliability of the first data higher than the reliability of the second data when the position information can be acquired from the medical device trained model, and sets the reliability of the first data lower than the reliability of the second data when the position information cannot be acquired from the medical device trained model.
- Catheter system 20 Information processing device 21 Control unit 22 Main storage device 23 Auxiliary storage device 24 Communication unit 25 Display unit 26 Input unit 27 Catheter control device 271 Catheter control unit 29 Reading unit 31 Display device 32 Input device 33 MDU 37 Diagnostic imaging device 40 Catheter for image acquisition 41 Probe part 42 Sensor 43 Shaft 44 Tip marker 45 Connector part 46 Guide wire lumen 51 Catheter image 518 RT format catheter image (catheter image) 519 XY format catheter image 52 Classification data (hint unclassified data, second data) 521 First classification data (label data) 522 Second classification data (label data) 526 Synthetic classification data 528 RT format classification data 529 XY format classification data 536 Synthetic data 541 1st synthesis unit 542 2nd synthesis unit 543 3rd synthesis unit 55 3D data 551 Biological 3D data 552 Medical device 3D data 561 Hint classified data (1st data) 611 Medical equipment trained model 612 Angle trained model 615 Position information synthesizer 619 Position information model
Landscapes
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Image Analysis (AREA)
Abstract
Provided is an information processing device which aids in the understanding of an image obtained by an image-obtaining catheter. The information processing device is equipped with: an image acquisition unit for acquiring a catheter image (518) obtained by an image-obtaining catheter of the radial-scanning type; and a first location information output unit which inputs the acquired catheter image (518) into a medical device learned model (611), which, when the catheter image (518) is input, outputs first location information pertaining to the location of the medical device included in the catheter image (518), and outputs the first location information.
Description
The present invention relates to an information processing device, a trained model generation method, and a training data generation method.
A catheter system is used in which an image acquisition catheter is inserted into a luminal organ such as a blood vessel to acquire an image (Patent Document 1).
In a place with a complicated structure such as an intracardiac region, it may be difficult to quickly understand the image acquired by the image acquisition catheter.
One aspect is to provide an information processing device or the like that supports understanding of an image acquired by an image acquisition catheter.
The information processing apparatus has an image acquisition unit that acquires a catheter image obtained by a radial scanning type image acquisition catheter, and a first position regarding the position of a medical device included in the catheter image when the catheter image is input. The medical device learned model that outputs information is provided with a first position information output unit that inputs the acquired catheter image and outputs the first position information.
On one aspect, it is possible to provide an information processing device or the like that supports understanding of an image acquired by an image acquisition catheter.
[Embodiment 1]
FIG. 1 is an explanatory diagram illustrating an outline of the catheter system 10. The catheter system 10 of the present embodiment is used for IVR (Interventional Radiology) in which various organs are treated while performing fluoroscopy using an image diagnostic device such as an X-ray fluoroscope. By referring to the image acquired by the catheter system 10 arranged in the vicinity of the treatment target site, the medical device for treatment can be accurately operated.
The catheter system 10 includes an image acquisition catheter 40, an MDU (Motor Driving Unit) 33, and an information processing device 20. The image acquisition catheter 40 is connected to the information processing apparatus 20 via the MDU 33. A display device 31 and an input device 32 are connected to the information processing device 20. The input device 32 is an input device such as a keyboard, mouse, trackball or microphone. The display device 31 and the input device 32 may be integrally laminated to form a touch panel. The input device 32 and the information processing device 20 may be integrally configured.
FIG. 2 is an explanatory diagram illustrating an outline of the image acquisition catheter 40. The image acquisition catheter 40 has a probe portion 41 and a connector portion 45 arranged at an end portion of the probe portion 41. The probe portion 41 is connected to the MDU 33 via the connector portion 45. In the following description, the side of the image acquisition catheter 40 far from the connector portion 45 is referred to as the distal end side.
The shaft 43 is inserted inside the probe portion 41. The sensor 42 is connected to the tip end side of the shaft 43. A guide wire lumen 46 is provided at the tip of the probe portion 41. After inserting the guide wire to a position beyond the target portion, the user guides the sensor 42 to the target portion by inserting the guide wire into the guide wire lumen 46. An annular tip marker 44 is fixed in the vicinity of the tip of the probe portion 41.
The sensor 42 is, for example, an ultrasonic transducer that transmits and receives ultrasonic waves, or a transmission / reception unit for OCT (Optical Coherence Tomography) that irradiates near-infrared light and receives reflected light. In the following description, the case where the image acquisition catheter 40 is an IVUS (Intravascular Ultrasound) catheter used when taking an ultrasonic tomographic image from the inside of the circulatory system will be described as an example.
FIG. 3 is an explanatory diagram illustrating the configuration of the catheter system 10. As described above, the catheter system 10 includes an information processing device 20, an MDU 33, and an image acquisition catheter 40. The information processing device 20 includes a control unit 21, a main storage device 22, an auxiliary storage device 23, a communication unit 24, a display unit 25, an input unit 26, a catheter control unit 271, and a bus.
The control unit 21 is an arithmetic control device that executes the program of the present embodiment. One or more CPUs (Central Processing Units), GPUs (Graphics Processing Units), TPUs (Tensor Processing Units), multi-core CPUs, or the like are used for the control unit 21. The control unit 21 is connected via a bus to each of the hardware units constituting the information processing apparatus 20.
The main storage device 22 is a storage device such as a SRAM (Static Random Access Memory), a DRAM (Dynamic Random Access Memory), and a flash memory. The main storage device 22 temporarily stores information necessary in the middle of processing performed by the control unit 21 and a program being executed by the control unit 21.
The auxiliary storage device 23 is a storage device such as a SRAM, a flash memory, a hard disk, or a magnetic tape. The auxiliary storage device 23 stores a medical device learned model 611, a classification model 62, a program to be executed by the control unit 21, and various data necessary for executing the program. The communication unit 24 is an interface for communicating between the information processing device 20 and the network.
The display unit 25 is an interface for connecting the display device 31 and the bus. The input unit 26 is an interface for connecting the input device 32 and the bus. The catheter control unit 271 controls the MDU 33, controls the sensor 42, generates an image based on the signal received from the sensor 42, and the like.
The MDU 33 rotates the sensor 42 and the shaft 43 inside the probe portion 41. The catheter control unit 271 generates one catheter image 51 (see FIG. 4) for each rotation of the sensor 42. The generated catheter image 51 is a transverse tomographic image centered on the probe portion 41 and substantially perpendicular to the probe portion 41.
The MDU 33 can further advance and retreat the sensor 42 while rotating the sensor 42 and the shaft 43 inside the probe portion 41. By rotating the sensor 42 while pulling or pushing it in, the catheter control unit 271 continuously generates a plurality of catheter images 51 substantially perpendicular to the probe unit 41. The continuously generated catheter image 51 can be used to construct a three-dimensional image. Therefore, the image acquisition catheter 40 realizes the function of a three-dimensional scanning catheter that sequentially acquires a plurality of catheter images 51 along the longitudinal direction.
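As a rough illustration of how the continuously generated catheter images 51 can be used to construct a three-dimensional image, sequential frames may be stacked along the catheter axis; the function name and the pitch parameters below are hypothetical and not taken from the present embodiment.

```python
import numpy as np

def stack_pullback_frames(frames, frame_pitch_mm, pixel_pitch_mm):
    """frames: list of equally sized 2-D catheter images acquired while
    the sensor is pulled back at a constant speed. Stacking them along
    axis 0 yields a volume whose z spacing is the pull-back distance
    between frames."""
    volume = np.stack(frames, axis=0)
    spacing = (frame_pitch_mm, pixel_pitch_mm, pixel_pitch_mm)
    return volume, spacing
```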
The advance / retreat operation of the sensor 42 includes both an operation of advancing / retreating the entire probe unit 41 and an operation of advancing / retreating the sensor 42 inside the probe unit 41. The advance / retreat operation may be automatically performed by the MDU 33 at a predetermined speed, or may be manually performed by the user.
The image acquisition catheter 40 is not limited to the mechanical scanning method that mechanically rotates and advances and retreats. It may be an electronic radial scanning type image acquisition catheter 40 using a sensor 42 in which a plurality of ultrasonic transducers are arranged in an annular shape.
Using the image acquisition catheter 40, it is possible to capture a catheter image 51 that includes, in addition to the biological tissue constituting the circulatory organs such as the heart wall and blood vessel walls, reflectors present inside the circulatory organs, such as red blood cells, and organs existing outside the circulatory organs, such as the respiratory organs and digestive organs.
In the present embodiment, a case where the image acquisition catheter 40 is used for atrial septal puncture will be described as an example. In atrial septal puncture, after the image acquisition catheter 40 is inserted into the right atrium, a Brockenbrough needle is punctured under ultrasonic guidance into the fossa ovalis, which is a thin portion of the atrial septum. The tip of the Brockenbrough needle reaches the inside of the left atrium.
When atrial septal puncture is performed, the catheter image 51 depicts the Brockenbrough needle in addition to the biological tissue constituting the circulatory organs, such as the atrial septum, right atrium, left atrium, and aorta, and reflectors such as red blood cells contained in the blood flowing inside the circulatory organs. A user such as a doctor can safely perform atrial septal puncture by using the catheter image 51 to confirm the positional relationship between the fossa ovalis and the tip of the Brockenbrough needle. The Brockenbrough needle is an example of the medical device of the present embodiment.
The application of the catheter system 10 is not limited to the atrial septal puncture. For example, the catheter system 10 can be used for procedures such as transcatheter myocardial ablation, transcatheter valve replacement, and stent placement in coronary arteries. The site to be treated using the catheter system 10 is not limited to the area around the heart. For example, the catheter system 10 can be used to treat various sites such as pancreatic ducts, bile ducts and blood vessels in the lower extremities.
Since the function and configuration of the catheter control unit 271 are the same as those of the ultrasonic diagnostic apparatus conventionally used, the details thereof will be omitted. The control unit 21 may realize the function of the catheter control unit 271.
The information processing apparatus 20 is connected, via a HIS (Hospital Information System) or the like, to various diagnostic imaging devices 37 such as an X-ray angiography apparatus, an X-ray CT (Computed Tomography) apparatus, an MRI (Magnetic Resonance Imaging) apparatus, a PET (Positron Emission Tomography) apparatus, and an ultrasonic diagnostic apparatus.
The information processing device 20 of the present embodiment is a dedicated ultrasonic diagnostic device, a personal computer, a tablet, a smartphone, or the like having the function of the ultrasonic diagnostic device. In the following description, the case where the information processing device 20 is also used for learning a trained model such as the medical device trained model 611 and creating training data will be described as an example. A computer or server different from the information processing apparatus 20 may be used for training the trained model and creating training data.
In the following description, a case where the control unit 21 performs the processing in software will mainly be described as an example. The processes described using the flowcharts and the various trained models may each be implemented by dedicated hardware.
FIG. 4 is an explanatory diagram illustrating an outline of the operation of the catheter system 10. In FIG. 4, a case where a plurality of catheter images 51 are taken while pulling the sensor 42 at a predetermined speed and the images are displayed in real time will be described as an example.
The control unit 21 captures one catheter image 51 (step S501). The control unit 21 acquires the position information of the medical device depicted in the catheter image 51 (step S502). In FIG. 4, the “x” mark indicates the position of the medical device in the catheter image 51.
The control unit 21 records the catheter image 51, the position of the catheter image 51 in the longitudinal direction of the image acquisition catheter 40, and the position information of the medical device in association with each other in the auxiliary storage device 23 or a large-capacity storage device connected to the HIS (step S503).
The control unit 21 generates classification data 52 in which each portion constituting the catheter image 51 is classified according to the depicted subject (step S504). In FIG. 4, the classification data 52 is shown by a schematic diagram in which the catheter image 51 is color-coded based on the classification result.
The control unit 21 determines whether the user has specified a two-dimensional display or a three-dimensional display (step S505). When it is determined that the user has specified the two-dimensional display (2D in step S505), the control unit 21 displays the catheter image 51 and the classification data 52 on the display device 31 by the two-dimensional display (step S506).
In step S505 of FIG. 4, it is described as if one of "two-dimensional display" and "three-dimensional display" is selected, as in "2D / 3D". However, when the user selects "3D", the control unit 21 may display both "two-dimensional display" and "three-dimensional display".
When it is determined that the user has specified the three-dimensional display (3D in step S505), the control unit 21 determines whether or not the position information of the medical device sequentially recorded in step S503 is normal (step S511). When it is determined that the position information is not normal (NO in step S511), the control unit 21 corrects the position information (step S512). Details of the processes performed in steps S511 and S512 will be described later.
When it is determined that the position information is normal (YES in step S511), or after the end of step S512, the control unit 21 performs a three-dimensional display illustrating the structure of the site under observation and the position of the medical device (step S513). As described above, the control unit 21 may display both the three-dimensional display and the two-dimensional display on one screen.
After the end of step S506 or step S513, the control unit 21 determines whether or not the acquisition of the catheter image 51 is completed (step S507). For example, when the end instruction by the user is received, the control unit 21 determines that the process is terminated.
If it is determined that the process is not completed (NO in step S507), the control unit 21 returns to step S501. When it is determined to end the process (YES in step S507), the control unit 21 ends the process.
In FIG. 4, a process flow in the case of performing a two-dimensional display (step S506) or a three-dimensional display (step S513) in real time while taking a series of catheter images 51 has been described. The control unit 21 may perform two-dimensional display or three-dimensional display in non-real time based on the data recorded in step S503.
FIG. 5A is an explanatory diagram schematically showing the operation of the image acquisition catheter 40. FIG. 5B is an explanatory diagram schematically showing a catheter image 51 taken by an image acquisition catheter 40. FIG. 5C is an explanatory diagram schematically explaining the classification data 52 generated based on the catheter image 51. The RT (Radius-Theta) format and the XY format will be described with reference to FIGS. 5A to 5C.
As described above, the sensor 42 rotates inside the image acquisition catheter 40 to transmit and receive ultrasonic waves. The catheter control unit 271 acquires radial scan line data centered on the image acquisition catheter 40, as schematically shown by eight arrows in FIG. 5A.
The catheter control unit 271 can generate the catheter image 51 shown in FIG. 5B in two formats, the RT format catheter image 518 and the XY format catheter image 519, based on the scanning line data. The RT format catheter image 518 is an image generated by arranging the scan line data in parallel with each other. The lateral direction of the RT format catheter image 518 indicates the distance from the image acquisition catheter 40.
The vertical direction of the RT format catheter image 518 indicates the scanning angle. One RT-type catheter image 518 is formed by arranging the scan line data acquired by rotating the sensor 42 360 degrees in parallel in the order of scan angles.
In FIG. 5B, the left side of the RT type catheter image 518 shows a place near the image acquisition catheter 40, and the right side of the RT type catheter image 518 shows a place far from the image acquisition catheter 40.
The XY format catheter image 519 is an image generated by arranging and interpolating each scan line data in a radial pattern. The XY format catheter image 519 shows a tomographic image in which the subject is cut perpendicular to the image acquisition catheter 40 at the position of the sensor 42.
FIG. 5C schematically shows classification data 52 classified for each depicted subject for each portion constituting the catheter image 51. The classification data 52 can also be displayed in two formats, RT format classification data 528 and XY format classification data 529. Since the image conversion method between the RT format and the XY format is known, the description thereof will be omitted.
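Although the conversion between the RT format and the XY format is known and its description is omitted, a minimal nearest-neighbour sketch is given below for reference; the output image size and the function name are illustrative assumptions, not the method prescribed by the embodiment.

```python
import numpy as np

def rt_to_xy(rt, size=512):
    """rt: RT format image (rows = scanning angle over one rotation,
    cols = distance from the sensor). Returns the corresponding XY
    format image by polar-to-Cartesian nearest-neighbour lookup."""
    n_angle, n_radius = rt.shape
    c = (size - 1) / 2.0                      # centre of the XY image
    y, x = np.indices((size, size))
    dx, dy = x - c, y - c
    r = np.sqrt(dx ** 2 + dy ** 2) * n_radius / c
    theta = (np.arctan2(dy, dx) % (2 * np.pi)) * n_angle / (2 * np.pi)
    xy = np.zeros((size, size), dtype=rt.dtype)
    inside = r < n_radius
    xy[inside] = rt[theta.astype(int)[inside] % n_angle,
                    r.astype(int)[inside]]
    return xy
```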
In FIG. 5C, the thick hatching sloping down to the right indicates the biological tissue region, such as the atrial wall and the ventricular wall, which forms the cavity into which the image acquisition catheter 40 is inserted. The fine hatching sloping down to the left indicates the inside of the first cavity, which is the blood flow region into which the tip portion of the image acquisition catheter 40 is inserted. The fine hatching sloping down to the right indicates the inside of the second cavity, which is a blood flow region other than the first cavity.
右心房から左心房への心房中隔穿刺を行なう場合においては、第1腔は右心房であり、第2腔は左心房、右心室、左心室、大動脈および冠動脈等である。以下の説明では、第1腔の内部を第1内腔領域、第2腔の内部を第2内腔領域と記載する。
When performing atrial septal puncture from the right atrium to the left atrium, the first cavity is the right atrium, and the second cavity is the left atrium, right ventricle, left ventricle, aorta, coronary artery, etc. In the following description, the inside of the first lumen is referred to as a first lumen region, and the inside of the second lumen is referred to as a second lumen region.
The thick hatching sloping down to the left indicates the non-luminal region, that is, the part of the non-biological-tissue region that is neither the first lumen region nor the second lumen region. The non-luminal region includes the region outside the heart chambers, the region outside the cardiac structure, and the like. When the depictable range of the image acquisition catheter 40 is small and the distal wall of the left atrium cannot be sufficiently visualized, the inside of the left atrium is also included in the non-luminal region. Similarly, lumens such as the left ventricle, the pulmonary artery, the pulmonary veins, and the aortic arch are included in the non-luminal region when their distal walls cannot be sufficiently visualized.
The black fill indicates the medical device region, in which a medical device such as a Brockenbrough needle is depicted. In the following description, the biological tissue region and the non-biological-tissue region may be collectively referred to as the biological-tissue-related region.
Note that the medical device is not necessarily inserted into the same first cavity as the image acquisition catheter 40. Depending on the procedure, the medical device may be inserted into the second cavity.
The hatching and the black fill shown in FIG. 5C are examples of modes in which the respective regions can be distinguished. The regions are displayed on the display device 31 using, for example, different colors. The control unit 21 realizes the function of a first aspect output unit that outputs the first lumen region, the second lumen region, and the biological tissue region in a mutually distinguishable manner. The control unit 21 also realizes the function of a second aspect output unit that outputs the first lumen region, the second lumen region, the non-luminal region, and the biological tissue region in a mutually distinguishable manner.
During an IVR procedure, display in the XY format is suitable, for example when confirming the position of a Brockenbrough needle in order to perform an atrial septal puncture. In the XY display, however, the information near the image acquisition catheter 40 is compressed so that the amount of data decreases, while at positions far from the image acquisition catheter 40 data that does not originally exist is added by interpolation. Therefore, when the catheter image 51 is analyzed, using the RT format image yields more accurate results than using the XY format image.
In the following description, the control unit 21 generates the RT format classification data 528 based on the RT format catheter image 518. The control unit 21 converts the XY format catheter image 519 to generate the RT format catheter image 518, and converts the RT format classification data 528 to generate the XY format classification data 529.
The classification data 52 will be described with a specific example. A "biological tissue region label" is recorded for pixels classified into the "biological tissue region", a "first lumen region label" for pixels classified into the "first lumen region", a "second lumen region label" for pixels classified into the "second lumen region", a "non-luminal region label" for pixels classified into the "non-luminal region", a "medical device region label" for pixels classified into the "medical device region", and a "non-biological-tissue region label" for pixels classified into the "non-biological-tissue region". Each label is represented by, for example, an integer.
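As a minimal sketch, the integer labels could be encoded as follows; the specific values are assumptions, since the text only states that each label is represented by an integer.

```python
from enum import IntEnum

class RegionLabel(IntEnum):
    """Hypothetical integer encoding of the labels of the classification
    data 52; the actual values are not specified in the text."""
    BIOLOGICAL_TISSUE = 1
    FIRST_LUMEN = 2
    SECOND_LUMEN = 3
    NON_LUMEN = 4
    MEDICAL_DEVICE = 5
    NON_BIOLOGICAL_TISSUE = 6

# The classification data 52 can then be held as an integer array with the
# same shape as the catheter image, one label per pixel.
```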
Note that the control unit 21 may generate the XY format classification data 529 based on the XY format catheter image 519. The control unit 21 may also generate the RT format classification data 528 based on the XY format classification data 529.
FIG. 6 is an explanatory diagram illustrating the configuration of the medical device learned model 611. The medical device learned model 611 is a model that accepts the catheter image 51 and outputs first position information regarding the position where the medical device is depicted. The medical device learned model 611 realizes step S502 described with reference to FIG. 4. The output layer of the medical device learned model 611 functions as a first position information output unit that outputs the first position information.
In FIG. 6, the input of the medical device learned model 611 is the RT format catheter image 518. The first position information is the probability, for each portion of the RT format catheter image 518, that the medical device is depicted there. In FIG. 6, places where the probability that the medical device is depicted is high are shown with dark hatching, and places where that probability is low are shown without hatching.
The medical device learned model 611 is generated by machine learning using, for example, a CNN (Convolutional Neural Network) structure. Examples of CNNs that can be used to generate the medical device learned model 611 include R-CNN (Region Based Convolutional Neural Network), YOLO (You Only Look Once), U-Net, and GAN (Generative Adversarial Network). The medical device learned model 611 may also be generated using a neural network structure other than a CNN.
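The following is a minimal sketch of such a model in PyTorch: a small fully convolutional network that takes a one-channel RT format image and outputs a per-pixel probability map as the first position information. The layer sizes are illustrative assumptions, not the architecture actually used.

```python
import torch
from torch import nn

class DeviceProbabilityNet(nn.Module):
    """Minimal fully convolutional sketch of the medical device learned
    model 611: input is a one-channel RT format image, output is a
    per-pixel probability that the medical device is depicted."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),   # per-pixel logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.layers(x))  # probabilities in [0, 1]

# Example: one 512 (angles) x 256 (samples) RT format image.
model = DeviceProbabilityNet()
prob_map = model(torch.randn(1, 1, 512, 256))  # shape (1, 1, 512, 256)
```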
The medical device learned model 611 may be a model that accepts a plurality of catheter images 51 acquired in time series and outputs the first position information for the latest catheter image 51. A model that accepts time-series input, such as an RNN (Recurrent Neural Network), can be combined with the neural network structures mentioned above to generate the medical device learned model 611.
The RNN is, for example, an LSTM (Long Short-Term Memory). When an LSTM is used, the medical device learned model 611 includes a memory unit that holds information about previously input catheter images 51. The medical device learned model 611 outputs the first position information based on the information held in the memory unit and the latest catheter image 51.
When a plurality of catheter images 51 acquired in time series are used, the medical device learned model 611 may include a recursive input unit that feeds the output based on a previously input catheter image 51 back in together with the next catheter image 51. The medical device learned model 611 then outputs the first position information based on the latest catheter image 51 and the input from the recursive input unit. By using catheter images 51 acquired in time series, a medical device learned model 611 can be realized that is less susceptible to image noise and the like and outputs the first position information with high accuracy.
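A sketch of one way to combine a CNN feature extractor with an LSTM so that the output for the latest frame reflects previously input catheter images 51 follows. All dimensions, and the choice to regress a single position rather than a probability map, are simplifying assumptions.

```python
import torch
from torch import nn

class TemporalDeviceNet(nn.Module):
    """Sketch of a time-series variant: per-frame CNN features are fed to an
    LSTM, and the prediction for the latest frame uses the earlier frames."""
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
            nn.Flatten(), nn.Linear(8 * 8 * 8, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # e.g. (angle, distance) of device

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 1, height, width)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])       # prediction for the latest frame
```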
The medical device learned model 611 may output a place where the probability that the medical device is depicted is high, expressed as the position of a single pixel on the input catheter image 51. For example, the medical device learned model 611 may be a model that, after calculating the probability that the medical device is depicted for each portion of the catheter image 51 as shown in FIG. 6, outputs the position of the pixel with the highest probability. The medical device learned model 611 may output the position of the center of gravity of the region where the probability that the medical device is depicted exceeds a predetermined threshold. The medical device learned model 611 may also output the region itself where that probability exceeds a predetermined threshold.
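Two of these output variants can be sketched directly from a probability map; the threshold value is an assumption.

```python
import numpy as np

def first_position_from_probability(prob: np.ndarray, threshold: float = 0.5):
    """Return the single most probable pixel and, if any pixel exceeds the
    threshold, the centroid of the above-threshold region (else None)."""
    peak = np.unravel_index(np.argmax(prob), prob.shape)  # (row, col)
    mask = prob > threshold
    centroid = None
    if mask.any():
        rows, cols = np.nonzero(mask)
        centroid = (rows.mean(), cols.mean())
    return peak, centroid
```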
Note that a plurality of medical devices may be used at the same time. When a plurality of medical devices are depicted in the catheter image 51, the medical device learned model 611 is desirably a model that outputs the first position information for each of the plurality of medical devices.
The medical device learned model 611 may instead be a model that outputs the first position information of only one medical device. In that case, the control unit 21 can input into the medical device learned model 611 an RT format catheter image 518 in which the surroundings of the first position information output by the model have been masked, and thereby acquire the first position information of a second medical device. By repeating the same processing, the control unit 21 can also acquire the first position information of the third and subsequent medical devices.
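The iterative masking procedure can be sketched as follows, assuming `model` is a callable that maps an RT format image to a probability map; the mask radius and the device count are illustrative assumptions.

```python
import numpy as np

def detect_devices(image, model, n_devices: int = 3, mask_radius: int = 20):
    """Run the single-device model, mask around each detection, and run
    again to find the next device."""
    img = image.copy()
    positions = []
    for _ in range(n_devices):
        prob = model(img)
        r, c = np.unravel_index(np.argmax(prob), prob.shape)
        positions.append((r, c))
        rr, cc = np.ogrid[:img.shape[0], :img.shape[1]]
        img[(rr - r) ** 2 + (cc - c) ** 2 <= mask_radius ** 2] = 0  # mask it
    return positions
```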
FIG. 7 is an explanatory diagram illustrating the configuration of the classification model 62. The classification model 62 is a model that accepts the catheter image 51 and outputs the classification data 52 in which each portion constituting the catheter image 51 is classified according to the subject depicted there. The classification model 62 realizes step S504 described with reference to FIG. 4.
A specific example follows. The classification model 62 classifies each pixel constituting the input RT format catheter image 518 into, for example, the "biological tissue region", the "first lumen region", the "second lumen region", the "non-luminal region", and the "medical device region", and outputs the RT format classification data 528 in which the position of each pixel is associated with a label indicating the classification result.
The classification model 62 may divide the catheter image 51 into regions of arbitrary size, for example 3 pixels high by 3 pixels wide for a total of 9 pixels, and output classification data 52 in which each such region is classified. The classification model 62 is, for example, a trained model that performs semantic segmentation on the catheter image 51. A specific example of the classification model 62 will be described later.
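As a minimal sketch, the per-pixel output of such a segmentation network is commonly a set of per-class scores, from which classification data like the above follow by taking the highest-scoring class at each pixel; the tensor shapes are assumptions.

```python
import torch

# logits: per-class scores of a hypothetical segmentation network, with
# shape (batch, n_classes, height, width); five region classes as above.
logits = torch.randn(1, 5, 512, 256)
classification_data = logits.argmax(dim=1)  # (1, 512, 256) integer labels
```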
FIG. 8 is an explanatory diagram illustrating an outline of processing related to position information. A plurality of catheter images 51 are captured while the sensor 42 is moved in the longitudinal direction of the image acquisition catheter 40. In FIG. 8, the line drawing of a substantially truncated cone schematically shows a biological tissue region constructed three-dimensionally based on the plurality of catheter images 51. The inside of the substantially truncated cone corresponds to the first lumen region.
The white circles and the black circle indicate the positions of the medical device acquired from the respective catheter images 51. The black circle is located far away from the white circles and is therefore determined to be an erroneous detection. The shape of the medical device can be reproduced by the thick line that smoothly connects the white circles. The x marks indicate complementary information that complements the position information of the medical device in frames where it could not be detected.
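The rejection of outliers and the smooth connection and complementation of positions can be sketched as follows; the distance criterion and the cubic spline are assumptions standing in for whatever criterion and curve the actual implementation uses (details are given in Embodiment 8).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def reconstruct_device_shape(z, points, max_jump: float = 5.0):
    """Drop detections that jump far from their neighbors (black circles),
    fit a smooth curve through the rest (white circles), and let the curve
    fill in frames where the device was not detected (x marks)."""
    z, points = np.asarray(z, float), np.asarray(points, float)
    keep = np.ones(len(points), dtype=bool)
    for i in range(1, len(points) - 1):
        midpoint = (points[i - 1] + points[i + 1]) / 2
        if np.linalg.norm(points[i] - midpoint) > max_jump:
            keep[i] = False  # likely an erroneous detection
    spline = CubicSpline(z[keep], points[keep])
    return spline  # spline(z_missing) complements undetected positions
```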
Details of the processing described with reference to FIG. 8 will be given in Embodiment 8. The processing described with reference to FIG. 8 realizes the processing of steps S511 and S512 described with reference to FIG. 4.
It is known that, for example when the medical device is in contact with the biological tissue region, it can be difficult to identify where the medical device is depicted even when a user such as a skilled physician or laboratory technician interprets a single catheter image 51 as a still image. When observing the catheter images 51 as a moving image, however, the user can determine the position of the medical device comparatively easily, because the user interprets each image expecting the medical device to be in roughly the same position as in the previous frame.
In the processing described with reference to FIG. 8, the medical device is reconstructed without contradiction using the position information of the medical device acquired from each of the plurality of catheter images 51. Such processing realizes a catheter system 10 that determines the position of the medical device as accurately as a user observing a moving image and displays the shape of the medical device in a three-dimensional image.
According to the present embodiment, the displays of steps S506 and S513 provide a catheter system 10 that supports understanding of the catheter image 51 acquired using the image acquisition catheter 40. By using the catheter system 10 of the present embodiment, the user can accurately grasp the position of the medical device and perform IVR safely.
[Embodiment 2]
The present embodiment relates to a method for generating the medical device learned model 611. Description of the parts common to Embodiment 1 is omitted. In the present embodiment, a case where the medical device learned model 611 is generated using the information processing apparatus 20 described with reference to FIG. 3 will be described as an example.
The medical device learned model 611 may be created using a computer or the like other than the information processing apparatus 20. The medical device learned model 611 for which machine learning has been completed may be copied to the auxiliary storage device 23 via a network. A medical device learned model 611 trained on one piece of hardware can be used by a plurality of information processing apparatuses 20.
FIG. 9 is an explanatory diagram illustrating the record layout of the medical device position training data DB (Database) 71. The medical device position training data DB 71 is a database in which catheter images 51 are recorded in association with position information of the medical device, and it is used for training the medical device learned model 611 by machine learning.
The medical device position training data DB 71 has a catheter image field and a position information field. In the catheter image field, a catheter image 51 such as the RT format catheter image 518 is recorded. So-called sound ray data indicating the ultrasonic signals received by the sensor 42 may be recorded in the catheter image field instead, as may scanning line data generated based on the sound ray data.
In the position information field, the position information of the medical device depicted in the catheter image 51 is recorded. The position information is, for example, information indicating the position of a single pixel that the labeler has marked on the catheter image 51, as described later. The position information may instead be information indicating a circular region centered near the point marked by the labeler. The circle has dimensions that do not exceed the size of the medical device depicted in the catheter image 51; for example, it is small enough to be inscribed in a square of 50 pixels or fewer on each side.
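A minimal sketch of one record of this database as a data structure; the field names and types are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DevicePositionRecord:
    """Hypothetical shape of one record of the medical device position
    training data DB 71; the DB may equally store sound ray data or
    scanning line data in place of the RT format image."""
    catheter_image: np.ndarray   # RT format catheter image 518
    position: tuple[int, int]    # (row, col) of the labeler's single mark
```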
FIG. 10 is an example of a screen used for creating the medical device position training data DB 71. On the screen of FIG. 10, a pair of catheter images 51, the RT format catheter image 518 and the XY format catheter image 519, is displayed. The RT format catheter image 518 and the XY format catheter image 519 are images created based on the same sound ray data.
A control button area 782 is displayed below the catheter images 51. At the top of the control button area 782 are arranged the frame number of the catheter image 51 being displayed and a jump button used when the user enters an arbitrary frame number to jump the display to that frame.
Below the frame number and the like are arranged various buttons used by the user to perform operations such as fast forward, rewind, and frame-by-frame advance. Since these buttons are the same as those generally used in various image playback devices and the like, their description is omitted.
The user in the present embodiment is a person in charge of creating training data by viewing the prerecorded catheter images 51 and labeling the positions of the medical device. In the following description, the person in charge of creating training data is referred to as the labeler. The labeler is a physician or laboratory technician proficient in interpreting catheter images 51, or a person trained to perform accurate labeling. Further, in the following description, the labeler's act of placing a mark or the like on the catheter image 51 for labeling may be referred to as marking.
The labeler observes the displayed catheter image 51 and determines the position where the medical device is depicted. In general, the area in which the medical device is depicted is very small relative to the total area of the catheter image 51. The labeler moves the cursor 781 to approximately the center of the area in which the medical device is depicted and performs marking by a click operation or the like. When the display device 31 is a touch panel, the labeler may perform marking by a tap operation using a finger, a stylus pen, or the like. The labeler may also perform marking by a so-called flick operation.
Note that the labeler may perform marking on either catheter image 51, the RT format catheter image 518 or the XY format catheter image 519. The control unit 21 may display a mark at the corresponding position of the other catheter image 51.
The control unit 21 creates a new record in the medical device position training data DB 71 and records the catheter image 51 in association with the position marked by the labeler. The control unit 21 then displays the next catheter image 51 on the display device 31. By repeating the above processing many times, the medical device position training data DB 71 is created.
That is, the labeler can mark a series of catheter images 51 one after another simply by performing click operations or the like on the catheter images 51, without operating any button in the control button area 782. For one catheter image 51 in which one medical device is depicted, the labeler performs only a single operation such as one click.
As described above, a plurality of medical devices may be depicted in the catheter image 51. The labeler can mark each medical device with a single click operation or the like. In the following description, the case where one medical device is depicted in one catheter image 51 is described as an example.
FIG. 11 is a flowchart illustrating the flow of processing of a program for creating the medical device position training data DB 71. The case where the medical device position training data DB 71 is created using the information processing apparatus 20 is described as an example. The program of FIG. 11 may instead be executed on hardware other than the information processing apparatus 20.
Prior to execution of the program of FIG. 11, a large number of catheter images 51 are recorded in the auxiliary storage device 23 or in an external large-capacity storage device. In the following description, the case where the catheter images 51 are recorded in the auxiliary storage device 23 in the form of moving image data containing a plurality of RT format catheter images 518 captured in time series is described as an example.
The control unit 21 acquires one frame of the RT format catheter image 518 from the auxiliary storage device 23 (step S671). The control unit 21 converts the RT format catheter image 518 to generate the XY format catheter image 519 (step S672). The control unit 21 displays the screen described with reference to FIG. 10 on the display device 31 (step S673).
The control unit 21 accepts an input operation of position information by the labeler via the input device 32 (step S674). Specifically, the input operation is a click operation, a tap operation, or the like on the RT format catheter image 518 or the XY format catheter image 519.
The control unit 21 displays a mark, such as a small circle, at the position where the input operation was accepted (step S675). Since accepting an input operation on an image displayed on the display device 31 via the input device 32 and displaying a mark on the display device 31 are conventionally used user interface techniques, their details are omitted.
The control unit 21 determines whether the image on which the input operation was accepted in step S674 is the RT format catheter image 518 (step S676). When determining that it is the RT format catheter image 518 (YES in step S676), the control unit 21 also displays a mark at the corresponding position of the XY format catheter image 519 (step S677). When determining that it is not the RT format catheter image 518 (NO in step S676), the control unit 21 also displays a mark at the corresponding position of the RT format catheter image 518 (step S678).
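Mirroring a mark between the two images reduces to converting the clicked pixel coordinates between formats; a sketch of the RT-to-XY direction follows, under the same geometric assumptions as the rt_to_xy sketch above.

```python
import numpy as np

def rt_point_to_xy(row: int, col: int, n_angles: int, n_samples: int,
                   out_size: int = 512) -> tuple[float, float]:
    """Map a pixel (row = angle index, col = distance index) of the RT image
    to (x, y) coordinates on the XY image; inverse of the rt_to_xy lookup."""
    theta = row / n_angles * 2 * np.pi
    radius = col / n_samples * (out_size - 1) / 2
    center = (out_size - 1) / 2
    return center + radius * np.cos(theta), center + radius * np.sin(theta)
```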
The control unit 21 creates a new record in the medical device position training data DB 71. The control unit 21 records the catheter image 51 in association with the position information input by the labeler in the medical device position training data DB 71 (step S679).
The catheter image 51 recorded in step S679 may be only the RT format catheter image 518 acquired in step S671, or both the RT format catheter image 518 and the XY format catheter image 519 generated in step S672. The catheter image 51 recorded in step S679 may instead be the sound ray data for one rotation received by the sensor 42, or the scanning line data generated by signal processing of the sound ray data.
The position information recorded in step S679 is, for example, information indicating the position of a single pixel on the RT format catheter image 518 corresponding to the position where the labeler performed a click operation or the like using the input device 32. The position information may instead be information indicating the position where the labeler performed the click operation or the like together with a surrounding range.
The control unit 21 determines whether to end the processing (step S680). For example, when the processing of the catheter images 51 recorded in the auxiliary storage device 23 has been completed, the control unit 21 determines that the processing is to be ended. When determining to end (YES in step S680), the control unit 21 ends the processing.
When determining not to end the processing (NO in step S680), the control unit 21 returns to step S671. The control unit 21 acquires the next RT format catheter image 518 in step S671 and executes the processing from step S672 onward. That is, the control unit 21 automatically acquires and displays the next RT format catheter image 518 without waiting for an operation on the buttons displayed in the control button area 782.
Through the loop from step S671 to step S680, the control unit 21 records, in the medical device position training data DB 71, training data based on the large number of RT format catheter images 518 recorded in the auxiliary storage device 23.
Note that the control unit 21 may display, for example, a "save button" on the screen described with reference to FIG. 10 and execute step S679 when selection of the "save button" is accepted. The control unit 21 may further display, for example, an "AUTO button" on that screen and, while selection of the "AUTO button" is accepted, execute step S679 automatically without waiting for selection of the "save button".
In the following description, the case where the catheter image 51 recorded in the medical device position training data DB 71 in step S679 is the RT format catheter image 518 and the position information is the position of a single pixel on the RT format catheter image 518 is described as an example.
FIG. 12 is a flowchart illustrating the flow of processing of a program for generating the medical device learned model 611. Prior to execution of the program of FIG. 12, an untrained model combining, for example, convolutional layers, pooling layers, and fully connected layers is prepared. As described above, the untrained model is, for example, a CNN model. Examples of CNNs that can be used to generate the medical device learned model 611 include R-CNN, YOLO, U-Net, and GAN. The medical device learned model 611 may also be generated using a neural network structure other than a CNN.
The control unit 21 acquires, from the medical device position training data DB 71, the training records used for one epoch of training (step S571). As described above, each training record recorded in the medical device position training data DB 71 is a combination of an RT format catheter image 518 and coordinates indicating the position of the medical device depicted in that image.
The control unit 21 adjusts the parameters of the model so that, when the RT format catheter image 518 is input to the input layer of the model, the position of the pixel corresponding to the position information is output from the output layer (step S572). In acquiring the training records and adjusting the model parameters, the program may as appropriate have functions for causing the control unit 21 to accept corrections by the user, present the grounds for a judgment, perform additional learning, and the like.
The control unit 21 determines whether to end the processing (step S573). For example, the control unit 21 determines that the processing is to be ended when training for a predetermined number of epochs has been completed. The control unit 21 may instead acquire test data from the medical device position training data DB 71, input it to the model undergoing machine learning, and determine that the processing is to be ended when output of a predetermined accuracy is obtained.
When determining not to end the processing (NO in step S573), the control unit 21 returns to step S571. When determining to end the processing (YES in step S573), the control unit 21 records the trained parameters in the auxiliary storage device 23 (step S574). The control unit 21 then ends the processing. Through the above processing, the medical device learned model 611, which accepts the catheter image 51 and outputs the first position information, is generated.
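A minimal sketch of this training flow, assuming a data loader yielding (RT image, target map) pairs in which the labeled pixel is marked; the loss function, optimizer, learning rate, and file name are illustrative assumptions.

```python
import torch
from torch import nn

def train_device_model(model, loader, epochs: int = 10, lr: float = 1e-3):
    """Sketch of the FIG. 12 flow: iterate over training records and adjust
    the model parameters toward the labeled device position."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()   # model outputs per-pixel probabilities
    for epoch in range(epochs):             # the S571-S573 loop
        for rt_image, target_map in loader:
            optimizer.zero_grad()
            loss = criterion(model(rt_image), target_map)
            loss.backward()
            optimizer.step()                # parameter adjustment of S572
    torch.save(model.state_dict(), "model611.pt")  # recording step S574
```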
Prior to execution of the program of FIG. 12, a model that accepts time-series input, such as an RNN, may be prepared instead. The RNN is, for example, an LSTM. In that case, in step S572 the control unit 21 adjusts the parameters of the model so that, when a plurality of RT format catheter images 518 captured in time series are input to the input layer, the position of the pixel corresponding to the position information associated with the last RT format catheter image 518 in the time series is output from the output layer.
FIG. 13 is a flowchart illustrating the flow of processing of a program for adding data to the medical device position training data DB 71. The program of FIG. 13 adds training data to the medical device position training data DB 71 after the medical device learned model 611 has been created. The added training data are used for additional training of the medical device learned model 611.
Prior to execution of the program of FIG. 13, a large number of catheter images 51 that have not yet been used for creating the medical device position training data DB 71 are recorded in the auxiliary storage device 23 or in an external large-capacity storage device. In the following description, the case where the catheter images 51 are recorded in the auxiliary storage device 23 in the form of moving image data containing a plurality of RT format catheter images 518 captured in time series is described as an example.
The control unit 21 acquires one frame of the RT format catheter image 518 from the auxiliary storage device 23 (step S701). The control unit 21 inputs the RT format catheter image 518 into the medical device learned model 611 and acquires the first position information (step S702).
The control unit 21 converts the RT format catheter image 518 to generate the XY format catheter image 519 (step S703). The control unit 21 displays the screen described with reference to FIG. 10 on the display device 31, with a mark indicating the first position information acquired in step S702 superimposed on each of the RT format catheter image 518 and the XY format catheter image 519 (step S704).
When the labeler determines that the position of the automatically displayed mark is inappropriate, the labeler performs a single click operation or the like to input the correct position of the medical device. That is, the labeler inputs a correction instruction for the automatically displayed mark.
The control unit 21 determines whether an input operation by the labeler via the input device 32 has been accepted within a predetermined time (step S705). It is desirable that the labeler can set the predetermined time as appropriate. Specifically, the input operation is a click operation, a tap operation, or the like on the RT format catheter image 518 or the XY format catheter image 519.
When determining that an input operation has been accepted (YES in step S705), the control unit 21 displays a mark, such as a small circle, at the position where the input operation was accepted (step S706). The mark displayed in step S706 desirably differs in color, shape, or the like from the mark indicating the position information acquired in step S702. The control unit 21 may erase the mark indicating the position information acquired in step S702.
The control unit 21 determines whether the image on which the input operation was accepted in step S705 is the RT format catheter image 518 (step S707). When determining that it is the RT format catheter image 518 (YES in step S707), the control unit 21 also displays a mark at the corresponding position of the XY format catheter image 519 (step S708). When determining that it is not the RT format catheter image 518 (NO in step S707), the control unit 21 also displays a mark at the corresponding position of the RT format catheter image 518 (step S709).
The control unit 21 creates a new record in the medical device position training data DB 71. The control unit 21 records, in the medical device position training data DB 71, correction data in which the catheter image 51 is associated with the position information input by the labeler (step S710).
When determining that no input operation has been accepted (NO in step S705), the control unit 21 creates a new record in the medical device position training data DB 71. The control unit 21 records, in the medical device position training data DB 71, non-correction data in which the catheter image 51 is associated with the first position information acquired in step S702 (step S711).
After step S710 or step S711 ends, the control unit 21 determines whether to end the processing (step S712). For example, when the processing of the catheter images 51 recorded in the auxiliary storage device 23 has been completed, the control unit 21 determines that the processing is to be ended. When determining to end (YES in step S712), the control unit 21 ends the processing.
When determining not to end the processing (NO in step S712), the control unit 21 returns to step S701. The control unit 21 acquires the next RT format catheter image 518 in step S701 and executes the processing from step S702 onward. Through the loop from step S701 to step S712, the control unit 21 adds training data based on the large number of RT format catheter images 518 recorded in the auxiliary storage device 23 to the medical device position training data DB 71.
Note that the control unit 21 may display, for example, an "OK button" for approving the output of the medical device learned model 611 on the screen described with reference to FIG. 10. When selection of the "OK button" is accepted, the control unit 21 determines that an instruction corresponding to "NO" in step S705 has been received, and executes step S711.
According to the present embodiment, the labeler can mark one medical device depicted in the catheter image 51 with only a single operation, such as one click operation or one tap operation. The control unit 21 may also accept an operation of marking one medical device by a so-called double-click operation or double-tap operation. Compared with marking the outline of a medical device, the marking work is greatly reduced, so the burden on the labeler can be lightened. According to the present embodiment, a large amount of training data can be created in a short time.
According to the present embodiment, when a plurality of medical devices are depicted in the catheter image 51, the labeler can mark each medical device with a single click operation or the like.
Note that the control unit 21 may display, for example, an "OK button" on the screen described with reference to FIG. 10 and execute step S679 when selection of the "OK button" is accepted.
According to the present embodiment, by superimposing the position information acquired using the medical device learned model 611 on the catheter image 51, additional training data can be created promptly while reducing the burden on the labeler.
[Modification 2-1]
The medical device position training data DB 71 may have a field for recording the type of the medical device. In that case, on the screen described with reference to FIG. 10, the control unit 21 accepts input of the type of medical device, such as "Brockenbrough needle", "guide wire", or "balloon catheter".
By performing machine learning using the medical device position training data DB 71 created in this way, a medical device learned model 611 that outputs the type of the medical device in addition to its position is generated.
[Embodiment 3]
The present embodiment relates to a catheter system 10 that uses two trained models to acquire second position information regarding the position of the medical device from the catheter image 51. Description of the parts common to Embodiment 2 is omitted.
FIG. 14 is an explanatory diagram illustrating how a medical device is depicted. In FIG. 14, the medical device depicted in the RT format catheter image 518 and the XY format catheter image 519 is shown with emphasis.
In general, medical devices reflect ultrasonic waves more strongly than biological tissue. The ultrasonic waves emitted from the sensor 42 have difficulty reaching beyond the medical device. The medical device is therefore depicted as a high-echo region on the side close to the image acquisition catheter 40, followed by a low-echo region behind it. The low-echo region that follows behind the medical device is referred to as an acoustic shadow. In FIG. 14, the acoustic shadow portions are indicated by vertical-line hatching.
In the RT format catheter image 518, the acoustic shadow is depicted as a horizontal straight band. In the XY format catheter image 519, the acoustic shadow is depicted as a fan shape. In either case, a high-luminance region is depicted at a location closer to the image acquisition catheter 40 than the acoustic shadow. The high-luminance region may also be depicted in the form of so-called multiple echoes, repeating regularly along the scanning line direction.
The scanning angle at which the medical device is depicted can be determined based on features in the scanning angle direction of the RT format catheter image 518, that is, the lateral direction in FIG. 14.
FIG. 15 is an explanatory diagram illustrating the configuration of the angle learned model 612. The angle learned model 612 is a model that accepts the catheter image 51 and outputs scanning angle information regarding the scanning angle at which the medical device is depicted.
FIG. 15 schematically shows an angle learned model 612 that accepts the RT format catheter image 518 and outputs scanning angle information indicating, for each scanning angle, that is, along the vertical direction of the RT format catheter image 518, the probability that the medical device is depicted. Since the medical device is depicted across a plurality of scanning angles, the probabilities output as the scanning angle information total more than 100%. The angle learned model 612 may instead extract and output the angles at which the probability that the medical device is depicted is high.
The angle learned model 612 is generated by machine learning. Training data for generating the angle learned model 612 can be obtained by extracting the scanning angle component of the position information from the position information field of the medical device position training data DB 71 described with reference to FIG. 9.
An outline of the processing for generating the angle learned model 612 is described using the flowchart of FIG. 12. Prior to execution of the program of FIG. 12, an untrained model, such as a CNN combining convolutional layers, pooling layers, and fully connected layers, is prepared. The program of FIG. 12 adjusts the parameters of the prepared model so that machine learning is performed.
The control unit 21 acquires, from the medical device position training data DB 71, the training records used for one epoch of training (step S571). As described above, each training record recorded in the medical device position training data DB 71 is a combination of an RT format catheter image 518 and coordinates indicating the position of the medical device depicted in that image.
The control unit 21 adjusts the parameters of the model so that, when the RT format catheter image 518 is input to the input layer of the model, the scanning angle corresponding to the position information is output from the output layer (step S572). In acquiring the training records and adjusting the model parameters, the program may as appropriate have functions for causing the control unit 21 to accept corrections by the user, present the grounds for a judgment, perform additional learning, and the like.
The control unit 21 determines whether to end the processing (step S573). For example, the control unit 21 determines that the processing is to be ended when training for a predetermined number of epochs has been completed. The control unit 21 may instead acquire test data from the medical device position training data DB 71, input it to the model undergoing machine learning, and determine that the processing is to be ended when output of a predetermined accuracy is obtained.
When determining not to end the processing (NO in step S573), the control unit 21 returns to step S571. When determining to end the processing (YES in step S573), the control unit 21 records the trained parameters in the auxiliary storage device 23 (step S574). The control unit 21 then ends the processing. Through the above processing, the angle learned model 612, which accepts the catheter image 51 and outputs information regarding the scanning angle, is generated.
Prior to execution of the program of FIG. 12, a model that accepts time-series input, such as an RNN, may be prepared instead. The RNN is, for example, an LSTM. In that case, in step S572 the control unit 21 adjusts the parameters of the model so that, when a plurality of RT format catheter images 518 captured in time series are input to the input layer, information on the scanning angle associated with the last RT format catheter image 518 in the time series is output from the output layer.
Instead of using the angle learned model 612, the control unit 21 may determine the scanning angle at which the medical device is depicted by pattern matching.
FIG. 16 is an explanatory diagram illustrating the position information model 619. The position information model 619 is a model that accepts the RT format catheter image 518 and outputs second position information indicating the position of the depicted medical device. The position information model 619 includes the medical device learned model 611, the angle learned model 612, and a position information synthesis unit 615.
The same RT format catheter image 518 is input to both the medical device learned model 611 and the angle learned model 612. The medical device learned model 611 outputs the first position information. As described with reference to FIG. 6, the first position information is the probability, for each portion of the RT format catheter image 518, that the medical device is depicted there. In the following description, the probability that the medical device is depicted at the position where the distance from the center of the image acquisition catheter 40 is r and the scanning angle is θ is denoted by P1(r, θ).
角度学習済モデル612から、走査角度情報が出力される。走査角度情報は、それぞれの走査角度において医療器具が描出されている確率である。以下の説明では、走査角度θの方向に医療器具が描出されている確率をPt(θ)で示す。
Scanning angle information is output from the angle-learned model 612. The scanning angle information is the probability that the medical device is depicted at each scanning angle. In the following description, the probability that the medical device is drawn in the direction of the scanning angle θ is shown by Pt (θ).
The first position information and the scanning angle information are combined by the position information synthesizing unit 615 to generate the second position information. The second position information is the probability that the medical device is visualized at each site on the RT format catheter image 518, similarly to the first position information. The input end of the position information synthesis unit 615 fulfills the functions of the first position information acquisition unit and the scanning angle information acquisition unit.
Since the medical device is visualized on the RT format catheter image 518 with a certain extent, the total sum of P1 and the total sum of Pt may each be larger than 1. The second position information P2(r, θ) at the position where the distance from the center of the image acquisition catheter 40 is r and the scanning angle is θ is calculated by, for example, equation (1-1).
P2(r, θ) = P1(r, θ) + kPt(θ) ‥‥‥ (1-1)
k is a coefficient relating to the weighting between the first position information and the scanning angle information.
The second position information P2(r, θ) may be calculated by equation (1-2).
P2(r, θ) = P1(r, θ) × Pt(θ) ‥‥‥ (1-2)
The second position information P2(r, θ) may be calculated by equation (1-3). Equation (1-3) calculates the average value of the first position information and the scanning angle information.
P2(r, θ) = (P1(r, θ) + Pt(θ)) / 2 ‥‥‥ (1-3)
Note that the second position information P2(r, θ) in equations (1-1) to (1-3) is not a probability but a numerical value that relatively indicates how likely it is that the medical device is visualized at the position. By synthesizing the first position information and the scanning angle information, the accuracy in the scanning angle direction is improved. The second position information may be information about the position where the value of P2(r, θ) is largest. The second position information may also be determined by a function other than those exemplified in equations (1-1) to (1-3).
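The three combination rules can be implemented directly. The following sketch (the array shapes and the default k are assumptions) broadcasts Pt(θ) over the depth axis:

```python
import numpy as np

def second_position_info(p1: np.ndarray, pt: np.ndarray,
                         k: float = 0.5, mode: str = "sum") -> np.ndarray:
    """p1: P1(theta, r) with shape (n_angles, n_depths);
    pt: Pt(theta) with shape (n_angles,). Returns P2."""
    pt_col = pt[:, np.newaxis]        # broadcast Pt over the depth axis
    if mode == "sum":                 # equation (1-1)
        return p1 + k * pt_col
    if mode == "product":             # equation (1-2)
        return p1 * pt_col
    if mode == "mean":                # equation (1-3)
        return (p1 + pt_col) / 2.0
    raise ValueError(mode)

# The single most likely device position, if needed:
# theta_idx, r_idx = np.unravel_index(np.argmax(p2), p2.shape)
```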
The second position information is an example of the position information of the medical device acquired in step S502 described with reference to FIG. The medical device trained model 611, the angle trained model 612, and the position information synthesizing unit 615 cooperate to realize step S502 described with reference to FIG. The output end of the position information synthesis unit 615 functions as a second position information output unit that outputs the second position information based on the first position information and the scanning angle information.
FIG. 17 is a flowchart illustrating a processing flow of the program of the third embodiment. The flowchart described with reference to FIG. 17 shows the details of the process of step S502 described with reference to FIG.
The control unit 21 acquires one frame of the RT format catheter image 518 (step S541). The control unit 21 inputs the RT format catheter image 518 into the medical device trained model 611 and acquires the first position information (step S542). The control unit 21 inputs the RT format catheter image 518 into the angle-learned model 612 and acquires the scanning angle information (step S543).
The control unit 21 calculates the second position information based on, for example, equation (1-1) or equation (1-2) (step S544). After that, the control unit 21 ends the process. Thereafter, the control unit 21 uses the second position information calculated in step S544 as the position information in step S502.
According to the present embodiment, it is possible to provide the catheter system 10 that accurately calculates the position information of the medical device depicted in the catheter image 51.
[Embodiment 4]
The present embodiment relates to a specific example of the classification model 62 described with reference to FIG. 7. FIG. 18 is an explanatory diagram illustrating the configuration of the classification model 62. The classification model 62 includes a first classification trained model 621 and a classification data conversion unit 629.
The first classification trained model 621 accepts the RT format catheter image 518 and outputs first classification data 521 in which each part constituting the RT format catheter image 518 is classified into a "living tissue region", a "non-living tissue region", or a "medical device region". The first classification trained model 621 further outputs, for each part, the reliability of the classification result, that is, the probability that the classification result is correct. The output layer of the first classification trained model 621 fulfills the function of a first classification data output unit that outputs the first classification data 521.
The upper right part of FIG. 18 schematically shows the first classification data 521 in RT format. Thick hatching sloping down to the right indicates living tissue regions such as the atrial wall and the ventricular wall. The black-filled area indicates the medical device region, where a medical device such as a Brockenbrough needle is visualized. The cross-hatched areas indicate non-living tissue regions, which are neither medical device regions nor living tissue regions.
The first classification data 521 is converted into the classification data 52 by the classification data conversion unit 629. The lower right part of FIG. 18 schematically shows the RT format classification data 528. The non-living tissue region is classified into three types: a first lumen region, a second lumen region, and a non-lumen region. As in FIG. 5C, fine hatching sloping down to the left indicates the first lumen region, fine hatching sloping down to the right indicates the second lumen region, and thick hatching sloping down to the left indicates the non-lumen region.
An outline of the processing performed by the classification data conversion unit 629 is as follows. Among the non-living tissue regions, the region in contact with the image acquisition catheter 40, that is, the rightmost region in the first classification data 521, is classified into the first lumen region. Among the non-living tissue regions, a region surrounded by the living tissue region is classified into the second lumen region. The classification of the second lumen region is desirably determined with the upper and lower ends of the RT format catheter image 518 connected to form a cylindrical shape. A non-living tissue region that is neither the first lumen region nor the second lumen region is classified into the non-lumen region.
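One way to realize this conversion is connected-component labeling with the top and bottom rows merged to approximate the cylindrical scan. The sketch below assumes illustrative integer label values and that column 0 of the array is the side adjacent to the image acquisition catheter 40; the enclosure test dilates each region by one pixel and checks that every added pixel is living tissue.

```python
import numpy as np
from scipy import ndimage

TISSUE, NON_TISSUE = 1, 2                    # illustrative label values
FIRST_LUMEN, SECOND_LUMEN, NON_LUMEN = 4, 5, 6

def convert(cls_rt: np.ndarray) -> np.ndarray:
    labels, n = ndimage.label(cls_rt == NON_TISSUE)

    # Treat the image as a cylinder: merge labels that touch across
    # the top and bottom rows (scanning angle 0 = 360 degrees).
    parent = list(range(n + 1))
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for a, b in zip(labels[0], labels[-1]):
        if a and b:
            parent[find(a)] = find(b)
    labels = np.vectorize(find)(labels)

    out = cls_rt.copy()
    for lab in np.unique(labels):
        if lab == 0:
            continue
        region = labels == lab
        if region[:, 0].any():               # touches the catheter side
            out[region] = FIRST_LUMEN
        else:
            ring = ndimage.binary_dilation(region) & ~region
            enclosed = (cls_rt[ring] == TISSUE).all()
            out[region] = SECOND_LUMEN if enclosed else NON_LUMEN
    return out
```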
FIG. 19 is an explanatory diagram illustrating the first training data. The first training data is used when the first classification trained model 621 is generated by machine learning. In the following description, a case where the first training data is created by using the information processing apparatus 20 described with reference to FIG. 3 will be described as an example. The first training data may be created by using a computer or the like different from the information processing apparatus 20.
The control unit 21 displays two types of catheter images 51, the RT format catheter image 518 and the XY format catheter image 519, on the display device 31. The labeler observes the displayed catheter images 51 and marks four types of boundary line data: "the boundary line between the first lumen region and the living tissue region", "the boundary line between the second lumen region and the living tissue region", "the boundary line between the non-lumen region and the living tissue region", and "the outline of the medical device region".
The labeler may mark either the RT format catheter image 518 or the XY format catheter image 519. The control unit 21 displays a boundary line corresponding to the marking at the corresponding position on the other catheter image 51. In this way, the labeler can check both the RT format catheter image 518 and the XY format catheter image 519 and perform appropriate marking.
The labeler inputs whether each region separated by the four types of marked boundary line data is a "living tissue region", a "non-living tissue region", or a "medical device region". The control unit 21 may automatically determine the regions, and the labeler may issue correction instructions as necessary. By the above processing, the first classification data 521, which clearly indicates whether each region of the catheter image 51 is classified into the "living tissue region", the "non-living tissue region", or the "medical device region", is created.
The first classification data 521 will be described with a specific example. A "living tissue region label" is recorded for pixels classified into the "living tissue region", a "first lumen region label" for pixels classified into the "first lumen region", a "second lumen region label" for pixels classified into the "second lumen region", a "non-lumen region label" for pixels classified into the "non-lumen region", a "medical device region label" for pixels classified into the "medical device region", and a "non-living tissue region label" for pixels classified into the "non-living tissue region". Each label is indicated by, for example, an integer. The first classification data 521 is an example of label data in which pixel positions are associated with labels.
The control unit 21 records the catheter image 51 and the first classification data 521 in association with each other. The first training data DB is created by repeating the above processing and recording a large number of sets of data. In the following description, a first training data DB in which the RT format catheter image 518 is recorded in association with the RT format first classification data 521 is used as an example.
The control unit 21 may generate XY format classification data 529 based on the XY format catheter image 519. The control unit 21 may generate RT format classification data 528 based on the XY format classification data 529.
An outline of the process for generating the first classification trained model 621 will be described using the flowchart of FIG. 12. Prior to the execution of the program of FIG. 12, an untrained model, such as a U-Net structure that realizes semantic segmentation, is prepared.
The U-Net structure includes a multi-layer encoder and a multi-layer decoder connected after it. Each encoder stage includes a pooling layer and convolution layers. Semantic segmentation assigns a label to each pixel constituting the input image. The untrained model may be a Mask R-CNN model or any other model that realizes image segmentation.
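As a rough picture of what "a U-Net structure" means here, the following is a minimal two-level U-Net in PyTorch; the channel counts, depth, and single-channel input are illustrative assumptions, not the model of the specification. Training it with a per-pixel cross-entropy loss against the label data and applying softmax at inference yields both a label and a confidence for every pixel.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """Pixel-wise classification of an RT format image into n_classes
    labels (e.g. living tissue / non-living tissue / medical device)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc1 = block(1, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)            # takes skip + upsampled
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):                    # x: (B, 1, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                 # per-pixel class logits
```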
The control unit 21 acquires training records used for one epoch of training from the first training data DB (step S571). The control unit 21 adjusts the model parameters so that, when the RT format catheter image 518 is input to the input layer of the model, the RT format first classification data 521 is output from the output layer (step S572). In acquiring training records and adjusting the model parameters, the program may have functions, as appropriate, for causing the control unit 21 to accept corrections from the user, present the grounds for a judgment, perform additional learning, and the like.
The control unit 21 determines whether or not to end the process (step S573). For example, the control unit 21 determines that the process is completed when the learning of a predetermined number of epochs is completed. The control unit 21 may acquire test data from the first training data DB, input it to the model being machine-learned, and determine that the process ends when an output with a predetermined accuracy is obtained.
If it is determined that the process is not completed (NO in step S573), the control unit 21 returns to step S571. When it is determined that the processing is completed (YES in step S573), the control unit 21 records the parameters of the trained first classification trained model 621 in the auxiliary storage device 23 (step S574). After that, the control unit 21 ends the process. By the above processing, the first classification trained model 621 that accepts the catheter image 51 and outputs the first classification data 521 is generated.
Prior to the execution of the program of FIG. 12, a model that accepts time-series input may be prepared. The model that accepts the time-series input includes, for example, a memory unit that holds information about the RT format catheter image 518 input in the past. The model that accepts the time-series input may include a recursive input unit that inputs the output for the RT format catheter image 518 input in the past together with the next RT format catheter image 518.
By using catheter images 51 acquired in time series, it is possible to realize a first classification trained model 621 that is less susceptible to image noise and outputs the first classification data 521 with high accuracy.
The first classification trained model 621 may be created using a computer or the like different from the information processing apparatus 20. The first classification trained model 621 for which machine learning has been completed may be copied to the auxiliary storage device 23 via a network. A first classification trained model 621 trained on one piece of hardware can be used by a plurality of information processing apparatuses 20.
FIG. 20 is a flowchart illustrating a processing flow of the program of the fourth embodiment. The flowchart described with reference to FIG. 20 shows the details of the processing performed by the classification model 62 described with reference to FIG.
The control unit 21 acquires one frame of the RT format catheter image 518 (step S551). The control unit 21 inputs the RT format catheter image 518 into the first classification trained model 621 and acquires the first classification data 521 (step S552). The control unit 21 extracts one continuous non-living tissue region from the first classification data 521 (step S553). The processing from the extraction of the non-living tissue region onward is desirably performed with the upper and lower ends of the RT format catheter image 518 connected to form a cylindrical shape.
The control unit 21 determines whether or not the non-living tissue region extracted in step S553 is in contact with the image acquisition catheter 40, that is, in contact with the left end of the RT format catheter image 518 (step S554). When it is determined that the region is in contact with the image acquisition catheter 40 (YES in step S554), the control unit 21 determines that the non-living tissue region extracted in step S553 is the first lumen region (step S555).
When it is determined that the region is not in contact with the image acquisition catheter 40 (NO in step S554), the control unit 21 determines whether or not the non-living tissue region extracted in step S553 is surrounded by the living tissue region (step S556). When it is determined that the region is surrounded by the living tissue region (YES in step S556), the control unit 21 determines that the non-living tissue region extracted in step S553 is the second lumen region (step S557). Through steps S555 and S557, the control unit 21 realizes the function of a lumen region extraction unit.
When it is determined that the region is not surrounded by the living tissue region (NO in step S556), the control unit 21 determines that the non-living tissue region extracted in step S553 is the non-lumen region (step S558).
After the completion of step S555, step S557 or step S558, the control unit 21 determines whether or not the processing of all non-living tissue regions has been completed (step S559). If it is determined that the process has not been completed (NO in step S559), the control unit 21 returns to step S553. When it is determined that the process is completed (YES in step S559), the control unit 21 ends the process.
The control unit 21 realizes the function of the classification data conversion unit 629 by the processing from step S553 to step S559.
The first classification trained model 621 may be a model that classifies the XY format catheter image 519 into a living tissue region, a non-living tissue region, and a medical device region. The first classification trained model 621 may also be a model that classifies the RT format catheter image 518 into a living tissue region and a non-living tissue region. In that case, the labeler does not have to mark the medical device region.
According to this embodiment, it is possible to generate a first classification trained model 621 that classifies the catheter image 51 into a living tissue region, a non-living tissue region, and a medical instrument region. According to this embodiment, the generated first classification trained model 621 can be used to provide a catheter system 10 that generates classification data 52.
[Modification 4-1]
The labeler may input whether each region separated by the four types of marked boundary line data is the "living tissue region", the "first lumen region", the "second lumen region", the "non-lumen region", or the "medical device region". By performing machine learning using a first training data DB created in this way, it is possible to generate a first classification trained model 621 that classifies the catheter image 51 into the "living tissue region", the "first lumen region", the "second lumen region", the "non-lumen region", and the "medical device region".
As described above, a classification model 62 that classifies the catheter image 51 into the "living tissue region", the "first lumen region", the "second lumen region", the "non-lumen region", and the "medical device region" can be realized without using the classification data conversion unit 629.
[Embodiment 5]
The present embodiment relates to a catheter system 10 that uses a synthetic classification model 626 that synthesizes the classification data 52 output from each of two classification trained models. Description of the parts common to Embodiment 4 is omitted.
FIG. 21 is an explanatory diagram illustrating the configuration of the classification model 62 of the fifth embodiment. The classification model 62 includes a synthetic classification model 626 and a classification data conversion unit 629. The synthetic classification model 626 includes a first classification trained model 621, a second classification trained model 622, and a classification data synthesis unit 628. Since the first classification trained model 621 is the same as that of the fourth embodiment, the description thereof will be omitted.
The second classification trained model 622 is a model that accepts the RT format catheter image 518 and outputs second classification data 522 in which each part constituting the RT format catheter image 518 is classified into a "living tissue region", a "non-living tissue region", or a "medical device region". The second classification trained model 622 further outputs, for each part, the reliability of the classification result, that is, the probability that the classification result is correct. Details of the second classification trained model 622 will be described later.
The classification data synthesis unit 628 synthesizes the first classification data 521 and the second classification data 522 to generate synthetic classification data 526. That is, the input end of the classification data synthesis unit 628 realizes the functions of the first classification data acquisition unit and the second classification data acquisition unit. The output end of the classification data synthesis unit 628 realizes the function of the composition classification data output unit.
The details of the synthetic classification data 526 will be described later. The synthetic classification data 526 is converted into classification data 52 by the classification data conversion unit 629. Since the processing performed by the classification data conversion unit 629 is the same as that of the fourth embodiment, the description thereof will be omitted.
FIG. 22 is an explanatory diagram illustrating the second training data. The second training data is used when generating the second classification trained model 622 by machine learning. In the following description, a case where the second training data is created by using the information processing apparatus 20 described with reference to FIG. 3 will be described as an example. The second training data may be created by using a computer or the like different from the information processing apparatus 20.
The control unit 21 displays two types of catheter images 51, the RT format catheter image 518 and the XY format catheter image 519, on the display device 31. The labeler observes the displayed catheter images 51 and marks two types of boundary line data: "the boundary line between the first lumen region and the living tissue region" and "the outline of the medical device region".
The labeler may mark either the RT format catheter image 518 or the XY format catheter image 519. The control unit 21 displays a boundary line corresponding to the marking at the corresponding position on the other catheter image 51. In this way, the labeler can check both the RT format catheter image 518 and the XY format catheter image 519 and perform appropriate marking.
The labeler inputs whether each region separated by the two types of marked boundary line data is a "living tissue region", a "non-living tissue region", or a "medical device region". The control unit 21 may automatically determine the regions, and the labeler may issue correction instructions as necessary. By the above processing, the second classification data 522, which clearly indicates whether each part of the catheter image 51 is classified into the "living tissue region", the "non-living tissue region", or the "medical device region", is created.
The second classification data 522 will be described with a specific example. A "living tissue region label" is recorded for pixels classified into the "living tissue region", a "non-living tissue region label" for pixels classified into the "non-living tissue region", and a "medical device region label" for pixels classified into the "medical device region". Each label is indicated by, for example, an integer. The second classification data 522 is an example of label data in which pixel positions are associated with labels.
The control unit 21 records the catheter image 51 and the second classification data 522 in association with each other. The second training data DB is created by repeating the above processing and recording a large number of sets of data. The second classification trained model 622 can be generated by performing the same processing as the machine learning described in the fourth embodiment using the second training data DB.
The second classification trained model 622 may be a model that classifies the XY format catheter image 519 into a living tissue region, a non-living tissue region, and a medical device region. The second classification trained model 622 may also be a model that classifies the RT format catheter image 518 into a living tissue region and a non-living tissue region. In that case, the labeler does not have to mark the medical device region.
The second classification data 522 can be created in a shorter time than the first classification data 521. Training a labeler to create the second classification data 522 also takes less time than training a labeler to create the first classification data 521. Consequently, a larger amount of training data can be registered in the second training data DB than in the first training data DB.
Since a large amount of training data can be used, it is possible to generate a second classification trained model 622 that identifies the boundary between the first lumen region and the living tissue region and the outline of the medical device region with higher accuracy than the first classification trained model 621. However, since the second classification trained model 622 has not learned non-living tissue regions other than the first lumen region, it cannot distinguish them from the living tissue region.
The processing performed by the classification data synthesis unit 628 will be described. The same RT format catheter image 518 is input to both the first classification trained model 621 and the second classification trained model 622. The first classification trained model 621 outputs the first classification data 521, and the second classification trained model 622 outputs the second classification data 522.
In the following description, a case where both the first classification trained model 621 and the second classification trained model 622 output, for each pixel of the RT format catheter image 518, the classified label and the reliability of that label is used as an example. The first classification trained model 621 and the second classification trained model 622 may instead output a label and a probability for each range of the RT format catheter image 518, for example, a range of 3 vertical pixels by 3 horizontal pixels, 9 pixels in total.
For a pixel whose distance from the center of the image acquisition catheter 40 is r and whose scanning angle is θ, the reliability with which the first classification trained model 621 classifies the pixel as the living tissue region is denoted by Q1t(r, θ). For pixels that the first classification trained model 621 classifies into a region other than the living tissue region, Q1t(r, θ) = 0.
Similarly, for a pixel whose distance from the center of the image acquisition catheter 40 is r and whose scanning angle is θ, the reliability with which the second classification trained model 622 classifies the pixel as the living tissue region is denoted by Q2t(r, θ). For pixels that the second classification trained model 622 classifies into a region other than the living tissue region, Q2t(r, θ) = 0.
The classification data synthesis unit 628 calculates the composite value Qt(r, θ) based on, for example, equation (5-1). Note that Qt(r, θ) is not the probability that the classification into the living tissue region is correct, but a numerical value that relatively indicates the degree of confidence that the pixel is in the living tissue region.
Qt(r, θ) = Q1t(r, θ) × Q2t(r, θ) ‥‥‥ (5-1)
The classification data synthesis unit 628 classifies pixels having Qt(r, θ) of 0.5 or more into the living tissue region.
Similarly, the reliability with which the first classification trained model 621 classifies a pixel as the medical device region is denoted by Q1c(r, θ), and the reliability with which the second classification trained model 622 classifies the pixel as the medical device region is denoted by Q2c(r, θ).
The classification data synthesis unit 628 calculates the composite value Qc(r, θ) based on, for example, equation (5-2). Note that Qc(r, θ) is likewise not the probability that the classification into the medical device region is correct, but a numerical value that relatively indicates the degree of confidence that the pixel is in the medical device region.
Qc(r, θ) = Q1c(r, θ) × Q2c(r, θ) ‥‥‥ (5-2)
The classification data synthesis unit 628 classifies pixels having Qc(r, θ) of 0.5 or more into the medical device region. The classification data synthesis unit 628 classifies pixels that are classified into neither the medical device region nor the living tissue region into the non-living tissue region. In this way, the classification data synthesis unit 628 generates the synthetic classification data 526 by synthesizing the first classification data 521 and the second classification data 522. The synthetic classification data 526 is converted into the RT format classification data 528 by the classification data conversion unit 629.
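Vectorized over the whole image, the synthesis amounts to the following sketch (the label values are illustrative; the tissue test takes priority over the device test, matching the order of the flowchart described later):

```python
import numpy as np

TISSUE, NON_TISSUE, DEVICE = 1, 2, 3       # illustrative label values

def synthesize(q1t, q2t, q1c, q2c, thr=0.5):
    """q1t/q2t: per-pixel living-tissue confidences of models 621/622
    (zero where the model chose another label); q1c/q2c: the same for
    the medical device region."""
    qt = q1t * q2t                         # equation (5-1)
    qc = q1c * q2c                         # equation (5-2)
    out = np.full(qt.shape, NON_TISSUE, dtype=np.uint8)
    out[qc >= thr] = DEVICE
    out[qt >= thr] = TISSUE                # tissue test has priority
    return out
```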
Equations (5-1) and (5-2) are examples, and the threshold value used by the classification data synthesis unit 628 for classification is also an example. The classification data synthesis unit 628 may instead be a trained model that accepts the first classification data 521 and the second classification data 522 and outputs the synthetic classification data 526.
The first classification data 521 may be input to the classification data synthesis unit 628 after being classified into the "living tissue region", the "first lumen region", the "second lumen region", the "non-lumen region", and the "medical device region" by the classification data conversion unit 629 described in Embodiment 4.
The first classification trained model 621 may be a model that classifies the catheter image 51 into the "living tissue region", the "first lumen region", the "second lumen region", the "non-lumen region", and the "medical device region", as described in Modification 4-1.
When data in which the non-living tissue region has already been classified into the "first lumen region", the "second lumen region", and the "non-lumen region" is input to the classification data synthesis unit 628, the classification data synthesis unit 628 can output synthetic classification data 526 classified into the "living tissue region", the "first lumen region", the "second lumen region", the "non-lumen region", and the "medical device region". In that case, it is not necessary to input the synthetic classification data 526 into the classification data conversion unit 629 and convert it into the RT format classification data 528.
FIG. 23 is a flowchart illustrating a processing flow of the program of the fifth embodiment. The flowchart described with reference to FIG. 23 shows the details of the processing performed by the classification model 62 described with reference to FIG.
The control unit 21 acquires one frame of the RT format catheter image 518 (step S581). Through step S581, the control unit 21 realizes the function of the image acquisition unit. The control unit 21 inputs the RT format catheter image 518 into the first classification trained model 621 and acquires the first classification data 521 (step S582). The control unit 21 inputs the RT format catheter image 518 into the second classification trained model 622 and acquires the second classification data 522 (step S583).
The control unit 21 activates a classification / synthesis subroutine (step S584). The classification / synthesis subroutine is a subroutine that synthesizes the first classification data 521 and the second classification data 522 to generate the synthesis classification data 526. The processing flow of the classification synthesis subroutine will be described later.
The control unit 21 extracts one continuous non-living tissue region from the synthetic classification data 526 (step S585). The processing from the extraction of the non-living tissue region onward is desirably performed with the upper and lower ends of the RT format catheter image 518 connected to form a cylindrical shape.
The control unit 21 determines whether or not the non-living tissue region extracted in step S585 is on the side in contact with the image acquisition catheter 40 (step S554). Hereinafter, since the processing up to step S559 is the same as the processing flow of the program of the fourth embodiment described with reference to FIG. 20, the description thereof will be omitted.
The control unit 21 determines whether or not the processing of all non-living tissue regions has been completed (step S559). If it is determined that the process has not been completed (NO in step S559), the control unit 21 returns to step S585. When it is determined that the process is completed (YES in step S559), the control unit 21 ends the process.
FIG. 24 is a flowchart illustrating the processing flow of the classification / synthesis subroutine. The classification / synthesis subroutine is a subroutine that synthesizes the first classification data 521 and the second classification data 522 to generate the synthesis classification data 526.
The control unit 21 selects a pixel to be processed (step S601). The control unit 21 acquires the reliability Q1t (r, θ) that the pixel being processed is a living tissue region from the first classification data 521 (step S602). The control unit 21 acquires the reliability Q2t (r, θ) that the pixel being processed is a living tissue region from the second classification data 522 (step S603).
The control unit 21 calculates the combined value Qt (r, θ) based on the equation (5-1), for example (step S604). The control unit 21 determines whether or not the combined value Qt (r, θ) is equal to or greater than a predetermined threshold value (step S605). The predetermined threshold is, for example, 0.5.
When it is determined that the composite value is equal to or greater than the predetermined threshold value (YES in step S605), the control unit 21 classifies the pixel being processed into the "living tissue region" (step S606). When it is determined that the composite value is less than the predetermined threshold value (NO in step S605), the control unit 21 acquires, from the first classification data 521, the reliability Q1c(r, θ) that the pixel being processed is the medical device region (step S611). The control unit 21 acquires, from the second classification data 522, the reliability Q2c(r, θ) that the pixel being processed is the medical device region (step S612).
The control unit 21 calculates the combined value Qc (r, θ) based on the equation (5-2), for example (step S613). The control unit 21 determines whether or not the combined value Qc (r, θ) is equal to or greater than a predetermined threshold value (step S614). The predetermined threshold is, for example, 0.5.
When it is determined that the threshold value is equal to or higher than the predetermined threshold value (YES in step S614), the control unit 21 classifies the pixel being processed into the “medical device area” (step S615). When it is determined that the value is less than a predetermined threshold value (NO in step S614), the control unit 21 classifies the pixel being processed into a “non-living tissue region” (step S616).
After the end of step S606, step S615 or step S616, the control unit 21 determines whether or not the processing of all the pixels is completed (step S607). If it is determined that the process has not been completed (NO in step S607), the control unit 21 returns to step S601. When it is determined that the process is completed (YES in step S607), the control unit 21 ends the process. The control unit 21 realizes the function of the classification data synthesis unit 628 by the subroutine of the classification synthesis.
According to the present embodiment, it is possible to provide a catheter system 10 that generates the RT format classification data 528 using synthetic classification data 526 obtained by synthesizing the classification data 52 output from two classification trained models. By combining the second classification trained model 622, for which a large amount of training data can be collected relatively easily to raise classification accuracy, with the first classification trained model 621, whose training data takes more effort to collect, it is possible to provide a catheter system 10 with a good balance between the cost of generating trained models and classification accuracy.
[Embodiment 6]
The present embodiment relates to a catheter system 10 that classifies each portion constituting the catheter image 51 using the position information of a medical device as a hint. Description of the parts common to Embodiment 1 is omitted.
FIG. 25 is an explanatory diagram illustrating the configuration of the hinted trained model 631. The hinted trained model 631 is used in step S504 described with reference to FIG. 4, instead of the classification model 62 described with reference to FIG. 7.
The hinted trained model 631 is a model that accepts the RT format catheter image 518 and the position information of the medical device depicted in the RT format catheter image 518, and outputs hinted classification data 561 in which each part constituting the RT format catheter image 518 is classified into a "living tissue region", a "non-living tissue region", or a "medical device region". The hinted trained model 631 further outputs, for each part, the reliability of the classification result, that is, the probability that the classification result is correct.
FIG. 26 is an explanatory diagram illustrating the record layout of the hinted training data DB 72. The hinted training data DB 72 is a database in which the catheter image 51, the position information of the medical device depicted in the catheter image 51, and classification data 52, in which each part constituting the catheter image 51 is classified according to the depicted subject, are recorded in association with one another.
The classification data 52 is data created by the labeler based on, for example, the procedure described with reference to FIG. 19. The hinted trained model 631 can be generated by performing the same processing as the machine learning described in Embodiment 4 using the hinted training data DB 72.
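The specification does not state how the position hint is encoded for the model. One plausible assumption is to rasterize it as an extra input channel, for example a Gaussian heat map centered on the device position that wraps around the scanning angle axis, concatenated with the RT format image:

```python
import numpy as np

def hint_channel(shape, theta_idx, r_idx, sigma=8.0):
    """Render a device-position hint as a heat map with the same
    (n_angles, n_depths) shape as the RT format image."""
    h, w = shape
    th, r = np.ogrid[:h, :w]
    d_theta = np.minimum(np.abs(th - theta_idx),       # wrap around the
                         h - np.abs(th - theta_idx))   # cylindrical axis
    return np.exp(-(d_theta**2 + (r - r_idx)**2) / (2 * sigma**2))

# A hinted model would then take a 2-channel input, for example:
# x = np.stack([rt_image / 255.0, hint_channel(rt_image.shape, t, r)])
```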
FIG. 27 is a flowchart illustrating a processing flow of the program of the sixth embodiment. The flowchart described with reference to FIG. 27 shows the details of the process performed in step S504 described with reference to FIG.
The control unit 21 acquires one frame of the RT format catheter image 518 (step S621). The control unit 21 inputs the RT format catheter image 518 into, for example, the medical device trained model 611 described with reference to FIG. 6 and acquires the position information of the medical device (step S622). The control unit 21 inputs the RT format catheter image 518 and the position information into the hinted trained model 631 and acquires the hinted classification data 561 (step S623).
The control unit 21 extracts one continuous non-living tissue region from the hinted classification data 561 (step S624). The processing from the extraction of the non-living tissue region onward is desirably performed with the upper and lower ends of the RT format catheter image 518 connected to form a cylindrical shape.
The control unit 21 determines whether or not the non-living tissue region extracted in step S624 is on the side in contact with the image acquisition catheter 40 (step S554). Hereinafter, since the processing up to step S559 is the same as the processing flow of the program of the fourth embodiment described with reference to FIG. 20, the description thereof will be omitted.
The control unit 21 determines whether or not the processing of all non-living tissue regions has been completed (step S559). If it is determined that the process has not been completed (NO in step S559), the control unit 21 returns to step S624. When it is determined that the process is completed (YES in step S559), the control unit 21 ends the process.
According to the present embodiment, the catheter system 10 that accurately generates the classification data 52 can be provided by inputting the position information of the medical device as a hint.
[Modification 6-1]
FIG. 28 is a flowchart illustrating the processing flow of a program according to a modification. The process described with reference to FIG. 28 is executed in place of the process described with reference to FIG. 27.
The control unit 21 acquires one frame of the RT format catheter image 518 (step S621). The control unit 21 acquires the position information of the medical device (step S622). The control unit 21 determines whether or not the position information of the medical device has been acquired successfully (step S631). For example, when the reliability output from the medical device trained model 611 is higher than a threshold value, the control unit 21 determines that the acquisition of the position information has succeeded.
Note that "success" in step S631 means that the medical device is visualized in the RT format catheter image 518 and that the control unit 21 was able to acquire the position information of the medical device with a reliability higher than the threshold value. Unsuccessful cases include, for example, the case where no medical device exists in the imaging range of the RT format catheter image 518 and the case where the medical device is in close contact with the surface of the biological tissue region and is not clearly visualized.
When it is determined that the position information has been acquired successfully (YES in step S631), the control unit 21 inputs the RT format catheter image 518 and the position information into the hinted trained model 631 and acquires the hinted classification data 561 (step S623). When it is determined that the position information has not been acquired successfully (NO in step S631), the control unit 21 inputs the RT format catheter image 518 into the hintless trained model 632 and acquires hintless classification data (step S632).
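A minimal sketch of the branching in steps S621 to S632 follows, assuming hypothetical wrapper functions for the three trained models and an assumed value for the reliability threshold; none of these names or values are defined in the embodiment.

```python
import numpy as np

RELIABILITY_THRESHOLD = 0.5  # assumed value; the text only says "a threshold"

def classify_frame(rt_image: np.ndarray,
                   medical_device_model, hinted_model, hintless_model):
    """Steps S621 to S632: choose the hinted or hintless model per frame."""
    # Step S622: estimate the medical device position and its reliability.
    position, reliability = medical_device_model(rt_image)

    # Step S631: acquisition "succeeds" only above the reliability threshold.
    if reliability > RELIABILITY_THRESHOLD:
        # Step S623: feed the image together with the position hint.
        return hinted_model(rt_image, position)
    # Step S632: fall back to the model that needs no position hint.
    return hintless_model(rt_image)
```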
The hintless trained model 632 is, for example, the classification model 62 described with reference to FIG. 7, FIG. 18 or FIG. 21. Similarly, the hintless classification data is the classification data 52 output from the classification model 62.
After the completion of step S623 or step S632, the control unit 21 extracts one continuous non-living tissue region from the hinted classification data 561 or the hintless classification data (step S624). The subsequent processing is the same as the processing flow described with reference to FIG. 27, and the description thereof is therefore omitted.
The hinted classification data 561 is an example of the first data. The hinted trained model 631 is an example of the first trained model that outputs the first data when the catheter image 51 and the position information of the medical device are input. The output layer of the hinted trained model 631 is an example of a first data output unit that outputs the first data.
The hintless classification data is an example of the second data. The hintless trained model 632 is an example of the second trained model and of the second model, each of which outputs the second data when the catheter image 51 is input. The output layer of the hintless trained model 632 is an example of the second data output unit.
According to this modification, when the position information has not been acquired successfully, the classification model 62 that does not require the input of position information is used. It is therefore possible to provide a catheter system 10 that prevents malfunctions caused by inputting an erroneous hint into the hinted trained model 631.
[Embodiment 7]
The present embodiment relates to a catheter system 10 that synthesizes the output of the hinted trained model 631 and the output of the hintless trained model 632 to generate synthetic data 536. The description of the parts common to the sixth embodiment is omitted. The synthetic data 536 is data used in place of the classification data 52, which is the output of step S504 described with reference to FIG. 4.
FIG. 29 is an explanatory diagram illustrating the configuration of the classification model 62 of the seventh embodiment. The classification model 62 includes a position classification analysis unit 66 and a third synthesis unit 543. The position classification analysis unit 66 includes a position information acquisition unit 65, the hinted trained model 631, the hintless trained model 632, a first synthesis unit 541 and a second synthesis unit 542.
The position information acquisition unit 65 acquires position information indicating the position where the medical device is visualized from, for example, the medical device learned model 611 described with reference to FIG. 6 or the position information model 619 described with reference to FIG. 16. The hinted trained model 631 is the same as that of the sixth embodiment, and the description thereof is omitted. The hintless trained model 632 is, for example, the classification model 62 described with reference to FIG. 7, FIG. 18 or FIG. 21.
The operation of the first synthesis unit 541 will be described. The first synthesis unit 541 synthesizes the hinted classification data 561 output from the hinted trained model 631 and the hintless classification data output from the hintless trained model 632 to create classification information. The input end of the first synthesis unit 541 functions as a first data acquisition unit that acquires the hinted classification data 561 and as a second data acquisition unit that acquires the hintless classification data. The output end of the first synthesis unit 541 functions as a first synthetic data output unit that outputs first synthetic data obtained by synthesizing the hinted classification data 561 and the hintless classification data.
When data in which the non-living tissue region is not classified into the first lumen region, the second lumen region and the non-lumen region is input, the first synthesis unit 541 fulfills the function of the classification data conversion unit 629 and classifies the non-living tissue region.
For example, when the position information acquisition unit 65 succeeds in acquiring the position information, the first synthesis unit 541 synthesizes the two outputs with the weight of the hinted trained model 631 set larger than the weight of the hintless trained model 632. Since methods of weighted image synthesis are well known, the description thereof is omitted.
The first synthesis unit 541 may determine the weighting of the hinted classification data 561 and the hintless classification data based on the reliability of the position information acquired by the position information acquisition unit 65, and synthesize them accordingly.
The first synthesis unit 541 may also synthesize the hinted classification data 561 and the hintless classification data based on the reliability of each region of the hinted classification data 561 and the hintless classification data. The synthesis based on the reliability of the classification data 52 can be executed, for example, by the same processing as that of the classification data synthesis unit 628 described in the fifth embodiment.
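The following is a minimal sketch of such a weighted synthesis, assuming each model outputs a per-pixel class probability map of identical shape. The reliability-driven blending rule shown here is one plausible reading for illustration, not the definitive processing of the first synthesis unit 541.

```python
import numpy as np

def synthesize_classification(hinted_probs: np.ndarray,
                              hintless_probs: np.ndarray,
                              reliability: float) -> np.ndarray:
    """Blend two (H, W, num_classes) probability maps per pixel.

    The hinted output is weighted more heavily as the reliability of the
    medical device position information increases.
    """
    w = np.clip(reliability, 0.0, 1.0)  # weight for the hinted output
    blended = w * hinted_probs + (1.0 - w) * hintless_probs
    # Classification information: the most probable class per pixel.
    return np.argmax(blended, axis=-1)
```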
Note that the first synthesis unit 541 treats the medical device region output from the hinted trained model 631 and the hintless trained model 632 in the same manner as the adjacent non-living tissue region. For example, when a medical device region exists in the first lumen region, the first synthesis unit 541 treats the medical device region in the same manner as the first lumen region. Similarly, when a medical device region exists in the second lumen region, the first synthesis unit 541 treats the medical device region in the same manner as the second lumen region.
A trained model that does not output the medical device region may be used as either the hinted trained model 631 or the hintless trained model 632. As shown in the central portion of FIG. 29, the classification information output from the first synthesis unit 541 therefore does not include information regarding the medical device region.
The first synthesis unit 541 may function as a switch that switches between the hinted classification data 561 and the hintless classification data based on whether or not the position information acquisition unit 65 has succeeded in acquiring the position information. The first synthesis unit 541 may further fulfill the function of the classification data conversion unit 629.
Specifically, when the position information acquisition unit 65 succeeds in acquiring the position information, the first synthesis unit 541 outputs classification information based on the hinted classification data 561 output from the hinted trained model 631. When the position information acquisition unit 65 has not succeeded in acquiring the position information, the first synthesis unit 541 outputs classification information based on the hintless classification data output from the hintless trained model 632.
The operation of the second synthesis unit 542 will be described. When the position information acquisition unit 65 succeeds in acquiring the position information, the second synthesis unit 542 outputs the medical device region output from the hinted trained model 631. When the position information acquisition unit 65 does not succeed in acquiring the position information, the second synthesis unit 542 outputs the medical device region included in the hintless classification data.
Note that it is desirable to use the second classification trained model 622 described with reference to FIG. 21 as the hintless trained model 632. As described above, a large amount of training data can be used for training the second classification trained model 622, so the medical device region can be extracted with high accuracy.
When the position information acquisition unit 65 does not succeed in acquiring the position information, the second synthesis unit 542 may synthesize and output the medical device region included in the hinted classification data 561 and the medical device region included in the hintless classification data. The synthesis of the hinted classification data 561 and the hintless classification data can be executed, for example, by the same processing as that of the classification data synthesis unit 628 described in the fifth embodiment.
The output end of the second synthesis unit 542 fulfills the function of a second synthetic data output unit that outputs second synthetic data obtained by synthesizing the medical device region of the hinted classification data 561 and the medical device region of the hintless classification data.
The operation of the third synthesis unit 543 will be described. The third synthesis unit 543 outputs synthetic data 536 in which the medical device region output from the second synthesis unit 542 is superimposed on the classification information output from the first synthesis unit 541. In FIG. 29, the superimposed medical device region is shown in black.
Instead of the first synthesis unit 541, the third synthesis unit 543 may fulfill the function of the classification data conversion unit 629 that classifies the non-living tissue region into the first lumen region, the second lumen region and the non-lumen region.
A part or all of the plurality of trained models constituting the position classification analysis unit 66 may be models that accept a plurality of catheter images 51 acquired in time series and output information for the latest catheter image 51.
According to the present embodiment, it is possible to provide a catheter system 10 that acquires the position information of the medical device with high accuracy and outputs it in combination with the classification information. The control unit 21 may generate the synthetic data 536 based on each of a plurality of catheter images 51 captured continuously along the longitudinal direction of the image acquisition catheter 40, and may then stack the synthetic data 536 to construct and display three-dimensional data of the biological tissue and the medical device.
[Modification 7-1]
FIG. 30 is an explanatory diagram illustrating the configuration of the classification model 62 of this modification. An X% hint trained model 639 is added to the position classification analysis unit 66. The X% hint trained model 639 is a model trained using the hinted training data DB 72 under the condition that the position information is input for X percent of the training data and is not input for (100 - X) percent. In the following description, the data output from the X% hint trained model 639 is referred to as X% hint classification data.
The X% hint trained model 639 is identical to the hinted trained model 631 when X is 100, and is identical to the hintless trained model 632 when X is 0. X is, for example, 50.
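A minimal sketch of one way to realize this training condition follows, assuming the position hint is supplied as an extra input channel that is zeroed out for (100 - X) percent of the training samples; how the hint is withheld during training is not specified in the embodiment.

```python
import random
import numpy as np

HINT_RATE_X = 50  # percent of training samples that receive the position hint

def make_training_input(rt_image: np.ndarray, hint_map: np.ndarray) -> np.ndarray:
    """Stack the RT image and the position hint as a two-channel input.

    For (100 - X) percent of the samples the hint channel is replaced with
    zeros, so a single model learns to work both with and without the hint.
    """
    if random.randrange(100) >= HINT_RATE_X:
        hint_map = np.zeros_like(hint_map)  # withhold the hint
    return np.stack([rt_image, hint_map], axis=0)  # shape (2, H, W)
```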
The first synthesis unit 541 outputs data obtained by synthesizing the classification data 52 acquired from each of the hinted trained model 631, the hintless trained model 632 and the X% hint trained model 639 based on predetermined weighting. The weighting changes depending on whether or not the position information acquisition unit 65 has succeeded in acquiring the position information.
For example, when the position information acquisition unit 65 succeeds in acquiring the position information, the output of the hinted trained model 631 and the output of the X% hint trained model 639 are synthesized. When the position information acquisition unit 65 fails to acquire the position information, the output of the hintless trained model 632 and the output of the X% hint trained model 639 are synthesized. The weighting at the time of synthesis may be changed based on the reliability of the position information acquired by the position information acquisition unit 65.
The position classification analysis unit 66 may include a plurality of X% hint trained models 639. For example, an X% hint trained model 639 in which X is 20 and an X% hint trained model 639 in which X is 50 can be used in combination.
In clinical practice, there are cases where the medical device region cannot be extracted from the catheter image 51, for example, when the medical device is not inserted into the first cavity or when the medical device is in close contact with the surface of the biological tissue. According to this modification, it is possible to realize a classification model 62 that matches such actual clinical situations. It is therefore possible to provide a catheter system 10 capable of accurately detecting the position information and performing the classification.
[Embodiment 8]
The present embodiment relates to a three-dimensional display of the catheter image 51. The description of the parts common to the seventh embodiment is omitted. FIG. 31 is an explanatory diagram illustrating an outline of the processing of the eighth embodiment.
In the present embodiment, a plurality of RT format catheter images 518 captured continuously along the longitudinal direction of the image acquisition catheter 40 are used. The control unit 21 inputs each of the plurality of RT format catheter images 518 into the position classification analysis unit 66 described in the seventh embodiment. The position classification analysis unit 66 outputs the classification information and the medical device region corresponding to each RT format catheter image 518. The control unit 21 inputs the classification information and the medical device information into the third synthesis unit 543 to synthesize the synthetic data 536.
The control unit 21 creates biological three-dimensional data 551 representing the three-dimensional structure of the biological tissue based on the plurality of synthetic data 536. The biological three-dimensional data 551 is, for example, voxel data in which a value indicating a biological tissue label, a first lumen region label, a second lumen region label, a non-lumen region label or the like is recorded for each volume lattice in a three-dimensional space. The biological three-dimensional data 551 may instead be polygon data composed of a plurality of polygons indicating the boundaries of the respective regions. Since methods of creating three-dimensional data 55 based on a plurality of pieces of RT format data are well known, the description thereof is omitted.
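As an illustration, the following minimal sketch assembles such voxel data, assuming each piece of synthetic data is already an (H, W) array of integer region labels and that the frames are equally spaced along the pullback axis; both assumptions go beyond what is stated here.

```python
import numpy as np

def build_voxel_data(synthetic_frames: list[np.ndarray]) -> np.ndarray:
    """Stack per-frame label maps into (num_frames, H, W) voxel data.

    Each frame is assumed to be an (H, W) array of integer region labels
    (for example biological tissue, first lumen, second lumen, non-lumen),
    one frame per catheter image captured along the pullback direction.
    """
    return np.stack(synthetic_frames, axis=0)
```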
The control unit 21 acquires position information indicating the position of the medical device depicted in each RT format catheter image 518 from the position information acquisition unit 65 included in the position classification analysis unit 66. The control unit 21 creates medical device three-dimensional data 552 representing the three-dimensional shape of the medical device based on the plurality of pieces of position information. The details of the medical device three-dimensional data 552 will be described later.
The control unit 21 synthesizes the biological three-dimensional data 551 and the medical device three-dimensional data 552 to generate the three-dimensional data 55. The three-dimensional data 55 is used for the "3D display" of step S513 described with reference to FIG. 4. When synthesizing the three-dimensional data 55, the control unit 21 replaces the medical device region included in the synthetic data 536 with a blank region or a non-biological region, and then synthesizes the medical device three-dimensional data 552. The control unit 21 may generate the biological three-dimensional data 551 using the classification information output from the first synthesis unit 541 included in the position classification analysis unit 66.
FIGS. 32A to 32D are explanatory diagrams illustrating an outline of the process of correcting the position information. FIGS. 32A to 32D are schematic diagrams showing, in chronological order, a state in which catheter images 51 are captured while the image acquisition catheter 40 is pulled to the right in the figures. The thick cylinder schematically shows the inner surface of the first cavity.
In FIG. 32A, three catheter images 51 have already been captured. The position information of the medical device extracted from each catheter image 51 is indicated by a white circle. FIG. 32B shows a state in which the fourth catheter image 51 has been captured. The position information of the medical device extracted from the fourth catheter image 51 is indicated by a black circle.
The medical device is detected at a location clearly different from that in the three previously captured catheter images 51. In general, medical devices used in IVR have a certain degree of rigidity, and it is unlikely that they bend sharply. The position information indicated by the black circle is therefore highly likely to be a false detection.
In FIG. 32C, two more catheter images 51 have been captured. The position information of the medical device extracted from each catheter image 51 is indicated by white circles. The five white circles are lined up in a substantially straight row along the longitudinal direction of the image acquisition catheter 40, whereas the black circle lies far away from them, making it clear that it is a false detection.
In FIG. 32D, the position information complemented based on the five white circles is indicated by a cross. By using the position information indicated by the cross instead of the position information indicated by the black circle, the shape of the medical device in the first cavity can be displayed correctly in the three-dimensional image.
Note that when the position information acquisition unit 65 does not succeed in acquiring the position information, the control unit 21 may use a representative point of the medical device region acquired from the second synthesis unit 542 included in the position classification analysis unit 66 as the position information. For example, the centroid of the medical device region can be used as the representative point.
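A minimal sketch of computing such a representative point follows, assuming the medical device region is given as a binary mask; the centroid computation is standard image processing rather than a detail fixed by the embodiment.

```python
import numpy as np

def region_centroid(device_mask: np.ndarray):
    """Return the (row, column) centroid of a binary medical device mask.

    Returns None when no medical device region is present in the frame.
    """
    rows, cols = np.nonzero(device_mask)
    if rows.size == 0:
        return None  # no medical device region detected in this frame
    return float(rows.mean()), float(cols.mean())
```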
FIG. 33 is a flowchart illustrating the processing flow of the program of the eighth embodiment. The program described with reference to FIG. 33 is executed when it is determined in step S505 described with reference to FIG. 4 that the user has specified the three-dimensional display (3D in step S505).
The program of FIG. 33 can be executed while a plurality of catheter images 51 are being captured along the longitudinal direction of the image acquisition catheter 40. The following description takes as an example a case where, prior to the execution of the program of FIG. 33, classification information and position information have already been generated for each captured catheter image 51 and are stored in the auxiliary storage device 23 or an external large-capacity storage device.
The control unit 21 acquires the position information corresponding to one catheter image 51 and records it in the main storage device 22 or the auxiliary storage device 23 (step S641). The control unit 21 processes the series of catheter images 51 in order, starting from the catheter image 51 stored first. In step S641, the control unit 21 may acquire and record the position information from the first several catheter images 51 in the series.
The control unit 21 acquires the position information corresponding to the next catheter image 51 (step S642). In the following description, the position information being processed is referred to as first position information. The control unit 21 extracts the position information closest to the first position information from among the position information recorded so far, in step S641 and in previous iterations (step S643). In the following description, the position information extracted in step S643 is referred to as second position information.
Note that in step S643 the distances between pieces of position information are compared in a state in which the plurality of catheter images 51 are projected onto a single plane orthogonal to the image acquisition catheter 40. That is, when extracting the second position information, the distance in the longitudinal direction of the image acquisition catheter 40 is not taken into consideration.
The control unit 21 determines whether or not the distance between the first position information and the second position information is equal to or less than a predetermined threshold value (step S644). The threshold value is, for example, 3 millimeters. When it determines that the distance is equal to or less than the threshold value (YES in step S644), the control unit 21 records the first position information in the main storage device 22 or the auxiliary storage device 23 (step S645).
When it determines that the distance exceeds the threshold value (NO in step S644), or after the completion of step S645, the control unit 21 determines whether or not the processing of the recorded position information has been completed (step S646). If it determines that the processing has not been completed (NO in step S646), the control unit 21 returns to step S642.
The position information indicated by the black circle in FIG. 32 is an example of position information determined in step S644 to exceed the threshold value. The control unit 21 ignores such position information without recording it in step S645. Through the processing performed when NO is determined in step S644, the control unit 21 realizes the function of an exclusion unit that excludes position information that does not satisfy a predetermined condition. Note that the control unit 21 may instead record the position information determined in step S644 to exceed the threshold value with a flag indicating an error.
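A minimal sketch of this distance-based screening (steps S643 to S645) follows, assuming each piece of position information is an (x, y) coordinate already projected onto a plane orthogonal to the image acquisition catheter 40; the 3 millimeter threshold is the example value given above.

```python
import math

DISTANCE_THRESHOLD_MM = 3.0  # example threshold value given in the text

def screen_position(new_pos: tuple[float, float],
                    recorded: list[tuple[float, float]]) -> bool:
    """Steps S643 to S645: keep new_pos only if it lies near recorded data.

    Positions are compared on a plane orthogonal to the catheter, so the
    longitudinal (pullback) coordinate is deliberately ignored.
    """
    if not recorded:                 # nothing to compare against yet
        recorded.append(new_pos)
        return True
    nearest = min(math.dist(new_pos, p) for p in recorded)  # step S643
    if nearest <= DISTANCE_THRESHOLD_MM:                    # step S644
        recorded.append(new_pos)                            # step S645
        return True
    return False  # excluded as a likely false detection
```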
When it determines that the processing has been completed (YES in step S646), the control unit 21 determines whether or not the position information can be complemented based on the position information recorded in steps S641 and S645 (step S647). When it determines that complementing is possible (YES in step S647), the control unit 21 complements the position information (step S648).
In step S648, the control unit 21 complements, for example, position information that substitutes for the position information determined in step S644 to exceed the threshold value. The control unit 21 may also complement position information between catheter images 51. The complementing can be performed using any method such as linear interpolation, spline interpolation, Lagrange interpolation or Newton interpolation. Through step S648, the control unit 21 realizes the function of a complementing unit that adds complementary information to the position information.
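A minimal sketch of one such complementing step follows, using cubic spline interpolation over the frame index; treating the frame index as the interpolation axis and the use of SciPy are assumptions, since the embodiment only lists candidate interpolation methods.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def complement_positions(frame_idx: np.ndarray,
                         positions: np.ndarray,
                         missing_idx: np.ndarray) -> np.ndarray:
    """Fill in positions for frames whose detections were excluded.

    frame_idx:   strictly increasing indices of frames with accepted positions
    positions:   (N, 2) array of accepted (x, y) positions
    missing_idx: indices of frames needing complementary information
    """
    spline_x = CubicSpline(frame_idx, positions[:, 0])
    spline_y = CubicSpline(frame_idx, positions[:, 1])
    return np.stack([spline_x(missing_idx), spline_y(missing_idx)], axis=1)
```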
When it is determined that the position information cannot be complemented (NO in step S647), or after the completion of step S648, the control unit 21 starts the three-dimensional display subroutine (step S649). The three-dimensional display subroutine is a subroutine that performs three-dimensional display based on the series of catheter images 51. The processing flow of the three-dimensional display subroutine will be described later.
The control unit 21 determines whether or not to end the processing (step S650). For example, when the MDU 33 starts a new pullback operation, that is, the capturing of catheter images 51 used for generating a three-dimensional image, the control unit 21 determines that the processing is to be ended.
If it determines that the processing is not to be ended (NO in step S650), the control unit 21 returns to step S642. If it determines that the processing is to be ended (YES in step S650), the control unit 21 ends the processing.
Note that, in parallel with the execution of the program of FIG. 33, the control unit 21 generates and records classification information and position information based on newly captured catheter images 51. That is, step S647 and subsequent steps are executed when it is determined in step S646 that the processing has been completed, but new position information and classification information may be generated while steps S647 to S650 are being executed.
FIG. 34 is a flowchart illustrating the processing flow of the three-dimensional display subroutine. The three-dimensional display subroutine is a subroutine that performs three-dimensional display based on the series of catheter images 51. Through the three-dimensional display subroutine, the control unit 21 realizes the function of a three-dimensional output unit.
The control unit 21 acquires the synthetic data 536 corresponding to the series of catheter images 51 (step S661). The control unit 21 creates the biological three-dimensional data 551 representing the three-dimensional structure of the biological tissue based on the series of synthetic data 536 (step S662).
As described above, when synthesizing the three-dimensional data 55, the control unit 21 replaces the medical device region included in the synthetic data 536 with a blank region or a non-biological region, and then synthesizes the medical device three-dimensional data 552. The control unit 21 may generate the biological three-dimensional data 551 using the classification information output from the first synthesis unit 541 included in the position classification analysis unit 66. The control unit 21 may also generate the biological three-dimensional data 551 based on the first classification data 521 described with reference to FIG. 18. That is, the control unit 21 can generate the biological three-dimensional data 551 directly based on the plurality of first classification data 521.
The control unit 21 may generate the biological three-dimensional data 551 indirectly based on the plurality of first classification data 521. "Indirectly based" means, for example, generating the biological three-dimensional data 551 based on a plurality of synthetic data 536 generated using the plurality of first classification data 521, as described with reference to FIG. 31. The control unit 21 may also generate the biological three-dimensional data 551 based on a plurality of pieces of data, different from the synthetic data 536, generated using the plurality of first classification data 521.
The control unit 21 gives thickness information to the curve defined by the series of position information recorded in steps S641 and S645 of the program described with reference to FIG. 33 and the complementary information added in step S648 (step S663). The thickness information is desirably the thickness of a medical device commonly used in IVR procedures. The control unit 21 may receive information about the medical device in use and give thickness information corresponding to that medical device. By giving the thickness information, the three-dimensional shape of the medical device is reproduced.
The control unit 21 synthesizes the three-dimensional shape of the medical device generated in step S663 with the biological three-dimensional data 551 generated in step S662 (step S664). The control unit 21 displays the synthesized three-dimensional data 55 on the display device 31 (step S665).
The control unit 21 receives instructions from the user, such as rotation, change of cross section, enlargement and reduction, for the three-dimensionally displayed image and changes the display accordingly. Since receiving instructions for a three-dimensionally displayed image and changing the display are conventional, the description thereof is omitted. The control unit 21 then ends the processing.
According to the present embodiment, it is possible to provide a catheter system 10 that removes the influence of false detections of position information and displays the medical device with an appropriate shape. The user can easily grasp the positional relationship between, for example, a Brockenbrough needle and the fossa ovalis, and perform the IVR procedure.
Instead of performing the processing of steps S643 to S645, abnormal position information that lies far away from the other position information may be removed by clustering the plurality of pieces of position information.
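A minimal sketch of such a clustering-based alternative follows, using DBSCAN from scikit-learn; the choice of DBSCAN and its parameter values are assumptions, since only clustering in general is mentioned here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def remove_outliers(positions: np.ndarray) -> np.ndarray:
    """Keep only positions belonging to a dense cluster.

    positions: (N, 2) in-plane coordinates of detected medical device points.
    DBSCAN labels sparse, isolated detections as noise (label -1); eps is in
    the same units as the coordinates (assumed millimeters here).
    """
    labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(positions)
    return positions[labels != -1]
```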
[Modification 8-1]
The present modification relates to a catheter system 10 that performs three-dimensional display based on the medical device region detected from the catheter image 51 when the medical device has not been falsely detected. The description of the parts common to the eighth embodiment is omitted.
In step S663 of the subroutine described with reference to FIG. 34, the control unit 21 determines the thickness of the medical device based on, for example, the medical device region output from the hinted trained model 631 or the hintless trained model 632. However, for a catheter image 51 whose position information is determined to be erroneous, the thickness information is complemented based on the medical device regions of the preceding and following catheter images 51.
According to this modification, it is possible to provide a catheter system 10 that appropriately displays, in a three-dimensional image, a medical device whose thickness changes along its length, such as a medical device with a needle protruding from a sheath.
[Embodiment 9]
The present embodiment relates to a padding process suitable for a trained model that processes RT format catheter images 518 acquired using the radial scanning type image acquisition catheter 40. The description of the parts common to the first embodiment is omitted.
The padding process is a process of adding data around the input data before performing a convolution process. In the convolution process immediately after the input layer that receives an image, the input data is the input image itself. In convolution processes other than the one immediately after the input layer, the input data is the feature map extracted in the preceding stage. In trained models that process image data, a so-called zero padding process, in which data of "0" is added around the input data input to a convolutional layer, is generally performed.
FIG. 35 is an explanatory diagram illustrating the padding process of the ninth embodiment. The left end of FIG. 35 is a schematic diagram of the input data input to the convolutional layer. The convolutional layer is, for example, the first convolutional layer included in the medical device learned model 611 or the second convolutional layer included in the angle trained model 612. The convolutional layer may be a convolutional layer included in any trained model used for processing catheter images 51 captured using the radial scanning type image acquisition catheter 40.
The input data is in RT format; the horizontal direction corresponds to the distance from the sensor 42, and the vertical direction corresponds to the scanning angle. Enlarged schematic views of the upper right end portion and the lower left end portion of the input data are shown in the center of FIG. 35. Each cell corresponds to a pixel, and the numerical value in each cell corresponds to a pixel value.
The right end of FIG. 35 is a schematic diagram of the data after the padding process of the present embodiment has been performed. The numerical values shown in italics indicate the data added by the padding process. Data of "0" is added to the left and right ends of the input data. The data indicated by "A" at the lower end of the data before the padding process is copied to the upper end of the input data. The data indicated by "B" at the upper end of the data before the padding process is copied to the lower end of the input data.
That is, at the right end of FIG. 35, the same data as on the side with the large scanning angle is added outside the side with the small scanning angle, and the same data as on the side with the small scanning angle is added outside the side with the large scanning angle. In the following description, the padding process described with reference to FIG. 35 is referred to as the polar padding process.
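A minimal sketch of the polar padding process with NumPy follows, assuming a single-channel RT format array with the scanning angle along the row axis and the distance along the column axis; the pad width of one pixel matches the example of FIG. 35.

```python
import numpy as np

def polar_pad(rt_data: np.ndarray, pad: int = 1) -> np.ndarray:
    """Pad an RT format array for convolution.

    Angle axis (rows): wrap around, because the top and bottom of an RT
    format image are contiguous in a radial scan.
    Distance axis (columns): ordinary zero padding.
    """
    wrapped = np.pad(rt_data, ((pad, pad), (0, 0)), mode="wrap")
    return np.pad(wrapped, ((0, 0), (pad, pad)), mode="constant",
                  constant_values=0)
```

In a deep learning framework, a comparable effect can typically be obtained by combining circular padding along the angle dimension with zero padding along the distance dimension before each convolution.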
In the radial scanning type image acquisition catheter 40, the upper end and the lower end of the RT format catheter image 518 are substantially identical. For example, a single medical device, lesion or the like may be split between the top and bottom of the RT format catheter image 518. The polar padding process takes advantage of this characteristic.
According to the present embodiment, it is possible to generate a trained model that sufficiently reflects the information at the top and bottom of an RT format image.
The polar padding process may be performed in all the convolutional layers included in the trained model, or in only some of the convolutional layers.
FIG. 35 shows an example of a padding process in which one piece of data is added to each of the four sides of the input data, but the padding process may add a plurality of pieces of data. The number of pieces of data added in the polar padding process is selected according to the size of the filter and the stride amount used in the convolution process.
[Modification 9-1]
FIG. 36 is an explanatory diagram illustrating the polar padding process of this modification. The polar padding process of this modification is effective for the convolutional layer at the stage where the RT format catheter image 518 is first processed.
The upper side of FIG. 36 schematically shows a state in which radial scanning is performed while the sensor 42 is pulled to the right. One RT format catheter image 518, schematically shown at the lower left of FIG. 36, is generated based on the scanning line data acquired while the sensor 42 makes one rotation. The RT format catheter image 518 is formed from the top downward as the sensor 42 rotates.
The lower right of FIG. 36 schematically shows a state in which the padding process has been applied to the RT format catheter image 518. The data at the end of the RT format catheter image 518 one rotation earlier, indicated by hatching sloping down to the left, is added above the RT format catheter image 518. The data at the start of the RT format catheter image 518 one rotation later, indicated by hatching sloping down to the right, is added below the RT format catheter image 518. Data of "0" is added to the left and right of the RT format catheter image 518.
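A minimal sketch of this variant follows, assuming the frames generated from the previous and following sensor rotations are available as arrays of the same shape as the current frame; the use of real neighboring scan line data instead of wrap-around copying is the point of the modification.

```python
import numpy as np

def polar_pad_with_neighbors(current: np.ndarray,
                             previous: np.ndarray,
                             following: np.ndarray,
                             pad: int = 1) -> np.ndarray:
    """Pad with actual scan line data from adjacent rotations.

    Above the current frame: the last rows of the previous rotation.
    Below the current frame: the first rows of the following rotation.
    Left and right (distance axis): zero padding.
    """
    stacked = np.concatenate(
        [previous[-pad:, :], current, following[:pad, :]], axis=0)
    return np.pad(stacked, ((0, 0), (pad, pad)), mode="constant",
                  constant_values=0)
```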
According to this modification, the padding process is performed based on actual scanning line data, so it is possible to generate a trained model that reflects the information at the top and bottom of an RT format image even more accurately.
[Embodiment 10]
FIG. 37 is an explanatory diagram illustrating the configuration of the catheter system 10 of the tenth embodiment. The present embodiment relates to a form in which the catheter system 10 of the present embodiment is realized by operating a catheter control device 27, the MDU 33, the image acquisition catheter 40, a general-purpose computer 90 and a program 97 in combination. The description of the parts common to the first embodiment is omitted.
The catheter control device 27 is an ultrasonic diagnostic apparatus for IVUS that controls the MDU 33, controls the sensor 42, and generates transverse tomographic images and longitudinal tomographic images based on the signals received from the sensor 42. Since the functions and configuration of the catheter control device 27 are the same as those of conventional ultrasonic diagnostic apparatuses, the description thereof is omitted.
The catheter system 10 of the present embodiment includes the computer 90. The computer 90 includes the control unit 21, the main storage device 22, the auxiliary storage device 23, the communication unit 24, a display unit 25, an input unit 26, a reading unit 29 and a bus. The computer 90 is an information device such as a general-purpose personal computer, a tablet, a smartphone or a server computer.
The program 97 is recorded on a portable recording medium 96. The control unit 21 reads the program 97 via the reading unit 29 and stores it in the auxiliary storage device 23. The control unit 21 may also read a program 97 stored in a semiconductor memory 98, such as a flash memory, mounted in the computer 90. Furthermore, the control unit 21 may download the program 97 from another server computer (not shown) connected via the communication unit 24 and a network (not shown), and store it in the auxiliary storage device 23.
The program 97 is installed as a control program of the computer 90, loaded into the main storage device 22, and executed. As a result, the computer 90 functions as the information processing device 20 described above.
The computer 90 is a general-purpose personal computer, a tablet, a smartphone, a large computer, a virtual machine operating on a large computer, a cloud computing system, or a quantum computer. The computer 90 may be a plurality of personal computers or the like that perform distributed processing.
[Embodiment 11]
FIG. 38 is a functional block diagram of the information processing device 20 of the eleventh embodiment. The information processing device 20 includes an image acquisition unit 81 and a first position information output unit 83. The image acquisition unit 81 acquires the catheter image 51 obtained by the radial scanning type image acquisition catheter 40.
The first position information output unit 83 inputs the acquired catheter image 51 into the medical device learned model 611, which outputs, when the catheter image 51 is input, first position information regarding the position of the medical device included in the catheter image 51, and outputs the first position information.
(Appendix A1)
An information processing device comprising:
an image acquisition unit that acquires a catheter image obtained by an image acquisition catheter inserted into a first cavity; and
a first classification data output unit that inputs the acquired catheter image into a first classification trained model, which outputs, when the catheter image is input, first classification data in which a non-living tissue region, including a first lumen region that is the inside of the first cavity and a second lumen region that is the inside of a second cavity into which the image acquisition catheter is not inserted, and a biological tissue region are classified as different regions, and that outputs the first classification data, wherein
the first classification trained model is generated using first training data in which at least the non-living tissue region, including the first lumen region and the second lumen region, and the biological tissue region are specified.
(Appendix A2)
The information processing device according to Appendix A1, further comprising:
a lumen region extraction unit that extracts the first lumen region and the second lumen region from the non-living tissue region in the first classification data; and
a first mode output unit that changes the first classification data into a mode in which the first lumen region, the second lumen region and the biological tissue region are distinguishable from one another, and outputs the changed data.
(Appendix A3)
The information processing device according to Appendix A1 or Appendix A2, further comprising a second mode output unit that extracts, from the non-biological tissue region in the first classification data, a non-lumen region that is neither the first lumen region nor the second lumen region, converts the first classification data into a form in which the first lumen region, the second lumen region, the non-lumen region, and the biological tissue region are distinguishable from one another, and outputs the converted data.
(Appendix A4)
The information processing device according to Appendix A3, wherein the first classification trained model, when the catheter image is input, outputs the first classification data in which the biological tissue region, the first lumen region, the second lumen region, and the non-lumen region are classified as distinct regions.
(Appendix A5)
The information processing device according to any one of Appendices A1 to A4, wherein
the image acquisition catheter is a radial scanning type tomographic image acquisition catheter,
the catheter image is an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle, and
the first classification data is a classification result for each pixel of the RT format image.
(Appendix A6)
The information processing device according to Appendix A5, wherein
the first classification trained model includes a plurality of convolutional layers, and
at least one of the plurality of convolutional layers is trained with a padding process that appends, outside the small scanning angle edge, the same data as on the large scanning angle edge, and appends, outside the large scanning angle edge, the same data as on the small scanning angle edge.
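Because the first and last scanning lines of an RT format image are physically adjacent on a radial scan, the padding process of Appendix A6 amounts to wrap-around (circular) padding along the scanning angle axis. The following sketch is a minimal, non-authoritative illustration, assuming PyTorch and an input tensor of shape (batch, channels, angle, depth); the function name and the choice of zero padding on the depth axis are assumptions for illustration, not part of the disclosure.

```python
import torch
import torch.nn.functional as F

def angular_wrap_conv(rt_image: torch.Tensor, weight: torch.Tensor, pad: int = 1) -> torch.Tensor:
    """Convolve an RT format image with wrap-around padding on the angle axis.

    rt_image: tensor of shape (N, C, angle, depth); rows are scanning lines
    ordered by scanning angle, so the first and last rows are physically
    adjacent on the radial scan.
    weight: convolution kernel of shape (out_ch, C, 2*pad+1, 2*pad+1),
    so the output keeps the input's spatial size.
    """
    # Append the last `pad` rows above the first row and the first `pad`
    # rows below the last row (circular padding on the angle axis only).
    x = F.pad(rt_image, (0, 0, pad, pad), mode="circular")
    # The depth axis has no such continuity; pad it with zeros.
    x = F.pad(x, (pad, pad, 0, 0), mode="constant", value=0)
    return F.conv2d(x, weight)
```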
(Appendix A7)
The information processing device according to any one of Appendices A1 to A6, wherein the first classification trained model, when a plurality of the catheter images acquired in time series are input, outputs the first classification data in which the non-biological tissue region and the biological tissue region are classified for the most recent catheter image among the plurality of catheter images.
(Appendix A8)
The information processing device according to Appendix A7, wherein the first classification trained model includes a memory unit that holds information on catheter images input in the past, and outputs the first classification data based on the information held in the memory unit and the most recent catheter image among the plurality of catheter images.
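One common way to realize such a memory unit is a recurrent hidden state carried across frames. The sketch below is a speculative illustration rather than the disclosed architecture: it keeps a feature map summarizing past frames and fuses it with the features of the latest frame before the per-pixel classification head. All class names, layer sizes, and the detach-based memory update are assumptions.

```python
import torch
from torch import nn

class RecurrentSegmenter(nn.Module):
    """Per-pixel classifier with a simple feature memory across frames (illustrative)."""

    def __init__(self, in_ch: int = 1, feat_ch: int = 16, n_classes: int = 3):
        super().__init__()
        self.encode = nn.Conv2d(in_ch, feat_ch, 3, padding=1)
        # Fuses current features with the remembered features of past frames.
        self.fuse = nn.Conv2d(feat_ch * 2, feat_ch, 3, padding=1)
        self.head = nn.Conv2d(feat_ch, n_classes, 1)
        self.memory = None  # feature map holding information on past inputs

    def reset(self):
        """Clear the memory, e.g. at the start of a new pullback."""
        self.memory = None

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        feat = torch.relu(self.encode(frame))
        if self.memory is None:
            self.memory = torch.zeros_like(feat)
        fused = torch.relu(self.fuse(torch.cat([feat, self.memory], dim=1)))
        self.memory = fused.detach()  # simplification: no backprop through time
        return self.head(fused)  # per-pixel class logits for the latest frame
```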
(Appendix A9)
The information processing device according to any one of Appendices A1 to A8, wherein the first classification trained model, when the catheter image is input, outputs the first classification data in which the biological tissue region, the non-biological tissue region, and a medical device region representing a medical device inserted into the first cavity or the second cavity are classified as distinct regions.
(Appendix A10)
The information processing device according to any one of Appendices A1 to A9, further comprising:
a second classification data acquisition unit that inputs the acquired catheter image into a second classification trained model, which, when the catheter image is input, outputs second classification data in which the non-biological tissue region, including the first lumen region, and the biological tissue region are classified as distinct regions, and acquires the output second classification data; and
a combined classification data output unit that outputs combined classification data obtained by combining the second classification data with the first classification data,
wherein the second classification trained model is generated using second training data in which, of the non-biological tissue region, only the first lumen region is explicitly labeled.
(Appendix A11)
The information processing device according to Appendix A10, wherein the second classification trained model, when the catheter image is input, outputs the second classification data in which the biological tissue region, the non-biological tissue region, and a medical device region representing a medical device inserted into the first cavity or the second cavity are classified as distinct regions.
(Appendix A12)
The information processing device according to Appendix A10 or Appendix A11, wherein
the first classification trained model further outputs, for each portion of the catheter image, a probability of being the biological tissue region or a probability of being the non-biological tissue region,
the second classification trained model further outputs, for each portion of the catheter image, a probability of being the biological tissue region or a probability of being the non-biological tissue region, and
the combined classification data output unit outputs the combined classification data, in which the second classification data is combined with the first classification data, based on a result of computing, for each portion of the catheter image, the probability of being the biological tissue region or the probability of being the non-biological tissue region.
(Appendix A13)
The information processing device according to any one of Appendices A1 to A12, wherein the image acquisition catheter is a three-dimensional scanning catheter that sequentially acquires a plurality of the catheter images along the longitudinal direction of the image acquisition catheter.
(Appendix A14)
The information processing device according to Appendix A13, further comprising a three-dimensional output unit that outputs a three-dimensional image generated based on a plurality of the first classification data respectively generated from the plurality of acquired catheter images.
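As a minimal sketch of how per-frame classification data could be assembled into a three-dimensional image, under the assumptions that each frame's first classification data is an (H, W) integer label map, that scikit-image is available for surface extraction, and that the class index and voxel spacing below are placeholders:

```python
import numpy as np
from skimage import measure  # scikit-image, used here only for surface extraction

FIRST_LUMEN = 1  # assumed class index of the first lumen region

def lumen_surface(label_maps: list, spacing=(0.5, 0.1, 0.1)):
    """Stack per-frame (H, W) label maps into a (Z, H, W) volume and extract
    the first-lumen surface as a triangle mesh for 3-D display.

    Z is the frame index along the longitudinal direction of the catheter;
    spacing is the assumed voxel size (frame pitch, pixel height, pixel width).
    """
    volume = np.stack(label_maps, axis=0)
    mask = (volume == FIRST_LUMEN).astype(np.float32)
    verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5, spacing=spacing)
    return verts, faces, normals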
(Appendix A15)
An information processing method causing a computer to execute processing of:
acquiring a catheter image obtained by an image acquisition catheter inserted into a first cavity; and
inputting the acquired catheter image into a first classification trained model and outputting first classification data, the first classification trained model being generated using first training data in which at least a non-biological tissue region, including a first lumen region inside the first cavity and a second lumen region inside a second cavity into which the image acquisition catheter is not inserted, and a biological tissue region are explicitly labeled, and outputting, when the catheter image is input, the first classification data in which the non-biological tissue region and the biological tissue region are classified as distinct regions.
(Appendix A16)
A program causing a computer to execute processing of:
acquiring a catheter image obtained by an image acquisition catheter inserted into a first cavity; and
inputting the acquired catheter image into a first classification trained model and outputting first classification data, the first classification trained model being generated using first training data in which at least a non-biological tissue region, including a first lumen region inside the first cavity and a second lumen region inside a second cavity into which the image acquisition catheter is not inserted, and a biological tissue region are explicitly labeled, and outputting, when the catheter image is input, the first classification data in which the non-biological tissue region and the biological tissue region are classified as distinct regions.
(Appendix A17)
A trained model generation method comprising:
acquiring a plurality of sets of training data in which a catheter image obtained by an image acquisition catheter inserted into a first cavity is recorded in association with label data to which a plurality of labels are assigned for each portion of the catheter image, the labels including a biological tissue region label indicating a biological tissue region, and a non-biological tissue region label covering a first lumen region indicating the inside of the first cavity, a second lumen region indicating the inside of a second cavity into which the image acquisition catheter is not inserted, and a non-lumen region that is neither the first lumen region nor the second lumen region; and
using the plurality of sets of training data, with the catheter image as input and the label data as output, to generate a trained model that, when the catheter image is input, outputs the biological tissue region label and the non-biological tissue region label for each portion of the catheter image.
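A minimal supervised training loop consistent with this method is sketched below, under the assumption of a PyTorch segmentation network that maps a (B, 1, H, W) image batch to (B, n_classes, H, W) logits; the dataset wrapper, batch size, and learning rate are placeholders, not the disclosed configuration.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_model(model: nn.Module, images: torch.Tensor, labels: torch.Tensor,
                epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Train a per-pixel classifier from (catheter image, label data) pairs.

    images: (N, 1, H, W) float tensor of RT format catheter images.
    labels: (N, H, W) long tensor of integer label maps (e.g. 0 = biological
            tissue, 1 = non-biological tissue), matching the label data above.
    """
    loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # per-pixel classification loss
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```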
(Appendix A18)
The trained model generation method according to Appendix A17, wherein
the non-biological tissue region label of the plurality of sets of training data includes a first lumen region label indicating the first lumen region, a second lumen region label indicating the second lumen region, and a non-lumen region label indicating the non-lumen region, and
the plurality of sets of training data are used, with the catheter image as input and the label data as output, to generate a trained model that, when the catheter image is input, outputs the biological tissue region label, the first lumen region label, the second lumen region label, and the non-lumen region label for each portion of the catheter image.
(Appendix A19)
A trained model generation method comprising:
acquiring a plurality of sets of training data in which a catheter image obtained by an image acquisition catheter inserted into a first cavity is recorded in association with label data to which a plurality of labels are assigned, the label data being generated based on boundary line data indicating the inner boundary line of the first cavity in the catheter image and having a biological tissue region label indicating a biological tissue region and a non-biological tissue region label covering a first lumen region indicating the inside of the first cavity; and
using the plurality of sets of training data, with the catheter image as input and the label data as output, to generate a trained model that, when the catheter image is input, outputs the biological tissue region label and the non-biological tissue region label for each portion of the catheter image.
(Appendix A20)
The trained model generation method according to any one of Appendices A17 to A19, wherein
the catheter image is an RT format image, obtained by a radial scanning type image acquisition catheter, in which scanning line data for one rotation are arranged in parallel in order of scanning angle,
the trained model includes a plurality of convolutional layers, and
at least one of the convolutional layers is trained with a padding process that appends, outside the small scanning angle edge, the same data as on the large scanning angle edge, and appends, outside the large scanning angle edge, the same data as on the small scanning angle edge.
(Appendix B1)
An information processing device comprising:
an image acquisition unit that acquires a catheter image obtained by a radial scanning type image acquisition catheter; and
a first position information output unit that inputs the acquired catheter image into a medical device trained model, which, when the catheter image is input, outputs first position information regarding the position of a medical device included in the catheter image, and outputs the first position information.
(Appendix B2)
The information processing device according to Appendix B1, wherein the first position information output unit outputs the first position information using the position of a single pixel included in the catheter image.
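Representing the device position by a single pixel suggests, for example, reading out the most probable pixel from a model's output map. The sketch below assumes the medical device trained model returns a per-pixel score map, which is an assumption about the output encoding rather than the disclosed format.

```python
import numpy as np

def device_pixel(score_map: np.ndarray):
    """Return the (row, column) of the single most probable device pixel.

    score_map: (H, W) array of per-pixel scores for the presence of the
    medical device; in an RT format image the row corresponds to the
    scanning angle and the column to the distance from the catheter.
    """
    idx = int(score_map.argmax())
    return np.unravel_index(idx, score_map.shape)  # (angle index, depth index)
```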
(Appendix B3)
The information processing device according to Appendix B1 or Appendix B2, wherein the first position information output unit comprises:
a first position information acquisition unit that acquires time-series first position information corresponding to a plurality of the catheter images obtained in time series;
an exclusion unit that excludes, from the time-series first position information, first position information that does not satisfy a predetermined condition; and
a complementation unit that adds, to the time-series first position information, complementary information satisfying a predetermined condition.
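A plausible concrete reading of the exclusion and complementation, sketched below as an assumption: positions that jump implausibly far between consecutive frames are discarded, and discarded or missing frames are filled by linear interpolation between the surviving neighbors. The threshold value is a placeholder for the "predetermined condition".

```python
import numpy as np

def clean_positions(positions, max_jump: float = 20.0):
    """Exclude implausible device positions from a time series and fill gaps.

    positions: per-frame (row, col) tuples, or None where no device was found.
    max_jump: largest plausible movement between consecutive frames, in
    pixels (an assumed threshold). Returns one (row, col) tuple per frame.
    """
    pts = [None if p is None else np.asarray(p, dtype=float) for p in positions]
    # Exclusion: drop any position that jumps too far from the last accepted one.
    last = None
    for i, p in enumerate(pts):
        if p is None:
            continue
        if last is not None and np.linalg.norm(p - last) > max_jump:
            pts[i] = None
        else:
            last = p
    # Complementation: linearly interpolate excluded or missing frames
    # between their nearest accepted neighbors.
    kept = [i for i, p in enumerate(pts) if p is not None]
    out = []
    for i, p in enumerate(pts):
        if p is not None:
            out.append(tuple(p))
            continue
        prev = max((j for j in kept if j < i), default=None)
        nxt = min((j for j in kept if j > i), default=None)
        if prev is None and nxt is None:
            out.append((float("nan"), float("nan")))  # nothing to anchor on
        elif prev is None:
            out.append(tuple(pts[nxt]))
        elif nxt is None:
            out.append(tuple(pts[prev]))
        else:
            w = (i - prev) / (nxt - prev)
            out.append(tuple((1 - w) * pts[prev] + w * pts[nxt]))
    return out
```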
(Appendix B4)
The information processing device according to any one of Appendices B1 to B3, wherein the medical device trained model, when a plurality of the catheter images acquired in time series are input, outputs the first position information for the most recent catheter image among the plurality of catheter images.
(Appendix B5)
The information processing device according to Appendix B4, wherein the medical device trained model includes a memory unit that holds information on catheter images input in the past, and outputs the first position information based on the information held in the memory unit and the most recent catheter image among the plurality of catheter images.
(Appendix B6)
The information processing device according to any one of Appendices B1 to B5, wherein the medical device trained model
accepts the catheter image as an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle,
includes a plurality of first convolutional layers, and
is trained with a padding process in at least one of the plurality of first convolutional layers that appends, outside the small scanning angle edge, the same data as on the large scanning angle edge, and appends, outside the large scanning angle edge, the same data as on the small scanning angle edge.
(Appendix B7)
The information processing device according to any one of Appendices B1 to B6, further comprising:
a scanning angle information acquisition unit that inputs the acquired catheter image into an angle trained model, which, when the catheter image is input, outputs scanning angle information regarding the position of the medical device included in the catheter image, and acquires the output scanning angle information; and
a second position information output unit that outputs second position information regarding the position of the medical device included in the catheter image, based on the first position information output from the medical device trained model and the scanning angle information output from the angle trained model.
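The fusion rule for the second position information is not fixed by the appendix. One simple assumed rule, sketched below, keeps the pixel position from the medical device trained model when its scanning angle agrees with the angle trained model, and otherwise substitutes the angle model's most probable scanning line; the tolerance and the probability-vector encoding of the scanning angle information are assumptions.

```python
import numpy as np

def second_position(pixel_pos, angle_probs: np.ndarray, tolerance: int = 5):
    """Fuse a device pixel position with per-scanning-angle device probabilities.

    pixel_pos: (angle_row, depth_col) first position information from the
    medical device trained model.
    angle_probs: (n_angles,) scanning angle information from the angle
    trained model (probability that each scanning line contains the device).
    tolerance: allowed disagreement in scanning-line rows (assumed value).
    """
    angle_row, depth_col = pixel_pos
    best_angle = int(np.argmax(angle_probs))
    n = len(angle_probs)
    d = abs(angle_row - best_angle)
    if min(d, n - d) <= tolerance:   # angular distance with wrap-around
        return angle_row, depth_col  # the models agree; keep the pixel position
    return best_angle, depth_col     # otherwise trust the angle model's angle
```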
(Appendix B8)
The information processing device according to Appendix B7, wherein the angle trained model
accepts the catheter image as an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle,
includes a plurality of second convolutional layers, and
is trained with a padding process in at least one of the plurality of second convolutional layers that appends, outside the small scanning angle edge, the same data as on the large scanning angle edge, and appends, outside the large scanning angle edge, the same data as on the small scanning angle edge.
(Appendix B9)
The information processing device according to any one of Appendices B1 to B8, wherein the medical device trained model is generated using a plurality of sets of training data in which the catheter image is recorded in association with the position of the medical device included in the catheter image.
(Appendix B10)
The information processing device according to Appendix B9, wherein the training data is generated by processing of:
displaying the catheter image obtained by the image acquisition catheter;
receiving the position of the medical device included in the catheter image through a single click operation or a single tap operation on the catheter image; and
storing the catheter image in association with the position of the medical device.
(Appendix B11)
The information processing device according to Appendix B9, wherein the training data is generated by processing of:
inputting the catheter image into the medical device trained model;
displaying the first position information output from the medical device trained model superimposed on the input catheter image;
when no correction instruction regarding the position of the medical device included in the catheter image is received, storing uncorrected data in which the catheter image is associated with the first position information as the training data; and
when a correction instruction regarding the position of the medical device included in the catheter image is received, storing corrected data in which the catheter image is associated with position information based on the correction instruction as the training data.
(Appendix B12)
A trained model generation method comprising:
acquiring a plurality of sets of training data in which a catheter image obtained by an image acquisition catheter is recorded in association with first position information regarding the position of a medical device included in the catheter image; and
generating, based on the plurality of sets of training data, a trained model that outputs the first position information regarding the position of the medical device included in the catheter image when the catheter image is input.
(Appendix B13)
The trained model generation method according to Appendix B12, wherein the first position information is information regarding the position of a single pixel included in the catheter image.
(Appendix B14)
A training data generation method causing a computer to execute processing of:
displaying a catheter image, including a lumen, obtained by an image acquisition catheter;
receiving, through a single click operation or a single tap operation on the catheter image, first position information regarding the position of a medical device inserted into the lumen included in the catheter image; and
storing training data in which the catheter image is associated with the first position information.
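A minimal annotation tool consistent with this method could be built with matplotlib's event handling; the sketch below is an assumed illustration (the function name, input format, and CSV storage are placeholders), recording one clicked pixel per displayed catheter image and then advancing.

```python
import csv
import matplotlib.pyplot as plt

def annotate(images, out_csv: str = "device_positions.csv"):
    """Show catheter images one by one; a single click records the device position.

    images: iterable of (image_id, 2-D ndarray) pairs. Each click stores
    (image_id, row, col) as one training record and closes the figure,
    advancing to the next image.
    """
    records = []
    for image_id, img in images:
        fig, ax = plt.subplots()
        ax.imshow(img, cmap="gray")
        ax.set_title(f"{image_id}: click the medical device")

        def on_click(event, image_id=image_id, fig=fig):
            if event.inaxes is not None:
                records.append((image_id, int(event.ydata), int(event.xdata)))
                plt.close(fig)  # one click finishes this image

        fig.canvas.mpl_connect("button_press_event", on_click)
        plt.show()
    with open(out_csv, "w", newline="") as f:
        csv.writer(f).writerows([("image_id", "row", "col"), *records])
```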
(Appendix B15)
The training data generation method according to Appendix B14, wherein the first position information is information regarding the position of a single pixel included in the catheter image.
(Appendix B16)
The training data generation method according to Appendix B14 or Appendix B15, wherein, when the first position information is received for the catheter image, another catheter image obtained consecutively in time series is displayed.
(Appendix B17)
The training data generation method according to any one of Appendices B14 to B16, wherein
the image acquisition catheter is a radial scanning type tomographic image acquisition catheter,
the catheter image is displayed as two side-by-side images: an RT format image in which a plurality of scanning line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle, and an XY format image in which data based on the scanning line data are arranged radially around the image acquisition catheter, and
the first position information is accepted from either the RT format image or the XY format image.
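The RT and XY formats are related by a polar-to-Cartesian transform around the catheter. A minimal conversion sketch, assuming OpenCV and an RT image whose rows are scanning angles and whose columns are distances from the catheter center (output size and interpolation are assumed choices):

```python
import cv2
import numpy as np

def rt_to_xy(rt_image: np.ndarray, size: int = 512) -> np.ndarray:
    """Convert an RT format image (angle x depth) into an XY format image.

    cv2.warpPolar with WARP_INVERSE_MAP maps a polar image back to Cartesian
    coordinates; each row of rt_image becomes one direction radiating from
    the image center, i.e. one scanning line of the radial scan.
    """
    center = (size / 2.0, size / 2.0)
    max_radius = size / 2.0
    return cv2.warpPolar(
        rt_image, (size, size), center, max_radius,
        cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR,
    )
```

Clicked positions can be mapped between the two displays with the same polar relationship, which is how a single annotation could be accepted from either image.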
(Appendix B18)
A training data generation method causing a computer to execute processing of:
inputting a catheter image obtained by an image acquisition catheter into a medical device trained model that, when the catheter image is input, outputs first position information regarding the position of a medical device included in the catheter image;
displaying the first position information output from the medical device trained model superimposed on the input catheter image;
when no correction instruction regarding the position of the medical device included in the catheter image is received, storing uncorrected data in which the catheter image is associated with the first position information as training data; and
when a correction instruction regarding the position of the medical device included in the catheter image is received, storing corrected data in which the catheter image is associated with the received information regarding the position of the medical device as the training data.
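This flow is a human-in-the-loop labeling loop. The console-based sketch below is an assumed, deliberately minimal rendering of it: the superimposed display and correction input are reduced to text prompts, and the record format is a placeholder.

```python
def collect_training_data(images, model_predict, records: list):
    """Confirm or correct model-predicted device positions, storing both kinds.

    images: iterable of (image_id, image) pairs.
    model_predict: callable returning a (row, col) first position for an image.
    records: list receiving (image_id, row, col, corrected) training tuples.
    """
    for image_id, img in images:
        row, col = model_predict(img)
        # In a real tool the prediction would be drawn over the catheter
        # image; here the operator simply answers on the console.
        answer = input(f"{image_id}: predicted ({row}, {col}); "
                       f"press Enter to accept or type 'row,col' to correct: ").strip()
        if not answer:
            records.append((image_id, row, col, False))  # uncorrected data
        else:
            r, c = (int(v) for v in answer.split(","))
            records.append((image_id, r, c, True))       # corrected data
```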
(Appendix B19)
The training data generation method according to Appendix B18, wherein the uncorrected data and the corrected data are data regarding the position of a single pixel included in the catheter image.
(Appendix B20)
The training data generation method according to Appendix B18 or Appendix B19, wherein a plurality of the catheter images obtained in time series are input to the medical device trained model in order, and each output position is displayed in order superimposed on the corresponding input catheter image.
(Appendix B21)
The training data generation method according to any one of Appendices B18 to B20, wherein the position of the medical device is received by a single click operation or a single tap operation.
(Appendix B22)
The training data generation method according to any one of Appendices B18 to B21, wherein:
the image acquisition catheter is a radial scanning type tomographic image acquisition catheter;
the catheter image is displayed as two images side by side, an RT format image in which a plurality of scan line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle, and an XY format image in which data based on the scan line data are arranged radially around the image acquisition catheter; and
the position of the medical device can be received from either the RT format image or the XY format image.
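Appendix B22 shows the same scan in two geometries: the RT format image stacks the scan lines in scanning-angle order, and the XY format image lays them out radially around the catheter. The conversion between them is a polar-to-Cartesian remapping; a sketch using OpenCV follows, where the convention that RT rows correspond to scanning angle and columns to depth is our assumption:

```python
import cv2
import numpy as np

def rt_to_xy(rt_image: np.ndarray, out_size: int = 512) -> np.ndarray:
    """Convert an RT format image (rows = scan lines ordered by
    scanning angle, columns = depth along each line) into an XY
    format image with the scan lines arranged radially."""
    center = (out_size / 2.0, out_size / 2.0)
    max_radius = out_size / 2.0
    # WARP_INVERSE_MAP treats the source as polar (angle x radius)
    # and resamples it onto a Cartesian grid.
    return cv2.warpPolar(
        rt_image, (out_size, out_size), center, max_radius,
        cv2.INTER_LINEAR | cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)
```

Because the mapping is invertible, a position clicked in either view can be transformed into the other, which is what allows B22 to accept the device position from either image.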
(Appendix C1)
An information processing device comprising:
an image acquisition unit that acquires a catheter image including a lumen obtained by an image acquisition catheter;
a position information acquisition unit that acquires position information regarding the position of a medical device inserted into the lumen included in the catheter image; and
a first data output unit that inputs the acquired catheter image and the acquired position information into a first trained model that, when the catheter image and the position information are input, outputs first data classifying each region of the catheter image into at least three regions, namely a biological tissue region, a medical device region in which the medical device is present, and a non-biological tissue region, and outputs the first data.
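Appendix C1 describes a first trained model that takes both the catheter image and the device position information as input. A common realization, assumed here rather than fixed by the text, is to encode the position hint as an extra input channel (for example a Gaussian heatmap marking the hinted pixel) and concatenate it with the image before the convolutions:

```python
import torch
import torch.nn as nn

class HintSegmenter(nn.Module):
    """Sketch of a hint-conditioned segmentation model: image and
    position-hint map enter as two channels, and the network emits
    per-pixel scores for three classes (biological tissue, medical
    device, non-biological tissue). Architecture is illustrative."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1))

    def forward(self, image: torch.Tensor, hint: torch.Tensor):
        # image and hint: (batch, 1, H, W) each
        return self.net(torch.cat([image, hint], dim=1))
```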
(Appendix C2)
The information processing device according to Appendix C1, wherein the position information acquisition unit inputs the acquired catheter image into a medical device trained model that, when the catheter image is input, outputs the position information included in the catheter image, and acquires the position information from the medical device trained model.
(Appendix C3)
The information processing device according to Appendix C2, further comprising:
a second data acquisition unit that inputs the acquired catheter image into a second model that, when the catheter image is input without the position information, outputs second data classifying each region of the catheter image into at least three regions, namely a biological tissue region, a medical device region in which the medical device is present, and a non-biological tissue region, and acquires the second data; and
a synthetic data output unit that outputs synthetic data obtained by synthesizing the first data and the second data.
(Appendix C4)
The information processing device according to Appendix C3, wherein the synthetic data output unit comprises:
a first synthetic data output unit that outputs first synthetic data obtained by synthesizing, from the first data and the second data, data regarding the biological-tissue-related regions classified into the biological tissue region and the non-biological tissue region; and
a second synthetic data output unit that outputs second synthetic data obtained by synthesizing, from the first data and the second data, data regarding the medical device region.
(Appendix C5)
The information processing device according to Appendix C4, wherein the second synthetic data output unit:
outputs the second synthetic data using the data regarding the medical device region included in the first data when the position information can be acquired from the medical device trained model; and
outputs the second synthetic data using the data regarding the medical device region included in the second data when the position information cannot be acquired from the medical device trained model.
(Appendix C6)
The information processing device according to Appendix C4, wherein the synthetic data output unit outputs the second synthetic data obtained by synthesizing the data regarding the medical device region based on weighting according to the reliability of the first data and the reliability of the second data.
(Appendix C7)
The information processing device according to Appendix C6, wherein the reliability is determined based on whether or not the position information can be acquired from the medical device trained model.
(Appendix C8)
The information processing device according to Appendix C6, wherein the synthetic data output unit:
sets the reliability of the first data higher than the reliability of the second data when the position information can be acquired from the medical device trained model; and
sets the reliability of the first data lower than the reliability of the second data when the position information cannot be acquired from the medical device trained model.
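Appendices C5 to C8 merge the medical device region from the hint-conditioned output (first data) and the hint-free output (second data), weighting by a reliability that depends on whether the medical device trained model yielded position information. A minimal sketch; the weight values are illustrative, not taken from the text:

```python
import numpy as np

def merge_device_region(first_probs: np.ndarray,
                        second_probs: np.ndarray,
                        position_available: bool) -> np.ndarray:
    """Reliability-weighted synthesis of two per-pixel probability
    maps for the medical device region (Appendices C6 to C8)."""
    # Position information obtained: trust the hint-conditioned
    # output more; otherwise trust the hint-free output more.
    w_first = 0.8 if position_available else 0.2
    return w_first * first_probs + (1.0 - w_first) * second_probs
```

Setting `w_first` to 1.0 or 0.0 recovers the hard switch of Appendix C5 as a special case of the weighting of Appendix C6.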
(Appendix C9)
The information processing device according to any one of Appendices C1 to C8, wherein the image acquisition catheter is a three-dimensional scanning catheter that sequentially acquires a plurality of the catheter images along the longitudinal direction of the image acquisition catheter.
(Appendix C10)
An information processing method causing a computer to execute processing of:
acquiring a catheter image including a lumen obtained by an image acquisition catheter;
acquiring position information regarding the position of a medical device inserted into the lumen included in the catheter image; and
inputting the acquired catheter image and the acquired position information into a first trained model that, when the catheter image and position information regarding the position of the medical device included in the catheter image are input, outputs first data classifying each region of the catheter image into at least three regions, namely a biological tissue region, a medical device region in which the medical device is present, and a non-biological tissue region, and outputting the first data.
(Appendix C11)
A program causing a computer to execute processing of:
acquiring a catheter image including a lumen obtained by an image acquisition catheter;
acquiring position information regarding the position of a medical device inserted into the lumen included in the catheter image; and
inputting the acquired catheter image and the acquired position information into a first trained model that, when the catheter image and position information regarding the position of the medical device included in the catheter image are input, outputs first data classifying each region of the catheter image into at least three regions, namely a biological tissue region, a medical device region in which the medical device is present, and a non-biological tissue region, and outputting the first data.
The technical features (constituent elements) described in the embodiments can be combined with one another, and new technical features can be formed by such combinations.
The embodiments disclosed herein are to be considered in all respects as illustrative and not restrictive. The scope of the present invention is indicated not by the foregoing description but by the claims, and is intended to embrace all modifications within the meaning and scope equivalent to the claims.
10 Catheter system
20 Information processing device
21 Control unit
22 Main storage device
23 Auxiliary storage device
24 Communication unit
25 Display unit
26 Input unit
27 Catheter control device
271 Catheter control unit
29 Reading unit
31 Display device
32 Input device
33 MDU
37 Diagnostic imaging device
40 Image acquisition catheter
41 Probe portion
42 Sensor
43 Shaft
44 Tip marker
45 Connector portion
46 Guide wire lumen
51 Catheter image
518 RT format catheter image (catheter image)
519 XY format catheter image
52 Classification data (classification data without hint, second data)
521 First classification data (label data)
522 Second classification data (label data)
526 Synthetic classification data
528 RT format classification data
529 XY format classification data
536 Synthetic data
541 First synthesis unit
542 Second synthesis unit
543 Third synthesis unit
55 Three-dimensional data
551 Biological three-dimensional data
552 Medical device three-dimensional data
561 Classification data with hint (first data)
611 Medical device trained model
612 Angle trained model
615 Position information synthesis unit
619 Position information model
62 Classification model (second model)
621 First classification trained model
622 Second classification trained model
626 Synthetic classification model
628 Classification data synthesis unit
629 Classification data conversion unit
631 Trained model with hint (first trained model)
632 Trained model without hint (second trained model)
639 X% hint trained model
65 Position information acquisition unit
66 Position classification analysis unit
71 Medical device position training data DB
72 Training data with hint DB
781 Cursor
782 Control button area
81 Image acquisition unit
82 First classification data output unit
90 Computer
96 Portable recording medium
97 Program
98 Semiconductor memory
Claims (22)
- An information processing device comprising:
an image acquisition unit that acquires a catheter image obtained by a radial scanning type image acquisition catheter; and
a first position information output unit that inputs the acquired catheter image into a medical device trained model that, when the catheter image is input, outputs first position information regarding the position of a medical device included in the catheter image, and outputs the first position information.
- The information processing device according to claim 1, wherein the first position information output unit outputs the first position information using the position of one pixel included in the catheter image.
- The information processing device according to claim 1 or 2, wherein the first position information output unit comprises:
a first position information acquisition unit that acquires time-series first position information corresponding to each of a plurality of the catheter images obtained in time series;
an exclusion unit that excludes, from the time-series first position information, first position information that does not satisfy a predetermined condition; and
a complement unit that adds, to the time-series first position information, complementary information that satisfies a predetermined condition.
- The information processing device according to any one of claims 1 to 3, wherein, when a plurality of the catheter images acquired in time series are input, the medical device trained model outputs the first position information for the latest catheter image among the plurality of the catheter images.
- The information processing device according to claim 4, wherein the medical device trained model comprises a memory unit that holds information regarding the catheter images input in the past, and outputs the first position information based on the information held in the memory unit and the latest catheter image among the plurality of the catheter images.
- The information processing device according to any one of claims 1 to 5, wherein the medical device trained model:
receives the catheter image as an RT format image in which a plurality of scan line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle;
includes a plurality of first convolution layers; and
at least one of the plurality of first convolution layers has been trained with padding processing that appends, outside the small-scanning-angle edge, the same data as at the large-scanning-angle edge, and appends, outside the large-scanning-angle edge, the same data as at the small-scanning-angle edge.
- The information processing device according to any one of claims 1 to 6, further comprising:
a scanning angle information acquisition unit that inputs the acquired catheter image into an angle trained model that, when the catheter image is input, outputs scanning angle information regarding the position of the medical device included in the catheter image, and acquires the output scanning angle information; and
a second position information output unit that outputs second position information regarding the position of the medical device included in the catheter image, based on the first position information output from the medical device trained model and the scanning angle information output from the angle trained model.
- The information processing device according to claim 7, wherein the angle trained model:
receives the catheter image as an RT format image in which a plurality of scan line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle;
includes a plurality of second convolution layers; and
at least one of the plurality of second convolution layers has been trained with padding processing that appends, outside the small-scanning-angle edge, the same data as at the large-scanning-angle edge, and appends, outside the large-scanning-angle edge, the same data as at the small-scanning-angle edge.
- The information processing device according to any one of claims 1 to 8, wherein the medical device trained model is generated using a plurality of sets of training data in which the catheter image and the position of the medical device included in the catheter image are recorded in association with each other.
- The information processing device according to claim 9, wherein the training data is generated by processing of:
displaying the catheter image obtained by the image acquisition catheter;
receiving the position of the medical device included in the catheter image by a single click operation or a single tap operation on the catheter image; and
storing the catheter image and the position of the medical device in association with each other.
- The information processing device according to claim 9, wherein the training data is generated by processing of:
inputting the catheter image into the medical device trained model;
displaying the first position information output from the medical device trained model superimposed on the input catheter image;
when a correction instruction regarding the position of the medical device included in the catheter image is not received, storing, as the training data, uncorrected data that associates the catheter image with the first position information; and
when a correction instruction regarding the position of the medical device included in the catheter image is received, storing, as the training data, corrected data that associates the catheter image with information regarding the position of the medical device based on the correction instruction.
- A trained model generation method comprising:
acquiring a plurality of sets of training data in which a catheter image obtained by an image acquisition catheter and first position information regarding the position of a medical device included in the catheter image are recorded in association with each other; and
generating, based on the plurality of sets of training data, a trained model that outputs first position information regarding the position of a medical device included in the catheter image when the catheter image is input.
- The trained model generation method according to claim 12, wherein the first position information is information regarding the position of one pixel included in the catheter image.
- A training data generation method causing a computer to execute processing of:
displaying a catheter image including a lumen obtained by an image acquisition catheter;
receiving, by a single click operation or a single tap operation on the catheter image, first position information regarding the position of a medical device inserted into the lumen included in the catheter image; and
storing training data that associates the catheter image with the first position information.
- The training data generation method according to claim 14, wherein the first position information is information regarding the position of one pixel included in the catheter image.
- The training data generation method according to claim 14 or 15, wherein, when the first position information is received for the catheter image, another catheter image obtained successively in time series is displayed.
- The training data generation method according to any one of claims 14 to 16, wherein:
the image acquisition catheter is a radial scanning type tomographic image acquisition catheter;
the catheter image is displayed as two images side by side, an RT format image in which a plurality of scan line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle, and an XY format image in which data based on the scan line data are arranged radially around the image acquisition catheter; and
the first position information can be received from either the RT format image or the XY format image.
- A training data generation method causing a computer to execute processing of:
inputting a catheter image obtained by an image acquisition catheter into a medical device trained model that, when the catheter image is input, outputs first position information regarding the position of a medical device included in the catheter image;
displaying the first position information output from the medical device trained model superimposed on the input catheter image;
when a correction instruction regarding the position of the medical device included in the catheter image is not received, storing, as training data, uncorrected data that associates the catheter image with the first position information; and
when a correction instruction regarding the position of the medical device included in the catheter image is received, storing, as the training data, corrected data that associates the catheter image with the received information regarding the position of the medical device.
- The training data generation method according to claim 18, wherein the uncorrected data and the corrected data are data regarding the position of one pixel included in the catheter image.
- The training data generation method according to claim 18 or 19, wherein a plurality of the catheter images obtained in time series are sequentially input to the medical device trained model, and each output position is superimposed on the corresponding input catheter image and displayed in order.
- The training data generation method according to any one of claims 18 to 20, wherein the position of the medical device is received by a single click operation or a single tap operation.
- The training data generation method according to any one of claims 18 to 21, wherein:
the image acquisition catheter is a radial scanning type tomographic image acquisition catheter;
the catheter image is displayed as two images side by side, an RT format image in which a plurality of scan line data acquired from the image acquisition catheter are arranged in parallel in order of scanning angle, and an XY format image in which data based on the scan line data are arranged radially around the image acquisition catheter; and
the position of the medical device can be received from either the RT format image or the XY format image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022554019A JPWO2022071326A1 (en) | 2020-09-29 | 2021-09-28 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020163911 | 2020-09-29 | ||
JP2020-163911 | 2020-09-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022071326A1 (en) | 2022-04-07 |
Family
ID=80950418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/035668 WO2022071326A1 (en) | 2020-09-29 | 2021-09-28 | Information processing device, learned model generation method and training data generation method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2022071326A1 (en) |
WO (1) | WO2022071326A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024004597A1 (en) * | 2022-06-29 | 2024-01-04 | 富士フイルム株式会社 | Learning device, trained model, medical diagnosis device, endoscopic ultrasonography device, learning method, and program |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010075616A (en) * | 2008-09-29 | 2010-04-08 | Yamaguchi Univ | Discrimination of nature of tissue using sparse coding method |
JP2017503548A (en) * | 2013-12-20 | 2017-02-02 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Automatic ultrasonic beam steering and needle artifact suppression |
JP2020081866A (en) * | 2018-11-15 | 2020-06-04 | ゼネラル・エレクトリック・カンパニイ | Deep learning for arterial analysis and assessment |
- 2021-09-28 WO PCT/JP2021/035668 patent/WO2022071326A1/en active Application Filing
- 2021-09-28 JP JP2022554019A patent/JPWO2022071326A1/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
JPWO2022071326A1 (en) | 2022-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8538105B2 (en) | Medical image processing apparatus, method, and program | |
US7912270B2 (en) | Method and system for creating and using an impact atlas | |
CN107809955B (en) | Real-time collimation and ROI-filter localization in X-ray imaging via automatic detection of landmarks of interest | |
JP2022509316A (en) | Methods, devices and systems for planning intracavitary probe procedures | |
US20240013514A1 (en) | Information processing device, information processing method, and program | |
WO2022071326A1 (en) | Information processing device, learned model generation method and training data generation method | |
JP7489882B2 (en) | Computer program, image processing method and image processing device | |
WO2021193019A1 (en) | Program, information processing method, information processing device, and model generation method | |
WO2022071328A1 (en) | Information processing device, information processing method, and program | |
US20230133103A1 (en) | Learning model generation method, image processing apparatus, program, and training data generation method | |
WO2022071325A1 (en) | Information processing device, information processing method, program, and trained model generation method | |
JP2008086658A (en) | Image display device, and image display program | |
WO2021193018A1 (en) | Program, information processing method, information processing device, and model generation method | |
WO2021193024A1 (en) | Program, information processing method, information processing device and model generating method | |
CN116744855A (en) | Identification of anatomical scan window, probe orientation and/or patient position based on ultrasound images | |
WO2021199962A1 (en) | Program, information processing method, and information processing device | |
WO2021199967A1 (en) | Program, information processing method, learning model generation method, learning model relearning method, and information processing system | |
JP7421548B2 (en) | Diagnostic support device and diagnostic support system | |
US20240221366A1 (en) | Learning model generation method, image processing apparatus, information processing apparatus, training data generation method, and image processing method | |
WO2021200985A1 (en) | Program, information processing method, information processing system, and method for generating learning model | |
WO2021199966A1 (en) | Program, information processing method, training model generation method, retraining method for training model, and information processing system | |
WO2024071322A1 (en) | Information processing method, learning model generation method, computer program, and information processing device | |
CN115089294B (en) | Interventional operation navigation method | |
JP2023148901A (en) | Information processing method, program and information processing device | |
JP7480010B2 (en) | Information processing device, program, and information processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21875627; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2022554019; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 21875627; Country of ref document: EP; Kind code of ref document: A1 |