WO2022202200A1 - Image processing device, image processing system, image display method, and image processing program


Info

Publication number: WO2022202200A1
Authority: WO (WIPO (PCT))
Application number: PCT/JP2022/009239
Other languages: French (fr), Japanese (ja)
Inventor
泰一 坂本
克彦 清水
弘之 石原
俊祐 吉澤
クレモン ジャケ
ステフェン チェン
トマ エン
亮介 佐賀
Original Assignee: テルモ株式会社 (Terumo Corporation), 株式会社ロッケン (Rokken Inc.)
Application filed by テルモ株式会社 (Terumo Corporation) and 株式会社ロッケン (Rokken Inc.)
Priority to JP2023508893A (national phase publication JPWO2022202200A1)
Publication of WO2022202200A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters

Definitions

  • the present disclosure relates to an image processing device, an image processing system, an image display method, and an image processing program.
  • Patent Documents 1 to 3 describe techniques for generating three-dimensional images of heart chambers or blood vessels using a US imaging system.
  • US is an abbreviation for ultrasound.
  • IVUS is an abbreviation for intravascular ultrasound.
  • IVUS is a device or method that provides two-dimensional images in a plane perpendicular to the longitudinal axis of the catheter.
  • It is conceivable to automatically generate, from two-dimensional IVUS images, a three-dimensional image expressing the structure of a living tissue such as a heart chamber or a blood vessel, and to display the generated three-dimensional image to the operator. If the generated three-dimensional image is displayed as it is, the operator can see only the outer wall of the tissue. It is therefore conceivable to cut away part of the structure of the living tissue in the three-dimensional image so that the lumen can be seen. If a catheter other than the IVUS catheter, such as an ablation catheter or a catheter for atrial septal puncture, is inserted into the living tissue, it is conceivable to additionally display a three-dimensional image representing that other catheter.
  • In current ablation procedures, a 3D mapping system, in which a position sensor is mounted on a catheter and a three-dimensional image is drawn from the position information obtained when the sensor touches the myocardial tissue, is mainly used. Such a procedure is very time-consuming because the catheter must be brought into full contact with the myocardial tissue surface throughout the heart chamber. Circumferential isolation of the PV or SVC requires marking the sites to be ablated; if IVUS could be used to complete such an operation, the procedure time might be reduced.
  • PV is an abbreviation for pulmonary vein.
  • SVC is an abbreviation for superior vena cava. It is also conceivable to mark at least one location, such as an ablated location of the living tissue, and to display a three-dimensional image expressing the mark.
  • An object of the present disclosure is to make it possible to confirm the position of an object located in the lumen of a living tissue, or of a mark associated with the living tissue, even when the object or the mark is behind or inside a portion of the living tissue.
  • An image processing device as one aspect of the present disclosure displays, on a display, three-dimensional data as a three-dimensional image, the three-dimensional data including first data representing a living tissue and either second data representing an object located in a lumen of the living tissue or third data representing a mark associated with the living tissue. The device comprises a control unit that refers to the three-dimensional data to identify the positional relationship between the living tissue and the object or the mark, determines, according to the identified positional relationship, whether there is an intervening tissue, that is, a portion of the living tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed, and, when determining that the intervening tissue exists, performs control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
  • In one embodiment, the control unit performs control to display the image representing the object or the mark on the surface of the intervening tissue in the three-dimensional image when it is determined that the intervening tissue exists.
  • In one embodiment, the control unit applies, as the image representing the object or the mark, a texture to the surface of the intervening tissue that differs from the texture applied to the surface of the portion of the living tissue adjacent to the intervening tissue in the three-dimensional image.
  • In one embodiment, the control unit performs control to display the image representing the object or the mark on the cross section of the intervening tissue in the three-dimensional image when it is determined that the intervening tissue exists.
  • In one embodiment, the control unit applies, as the image representing the object or the mark, a texture to the cross section of the intervening tissue that differs from the texture applied to the cross section of the portion of the living tissue adjacent to the intervening tissue in the three-dimensional image.
  • In one embodiment, when the control unit determines that the intervening tissue exists, it performs control to display the image representing the object or the mark with the intervening tissue rendered transparent in the three-dimensional image.
  • In one embodiment, the control unit arranges, as the image representing the object or the mark, a three-dimensional image representing the object or the mark on the side of the intervening tissue opposite to the viewpoint in the three-dimensional image.
  • In one embodiment, the three-dimensional data includes, as the first data and the second data, data constructed based on data obtained by a sensor that is inserted into the lumen of the living tissue and observes the living tissue and the object, and the object is a catheter inserted into the lumen of the living tissue.
  • In one embodiment, the first data is constructed based on data obtained by a sensor that is inserted into the lumen of the living tissue and observes the living tissue, and is updated each time new data is obtained by the sensor.
  • In one embodiment, the control unit acquires designation data that designates the mark, constructs the third data based on the acquired designation data, and includes the third data in the three-dimensional data.
  • An image processing system as one aspect of the present disclosure includes the image processing device and a sensor that observes the living tissue and the object.
  • In one embodiment, the image processing system further includes the display.
  • An image display method as one aspect of the present disclosure displays, on a display, three-dimensional data as a three-dimensional image, the three-dimensional data including first data representing a living tissue and either second data representing an object located in a lumen of the living tissue or third data representing a mark associated with the living tissue. In the method, a computer refers to the three-dimensional data to identify the positional relationship between the living tissue and the object or the mark, determines, according to the identified positional relationship, whether there is an intervening tissue, that is, a portion of the living tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed, and, when determining that the intervening tissue exists, performs control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
  • An image processing program as one aspect of the present disclosure causes a computer that displays, on a display, three-dimensional data as a three-dimensional image, the three-dimensional data including first data representing a living tissue and either second data representing an object located in a lumen of the living tissue or third data representing a mark associated with the living tissue, to execute: a process of referring to the three-dimensional data to identify the positional relationship between the living tissue and the object or the mark; a process of determining, according to the identified positional relationship, whether there is an intervening tissue, that is, a portion of the living tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed; and a process of performing control, when it is determined that the intervening tissue exists, to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
  • FIG. 1 is a perspective view of an image processing system according to an embodiment of the present disclosure.
  • FIG. 2 is a cross-sectional view showing an example in which an object exists behind an intervening tissue.
  • FIG. 3 is a block diagram showing the configuration of an image processing device according to an embodiment of the present disclosure.
  • FIG. 4 is a perspective view of a probe and drive unit according to an embodiment of the present disclosure.
  • FIG. 5 is a flow chart showing the operation of the image processing system according to the embodiment of the present disclosure.
  • FIG. 6 is a flow chart showing the operation of the image processing system according to the embodiment of the present disclosure.
  • FIG. 7 is a cross-sectional view showing an example of the positional relationship between a living tissue, an object, and a viewpoint.
  • FIG. 8 is a diagram showing an example screen of a display according to an embodiment of the present disclosure.
  • FIG. 9 is a diagram showing an example screen of a display according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram showing an example of the positional relationship between a living tissue, a mark, and a viewpoint.
  • FIG. 11 is a schematic diagram showing another example of the positional relationship between a living tissue, a mark, and a viewpoint.
  • FIG. 12 is a schematic diagram showing an example of displaying an image representing a mark on the surface of the intervening tissue in the example of FIG. 11.
  • An outline of the present embodiment will be described with reference to FIGS. 1 to 3.
  • The image processing device 11 is a computer that causes the display 16 to display, as a three-dimensional image 53, three-dimensional data 52 including first data representing the living tissue 60 and second data representing an object located in the lumen 61 of the living tissue 60.
  • the image processing device 11 refers to the three-dimensional data 52 to identify the positional relationship between the living tissue 60 and the object.
  • According to the identified positional relationship, the image processing device 11 determines whether there is an intervening tissue, that is, a portion of the living tissue 60 interposed between the object and the viewpoint set when the three-dimensional image 53 is displayed.
  • When the image processing device 11 determines that an intervening tissue exists, it performs control to display an image representing the object at a position corresponding to the intervening tissue in the three-dimensional image 53. Therefore, according to this embodiment, the position of the object can be confirmed even when an intervening tissue exists.
  • the biological tissue 60 includes, for example, blood vessels or organs such as the heart.
  • the biological tissue 60 is not limited to an anatomical single organ or a part thereof, but also includes a tissue that straddles a plurality of organs and has a lumen.
  • a specific example of such tissue is a portion of the vascular system extending from the upper portion of the inferior vena cava through the right atrium to the lower portion of the superior vena cava.
  • the living tissue 60 is the right atrium.
  • the portion of the right atrium adjacent to the fossa ovalis 65 is raised inward to form a ridge 64 .
  • a catheter 63 such as an ablation catheter or a catheter for atrial septal puncture is inserted into the right atrium.
  • an image expressing the structure of the living tissue 60 is automatically generated as the three-dimensional image 53 and the generated image is displayed to the operator.
  • a part of the structure of the living tissue 60 is cut off in the generated image so that the lumen 61 can be seen.
  • an image representing catheter 63 is also displayed.
  • the catheter 63 is behind the ridge 64 and is not visible to the operator depending on the direction in which the lumen 61 is viewed.
  • an image representing the catheter 63 is displayed on the surface of the ridge 64 in this embodiment. That is, at least the portion of the ridge 64 hiding the catheter 63 appears transparent, allowing the catheter 63 to be seen through that portion. Therefore, the operator can smoothly perform an operation such as ablation or atrial septal puncture.
  • the portion of the ridge 64 that hides the catheter 63 corresponds to the "intervening tissue".
  • This embodiment can be used not only when the catheter 63 is behind the ridge 64, but also whenever the catheter 63 is behind or inside any tissue, for example when the fossa ovalis 65 is tented during an atrial septal puncture operation and the catheter 63 is embedded in the tissue.
  • the catheter 63 corresponds to the "object".
  • the object is not limited to the catheter 63, but may be another object located in the lumen 61 of the biological tissue 60, such as a stent.
  • The X direction and the Y direction perpendicular to the X direction each correspond to a lateral direction of the lumen 61 of the living tissue 60.
  • a Z direction perpendicular to the X and Y directions corresponds to the longitudinal direction of the lumen 61 of the biological tissue 60 .
  • the image processing system 10 includes an image processing device 11, a cable 12, a drive unit 13, a keyboard 14, a mouse 15, and a display 16.
  • the image processing apparatus 11 is a dedicated computer specialized for image diagnosis in this embodiment, but may be a general-purpose computer such as a PC. "PC” is an abbreviation for personal computer.
  • the cable 12 is used to connect the image processing device 11 and the drive unit 13.
  • The drive unit 13 is a device that is used by being connected to the probe 20 shown in FIG. 4.
  • the drive unit 13 is also called MDU.
  • MDU is an abbreviation for motor drive unit.
  • The probe 20 is used for IVUS. The probe 20 is also referred to as an IVUS catheter or a diagnostic imaging catheter.
  • the keyboard 14, mouse 15, and display 16 are connected to the image processing device 11 via any cable or wirelessly.
  • the display 16 is, for example, an LCD, organic EL display, or HMD.
  • LCD is an abbreviation for liquid crystal display.
  • EL is an abbreviation for electro luminescence.
  • HMD is an abbreviation for head-mounted display.
  • the image processing system 10 further comprises a connection terminal 17 and a cart unit 18 as options.
  • connection terminal 17 is used to connect the image processing device 11 and an external device.
  • the connection terminal 17 is, for example, a USB terminal.
  • USB is an abbreviation for Universal Serial Bus.
  • the external device is, for example, a recording medium such as a magnetic disk drive, a magneto-optical disk drive, or an optical disk drive.
  • the cart unit 18 is a cart with casters for movement.
  • An image processing device 11 , a cable 12 and a drive unit 13 are installed in the cart body of the cart unit 18 .
  • a keyboard 14 , a mouse 15 and a display 16 are installed on the top table of the cart unit 18 .
  • the probe 20 includes a drive shaft 21, a hub 22, a sheath 23, an outer tube 24, an ultrasonic transducer 25, and a relay connector 26.
  • The drive shaft 21 passes through the sheath 23, which is inserted into the body cavity of a living body, and the outer tube 24, which is connected to the proximal end of the sheath 23, and extends to the inside of the hub 22 provided at the proximal end of the probe 20.
  • The drive shaft 21 has an ultrasonic transducer 25 for transmitting and receiving signals at its tip, and is rotatably provided within the sheath 23 and the outer tube 24.
  • a relay connector 26 connects the sheath 23 and the outer tube 24 .
  • the hub 22, the drive shaft 21, and the ultrasonic transducer 25 are connected to each other so as to integrally move back and forth in the axial direction. Therefore, for example, when the hub 22 is pushed toward the distal side, the drive shaft 21 and the ultrasonic transducer 25 move inside the sheath 23 toward the distal side. For example, when the hub 22 is pulled proximally, the drive shaft 21 and the ultrasonic transducer 25 move proximally inside the sheath 23 as indicated by the arrows.
  • the drive unit 13 includes a scanner unit 31, a slide unit 32, and a bottom cover 33.
  • the scanner unit 31 is connected to the image processing device 11 via the cable 12 .
  • the scanner unit 31 includes a probe connection section 34 that connects to the probe 20 and a scanner motor 35 that is a drive source that rotates the drive shaft 21 .
  • the probe connecting portion 34 is detachably connected to the probe 20 through an insertion port 36 of the hub 22 provided at the proximal end of the probe 20 .
  • the proximal end of the drive shaft 21 is rotatably supported, and the rotational force of the scanner motor 35 is transmitted to the drive shaft 21 .
  • Signals are also transmitted and received between the drive shaft 21 and the image processing device 11 via the cable 12 .
  • the image processing device 11 generates a tomographic image of the body lumen and performs image processing based on the signal transmitted from the drive shaft 21 .
  • the slide unit 32 mounts the scanner unit 31 so as to move back and forth, and is mechanically and electrically connected to the scanner unit 31 .
  • the slide unit 32 includes a probe clamp section 37 , a slide motor 38 and a switch group 39 .
  • the probe clamping part 37 is arranged coaxially with the probe connecting part 34 on the tip side of the probe connecting part 34 and supports the probe 20 connected to the probe connecting part 34 .
  • the slide motor 38 is a driving source that generates axial driving force.
  • the scanner unit 31 advances and retreats by driving the slide motor 38, and the drive shaft 21 advances and retreats in the axial direction accordingly.
  • the slide motor 38 is, for example, a servomotor.
  • the switch group 39 includes, for example, a forward switch and a pullback switch that are pressed when moving the scanner unit 31 back and forth, and a scan switch that is pressed when image rendering is started and ended.
  • Various switches are included in the switch group 39 as needed, without being limited to the example here.
  • When the scan switch is pressed, image rendering is started, the scanner motor 35 is driven, and the slide motor 38 is driven to move the scanner unit 31 backward.
  • a user such as an operator connects the probe 20 to the scanner unit 31 in advance, and causes the drive shaft 21 to rotate and move to the proximal end side in the axial direction when image rendering is started.
  • the scanner motor 35 and the slide motor 38 are stopped when the scan switch is pressed again, and image rendering is completed.
  • the bottom cover 33 covers the bottom surface of the slide unit 32 and the entire circumference of the side surface on the bottom surface side, and can move toward and away from the bottom surface of the slide unit 32 .
  • the image processing device 11 includes a control section 41 , a storage section 42 , a communication section 43 , an input section 44 and an output section 45 .
  • the control unit 41 includes at least one processor, at least one programmable circuit, at least one dedicated circuit, or any combination thereof.
  • a processor may be a general-purpose processor such as a CPU or GPU, or a dedicated processor specialized for a particular process.
  • CPU is an abbreviation for central processing unit.
  • GPU is an abbreviation for graphics processing unit.
  • a programmable circuit is, for example, an FPGA.
  • FPGA is an abbreviation for field-programmable gate array.
  • a dedicated circuit is, for example, an ASIC.
  • ASIC is an abbreviation for application specific integrated circuit.
  • the control unit 41 executes processing related to the operation of the image processing device 11 while controlling each unit of the image processing system 10 including the image processing device 11 .
  • the storage unit 42 includes at least one semiconductor memory, at least one magnetic memory, at least one optical memory, or any combination thereof.
  • a semiconductor memory is, for example, a RAM or a ROM.
  • RAM is an abbreviation for random access memory.
  • ROM is an abbreviation for read only memory.
  • RAM is, for example, SRAM or DRAM.
  • SRAM is an abbreviation for static random access memory.
  • DRAM is an abbreviation for dynamic random access memory.
  • ROM is, for example, EEPROM.
  • EEPROM is an abbreviation for electrically erasable programmable read only memory.
  • the storage unit 42 functions, for example, as a main memory device, an auxiliary memory device, or a cache memory.
  • The storage unit 42 stores data used for the operation of the image processing device 11, such as the tomographic data 51, and data obtained by the operation of the image processing device 11, such as the three-dimensional data 52 and the three-dimensional image 53.
  • the communication unit 43 includes at least one communication interface.
  • the communication interface is, for example, a wired LAN interface, a wireless LAN interface, or an image diagnosis interface that receives and A/D converts IVUS signals.
  • LAN is an abbreviation for local area network.
  • A/D is an abbreviation for analog to digital.
  • the communication unit 43 receives data used for the operation of the image processing device 11 and transmits data obtained by the operation of the image processing device 11 .
  • the drive unit 13 is connected to an image diagnosis interface included in the communication section 43 .
  • the input unit 44 includes at least one input interface.
  • the input interface is, for example, a USB interface, an HDMI (registered trademark) interface, or an interface compatible with a short-range wireless communication standard such as Bluetooth (registered trademark).
  • the output unit 45 includes at least one output interface.
  • the output interface is, for example, a USB interface, an HDMI (registered trademark) interface, or an interface compatible with a short-range wireless communication standard such as Bluetooth (registered trademark).
  • the output unit 45 outputs data obtained by the operation of the image processing device 11 .
  • the display 16 is connected to a USB interface or HDMI (registered trademark) interface included in the output unit 45 .
  • the functions of the image processing device 11 are realized by executing the image processing program according to the present embodiment with a processor as the control unit 41 . That is, the functions of the image processing device 11 are realized by software.
  • the image processing program causes the computer to function as the image processing device 11 by causing the computer to execute the operation of the image processing device 11 . That is, the computer functions as the image processing device 11 by executing the operation of the image processing device 11 according to the image processing program.
  • the program can be stored on a non-transitory computer-readable medium.
  • a non-transitory computer-readable medium is, for example, a flash memory, a magnetic recording device, an optical disk, a magneto-optical recording medium, or a ROM.
  • Program distribution is performed, for example, by selling, assigning, or lending a portable medium such as an SD card, DVD, or CD-ROM storing the program.
  • SD is an abbreviation for Secure Digital.
  • DVD is an abbreviation for digital versatile disc.
  • CD-ROM is an abbreviation for compact disc read only memory.
  • the program may be distributed by storing the program in the storage of the server and transferring the program from the server to another computer.
  • a program may be provided as a program product.
  • A computer, for example, temporarily stores, in its main storage device, a program stored in a portable medium or a program transferred from a server. Then, the computer reads the program stored in the main storage device with the processor, and executes processing according to the read program with the processor.
  • the computer may read the program directly from the portable medium and execute processing according to the program.
  • the computer may execute processing according to the received program every time the program is transferred from the server to the computer.
  • the processing may be executed by a so-called ASP type service that realizes the function only by executing the execution instruction and obtaining the result without transferring the program from the server to the computer.
  • "ASP" is an abbreviation for application service provider.
  • The "program" also includes information that is used for processing by a computer and is equivalent to a program. For example, data that is not a direct instruction to a computer but has the property of prescribing the processing of the computer corresponds to "something equivalent to a program."
  • a part or all of the functions of the image processing device 11 may be realized by a programmable circuit or a dedicated circuit as the control unit 41. That is, part or all of the functions of the image processing device 11 may be realized by hardware.
  • The operation of the image processing system 10 according to this embodiment will be described with reference to FIG. 5.
  • the operation of the image processing system 10 corresponds to the image display method according to this embodiment.
  • The probe 20 is primed by the user before the flow of FIG. 5 starts. After that, the probe 20 is fitted into the probe connection portion 34 and the probe clamp portion 37 of the drive unit 13, and is connected and fixed to the drive unit 13. Then, the probe 20 is inserted to a target site in a living tissue 60 such as a blood vessel or the heart.
  • In step S101, the scan switch included in the switch group 39 is pressed, and the pullback switch included in the switch group 39 is further pressed, so that a so-called pullback operation is performed.
  • The probe 20 transmits ultrasonic waves inside the living tissue 60 by means of the ultrasonic transducer 25, which is retracted in the axial direction by the pullback operation.
  • the ultrasonic transducer 25 radially transmits ultrasonic waves while moving inside the living tissue 60 .
  • the ultrasonic transducer 25 receives reflected waves of the transmitted ultrasonic waves.
  • the probe 20 inputs the signal of the reflected wave received by the ultrasonic transducer 25 to the image processing device 11 .
  • the control unit 41 of the image processing apparatus 11 processes the input signal to sequentially generate cross-sectional images of the biological tissue 60, thereby acquiring tomographic data 51 including a plurality of cross-sectional images.
  • The probe 20 rotates the ultrasonic transducer 25 in the circumferential direction while moving it in the axial direction inside the living tissue 60, and the ultrasonic transducer 25 transmits ultrasonic waves outward from the center of rotation.
  • the probe 20 receives reflected waves from reflecting objects present in each of a plurality of directions inside the living tissue 60 by the ultrasonic transducer 25 .
  • the probe 20 transmits the received reflected wave signal to the image processing device 11 via the drive unit 13 and the cable 12 .
  • the communication unit 43 of the image processing device 11 receives the signal transmitted from the probe 20 .
  • the communication unit 43 A/D converts the received signal.
  • the communication unit 43 inputs the A/D converted signal to the control unit 41 .
  • the control unit 41 processes the input signal and calculates the intensity value distribution of the reflected waves from the reflectors present in the transmission direction of the ultrasonic waves from the ultrasonic transducer 25 .
  • the control unit 41 sequentially generates two-dimensional images having a luminance value distribution corresponding to the calculated intensity value distribution as cross-sectional images of the biological tissue 60, thereby acquiring tomographic data 51, which is a data set of cross-sectional images.
  • the control unit 41 causes the storage unit 42 to store the obtained tomographic data 51 .
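  • As an illustration of how such a cross-sectional image can be assembled from the radial intensity profiles, the following is a minimal sketch, not the patented implementation itself; the function name, the square image size, and the nearest-neighbour sampling are assumptions.

```python
import numpy as np

def scan_lines_to_cross_section(scan_lines: np.ndarray, size: int = 512) -> np.ndarray:
    """Map one rotation of radial intensity profiles (n_angles x n_samples)
    onto a Cartesian cross-sectional image, as in step S101."""
    n_angles, n_samples = scan_lines.shape
    image = np.zeros((size, size), dtype=scan_lines.dtype)
    center = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    dx, dy = xs - center, ys - center
    radius = np.sqrt(dx * dx + dy * dy)             # distance from the transducer
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)   # transmission direction
    r_idx = np.minimum((radius / center * (n_samples - 1)).astype(int), n_samples - 1)
    a_idx = (theta / (2 * np.pi) * n_angles).astype(int) % n_angles
    inside = radius <= center                       # ignore corners outside the scan
    image[inside] = scan_lines[a_idx[inside], r_idx[inside]]
    return image
```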
  • The signal of the reflected wave received by the ultrasonic transducer 25 corresponds to the raw data of the tomographic data 51, and the cross-sectional images generated by the image processing device 11 processing that signal correspond to the processed data of the tomographic data 51.
  • the control unit 41 of the image processing device 11 may store the signal input from the probe 20 as the tomographic data 51 in the storage unit 42 as it is.
  • the control unit 41 may store, as the tomographic data 51 , data indicating the intensity value distribution of the reflected wave calculated by processing the signal input from the probe 20 in the storage unit 42 .
  • the tomographic data 51 is not limited to a data set of cross-sectional images of the living tissue 60, and may be data representing cross-sections of the living tissue 60 at each movement position of the ultrasonic transducer 25 in some format.
  • Instead of the ultrasonic transducer 25, which transmits ultrasonic waves in multiple directions while rotating in the circumferential direction, an ultrasonic transducer that transmits ultrasonic waves in multiple directions without rotating may be used.
  • the tomographic data 51 may be acquired using OFDI or OCT instead of being acquired using IVUS.
  • OFDI is an abbreviation for optical frequency domain imaging.
  • OCT is an abbreviation for optical coherence tomography.
  • Instead of the image processing device 11 generating the data set of cross-sectional images of the living tissue 60, another device may generate a similar data set, and the image processing device 11 may acquire the data set from that other device. That is, instead of the control unit 41 of the image processing device 11 processing the IVUS signal to generate cross-sectional images of the living tissue 60, another device may process the IVUS signal to generate the cross-sectional images.
  • In step S102, the control unit 41 of the image processing device 11 generates three-dimensional data 52 of the living tissue 60 based on the tomographic data 51 acquired in step S101. That is, the control unit 41 generates the three-dimensional data 52 based on the tomographic data 51 acquired by the sensor.
  • If the three-dimensional data 52 has already been generated, it is preferable to update only the data at the location corresponding to the updated tomographic data 51 instead of regenerating all of the three-dimensional data 52 from scratch, as sketched below. In that case, the amount of data processing when generating the three-dimensional data 52 can be reduced, and the real-time performance of the three-dimensional image 53 in the subsequent step S103 can be improved.
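  • A minimal sketch of this incremental update, assuming the three-dimensional data 52 is held as a voxel stack with one z-slice per cross-sectional image (the names and the data layout are assumptions):

```python
import numpy as np

def update_volume(volume: np.ndarray, new_slices: dict[int, np.ndarray]) -> np.ndarray:
    """Overwrite only the z-slices whose tomographic data 51 changed,
    instead of regenerating the whole voxel stack from scratch."""
    for z, cross_section in new_slices.items():
        volume[z] = cross_section  # one updated cross-sectional image per slice
    return volume
```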
  • Specifically, the control unit 41 of the image processing device 11 stacks the cross-sectional images of the living tissue 60 included in the tomographic data 51 stored in the storage unit 42, thereby three-dimensionalizing the living tissue 60 and generating the three-dimensional data 52.
  • As the method of three-dimensionalization, any rendering method such as surface rendering or volume rendering can be used, together with associated processing such as texture mapping (including environment mapping) or bump mapping.
  • the control unit 41 causes the storage unit 42 to store the generated three-dimensional data 52 .
  • If the catheter 63 is inserted into the living tissue 60, the tomographic data 51 includes data of the catheter 63 as well as data of the living tissue 60. Therefore, in step S102, the three-dimensional data 52 generated by the control unit 41 likewise includes the data of the catheter 63 as the second data, alongside the data of the living tissue 60 as the first data.
  • Details of the processing performed in step S102 when the catheter 63 is inserted into the living tissue 60 will be described with reference to FIG. 6.
  • the control unit 41 of the image processing apparatus 11 classifies the pixel groups of the cross-sectional image included in the tomographic data 51 acquired in step S101 into two or more classes.
  • These two or more classes include at least a "living tissue" class and a "catheter" class; a "blood cell" class, a "medical device" class for devices other than catheters such as guidewires, an "indwelling object" class for objects such as stents, or a "lesion" class for lesions such as calcification or plaque may also be included.
  • Any method may be used as the classification method, but in this embodiment, a method of classifying pixel groups of cross-sectional images using a trained model is used.
  • the learned model is trained by performing machine learning in advance so that it can detect regions corresponding to each class from a sample IVUS cross-sectional image.
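  • A hedged sketch of this classification step, assuming a trained segmentation model with a Keras-style predict interface that returns per-pixel class probabilities (the class ids and the interface are assumptions, not part of the disclosure):

```python
import numpy as np

# class ids are assumptions; the disclosure only requires "living tissue"
# and "catheter" classes, with the other classes as options
CLASS_TISSUE, CLASS_CATHETER = 1, 2

def classify_pixels(cross_section: np.ndarray, model) -> np.ndarray:
    """Label every pixel of an IVUS cross-sectional image using a trained
    segmentation model (hypothetical Keras-style predict interface)."""
    probabilities = model.predict(cross_section[np.newaxis, ..., np.newaxis])[0]
    return probabilities.argmax(axis=-1)  # (H, W) map of class ids

def class_mask(label_map: np.ndarray, class_id: int) -> np.ndarray:
    """Boolean region for one class, ready to be stacked into a 3D object."""
    return label_map == class_id
```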
  • In step S200a, the control unit 41 of the image processing device 11 builds a three-dimensional object of the living tissue 60 by stacking the regions classified into the "living tissue" class.
  • the control unit 41 reflects the constructed three-dimensional object of the biological tissue 60 in the three-dimensional space.
  • In step S200b, the control unit 41 builds a three-dimensional object of the catheter 63 by stacking the regions classified into the "catheter" class.
  • The catheter 63 is extracted by segmentation in this embodiment, but it may be extracted by another technique such as object detection. For example, a method that extracts only the position of the catheter may be used, and an object constructed at that position may be treated as the object of the catheter 63.
  • the control unit 41 reflects the constructed three-dimensional object of the catheter 63 in the three-dimensional space.
  • In step S200c, the control unit 41 executes the processing of steps S201 to S205 for each tissue voxel, that is, each voxel of the living tissue 60.
  • the control unit 41 may place the virtual camera 71 and the virtual light source 72 as shown in FIG. 7 at arbitrary positions in the three-dimensional space.
  • the position of the camera 71 corresponds to the “viewpoint” when displaying the three-dimensional image 53 on the display 16 .
  • the number and relative positions of the light sources 72 are not limited to those illustrated, and can be changed as appropriate.
  • the three-dimensional object of the living tissue 60 may be cut at any cutting plane.
  • In step S201, the control unit 41 of the image processing device 11 determines whether there is a catheter voxel, that is, a voxel of the catheter 63, on the straight line connecting the viewpoint and the tissue voxel or on its extension. If there is no catheter voxel on that line, in step S202 the control unit 41 applies the first color, the color pre-assigned to the "living tissue" class, as the color of the tissue voxel.
  • If there is a catheter voxel, in step S203 the control unit 41 calculates a first distance, which is the distance between the viewpoint and the tissue voxel, and a second distance, which is the distance between the viewpoint and the catheter voxel. If the first distance is longer than the second distance, that is, if the catheter 63 exists in front of the living tissue 60, in step S204 the control unit 41 applies the second color, the color pre-assigned to the "catheter" class, as the color of the tissue voxel.
  • If the first distance is not longer than the second distance, that is, if the catheter 63 exists behind the living tissue 60, in step S205 the control unit 41 applies, as the color of the tissue voxel, a third color that differs from the first color and the second color, such as a color intermediate between the two. In this embodiment, the control unit 41 applies, as the intermediate color corresponding to the third color, a color containing 70% of the first color and 30% of the second color.
  • the data regarding the coloring of the voxel group performed in the flow of FIG. 6 is stored in the storage unit 42 as part of the three-dimensional data 52.
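  • The per-voxel decision of steps S201 to S205 can be sketched as follows. This is a brute-force illustration with assumed names and an assumed collinearity tolerance; a practical renderer would accelerate the ray test rather than scan every catheter voxel.

```python
import numpy as np

def color_tissue_voxel(viewpoint, tissue_voxel, catheter_voxels,
                       first_color, second_color, tol=0.5):
    """Steps S201-S205 for one tissue voxel: look along the straight line
    from the viewpoint through the voxel and decide which color to apply."""
    viewpoint = np.asarray(viewpoint, dtype=float)
    v = np.asarray(tissue_voxel, dtype=float) - viewpoint
    first_distance = np.linalg.norm(v)            # viewpoint -> tissue voxel
    direction = v / first_distance
    for c in catheter_voxels:
        w = np.asarray(c, dtype=float) - viewpoint
        # the catheter voxel lies on the line if its offset from the ray is tiny
        if np.linalg.norm(w - np.dot(w, direction) * direction) >= tol:
            continue
        second_distance = np.linalg.norm(w)       # viewpoint -> catheter voxel
        if first_distance > second_distance:
            return np.asarray(second_color, float)    # catheter in front (S204)
        # catheter behind the tissue voxel: 70% / 30% intermediate color (S205)
        return 0.7 * np.asarray(first_color, float) + 0.3 * np.asarray(second_color, float)
    return np.asarray(first_color, float)         # no catheter on the line (S202)
```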
  • In step S103, the control unit 41 of the image processing device 11 causes the display 16 to display the three-dimensional data 52 generated in step S102 as the three-dimensional image 53.
  • control unit 41 of the image processing device 11 generates a 3D image 53 from the 3D data 52 stored in the storage unit 42 .
  • the control unit 41 causes the display 16 to display the generated three-dimensional image 53 via the output unit 45 .
  • In step S104, if there is a user operation, the processing from step S105 to step S108 is performed. If there is no user operation, the processing from step S105 to step S108 is skipped.
  • In step S105, the control unit 41 of the image processing device 11 receives an operation for setting the position of the opening 62 as shown in FIG.
  • the position of the opening 62 is set such that the lumen 61 of the living tissue 60 is exposed through the opening 62 in the three-dimensional image 53 displayed in step S103.
  • Specifically, the control unit 41 of the image processing device 11 receives, via the input unit 44, an operation in which the user uses the keyboard 14, the mouse 15, or the touch screen provided integrally with the display 16 to cut off a portion of the living tissue 60 in the three-dimensional image 53 displayed on the display 16.
  • the control unit 41 receives an operation of cutting off a portion of the living tissue 60 so that the cross section of the living tissue 60 has an open shape.
  • The "cross section of the living tissue 60" may be a transverse section of the living tissue 60, a longitudinal section of the living tissue 60, or another cross section of the living tissue 60.
  • the “transverse section of the biological tissue 60 ” is a cross section obtained by cutting the biological tissue 60 perpendicularly to the direction in which the ultrasonic transducer 25 moves in the biological tissue 60 .
  • the “longitudinal section of the biological tissue 60 ” is a cut plane obtained by cutting the biological tissue 60 along the direction in which the ultrasonic transducer 25 moves in the biological tissue 60 .
  • “Another cross section of the biological tissue 60 ” is a cross section obtained by cutting the biological tissue 60 obliquely with respect to the direction in which the ultrasonic transducer 25 moves in the biological tissue 60 .
  • The "open shape" is, for example, a substantially C-shape, a substantially U-shape, a substantially 三-shape, or any of these shapes partially missing due to the presence of a hole originally open in the living tissue 60, such as a blood vessel bifurcation or a pulmonary vein ostium. In the example of FIG. 7, the cross section of the living tissue 60 is substantially C-shaped.
  • In step S106, the control unit 41 of the image processing device 11 determines the position of the opening 62 as the position set by the operation received in step S105.
  • Specifically, the control unit 41 of the image processing device 11 identifies, in the three-dimensional data 52 stored in the storage unit 42, the three-dimensional coordinates of the boundary of the portion of the living tissue 60 cut off by the user's operation as the three-dimensional coordinates of the edge of the opening 62. The control unit 41 causes the storage unit 42 to store the identified three-dimensional coordinates.
  • In step S107, the control unit 41 of the image processing device 11 forms, in the three-dimensional data 52, the opening 62 that exposes the lumen 61 of the living tissue 60 in the three-dimensional image 53.
  • Specifically, the control unit 41 of the image processing device 11 sets the portion of the three-dimensional data 52 stored in the storage unit 42 that is specified by the stored three-dimensional coordinates to be hidden or transparent when the three-dimensional data 52 is displayed on the display 16 as the three-dimensional image 53.
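  • A minimal sketch of step S107, assuming the three-dimensional data 52 carries a per-voxel visibility flag consulted by the renderer (the names and the data layout are assumptions):

```python
import numpy as np

def form_opening(visible: np.ndarray, cut_mask: np.ndarray) -> np.ndarray:
    """Flag the voxels inside the cut-away boundary so the renderer hides
    them (or draws them fully transparent), exposing the lumen 61."""
    visible[cut_mask] = False   # boolean visibility per voxel
    return visible

# usage sketch: cut half of a toy volume open
visible = np.ones((64, 64, 64), dtype=bool)     # all tissue voxels shown
cut_mask = np.zeros_like(visible)
cut_mask[:, 32:, :] = True                      # made-up cut-away region
visible = form_opening(visible, cut_mask)
```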
  • In step S108, the control unit 41 of the image processing device 11 adjusts the viewpoint for displaying the three-dimensional image 53 on the display 16 according to the position of the opening 62 formed in step S107.
  • the control unit 41 arranges the viewpoint on a straight line extending from the inner surface of the living tissue 60 to the outside of the living tissue 60 through the opening 62 . Therefore, the user can look into the interior of the living tissue 60 through the opening 62 and virtually observe the lumen 61 of the living tissue 60 .
  • Specifically, the control unit 41 of the image processing device 11 arranges the virtual camera 71 at a position from which the lumen 61 of the living tissue 60 can be seen through the portion set to be hidden or transparent in the three-dimensional image 53 displayed on the display 16.
  • For example, in the cross section of the living tissue 60, the control unit 41 arranges the virtual camera 71 in an area AF sandwiched between a first straight line L1, which extends from the inner surface of the living tissue 60 through the first edge E1 of the opening 62 to the outside of the living tissue 60, and a second straight line L2, which extends from the inner surface of the living tissue 60 through the second edge E2 of the opening 62 to the outside of the living tissue 60.
  • the point where the first straight line L1 intersects the inner surface of the living tissue 60 is the same point Pt as the point where the second straight line L2 intersects the inner surface of the living tissue 60 . Therefore, the user can observe the point Pt on the inner surface of the living tissue 60 regardless of the position of the virtual camera 71 in the area AF.
  • The point Pt is also the point where a fourth straight line L4 intersects the inner surface of the living tissue 60, the fourth straight line L4 being drawn perpendicular to a third straight line L3 from the midpoint Pc of the third straight line L3, which connects the first edge E1 of the opening 62 and the second edge E2 of the opening 62. Therefore, the user can easily observe the point Pt on the inner surface of the living tissue 60 through the opening 62.
  • placing the virtual camera 71 on the extension of the fourth straight line L4 as shown in FIG. 7 makes it easier for the user to observe the point Pt on the inner surface of the living tissue 60 .
  • The position of the virtual camera 71 may be any position from which the lumen 61 of the living tissue 60 can be observed through the opening 62, but in the present embodiment it is within the range facing the opening 62, and is preferably set at a position facing the central portion of the opening 62.
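  • The geometry of this camera placement can be sketched in the two-dimensional cross section as follows, with made-up edge coordinates; the sign of the normal is an assumption and must be chosen so the camera lands outside the tissue:

```python
import numpy as np

def camera_position_for_opening(e1, e2, distance):
    """Candidate viewpoint on the fourth straight line L4: the perpendicular
    to L3 (the segment E1-E2) through its midpoint Pc."""
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    pc = (e1 + e2) / 2.0                   # midpoint Pc of the third line L3
    t = (e2 - e1) / np.linalg.norm(e2 - e1)
    normal = np.array([-t[1], t[0]])       # L3 rotated by 90 degrees gives L4
    # sign of `normal` must be chosen so the camera is outside the tissue,
    # on the opposite side of the opening from the point Pt
    return pc + distance * normal

# usage sketch with made-up edge coordinates in one cross section
camera_xy = camera_position_for_opening((10.0, 2.0), (14.0, 6.0), 25.0)
```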
  • In step S108, if the catheter 63 is inserted into the living tissue 60, the processing from step S201 to step S205 is executed for each tissue voxel.
  • an intervening portion 66 that is part of the living tissue 60 exists between the camera 71 and the catheter 63 . That is, the catheter 63 exists behind the intervening portion 66 . Therefore, the control unit 41 applies a third color, such as an intermediate color between the first color and the second color, as the color of the voxels corresponding to the intervening portion 66 .
  • the control unit 41 applies the first color as the color of the voxels corresponding to the portion of the living tissue 60 visible from the camera 71 excluding the intervening portion 66 .
  • the cross section 69 faces the camera 71 in the intervening portion 66 .
  • the area of the cross section 69 corresponding to the intervening portion 66 is colored with the third color, and the remaining area is colored with the first color.
  • The control unit 41 of the image processing device 11 may switch the display mode between a first mode, in which the processing from step S201 to step S205 is not executed even when the catheter 63 is inserted into the living tissue 60, and a second mode, in which the processing from step S201 to step S205 is executed.
  • In the first mode, as shown in FIG. 8, even if the catheter 63 is behind the intervening portion 66, the area of the cross section 69 corresponding to the intervening portion 66 is colored with the first color, the same as the rest of the area.
  • the same texture as that applied to the cross section of the portion of the biological tissue 60 adjacent to the intervening portion 66 in the three-dimensional image 53 is applied to the cross section of the intervening portion 66 .
  • In the second mode, the area of the cross section 69 corresponding to the intervening portion 66 is colored with the third color. That is, as the image 67 representing the catheter 63, a texture different from the texture applied to the cross section of the portion of the living tissue 60 adjacent to the intervening portion 66 in the three-dimensional image 53 is applied to the cross section of the intervening portion 66.
  • the switching of the display mode may be performed manually by user operation, or may be performed automatically with an arbitrary event as a trigger.
  • In step S109, if the tomographic data 51 has been updated, the processing of steps S110 and S111 is performed. If the tomographic data 51 has not been updated, the presence or absence of a user operation is checked again in step S104.
  • In step S110, the control unit 41 of the image processing device 11 processes the signal input from the probe 20 to newly generate at least one cross-sectional image of the living tissue 60, similarly to the processing of step S101, thereby acquiring updated tomographic data 51.
  • In step S111, the control unit 41 of the image processing device 11 updates the three-dimensional data 52 of the living tissue 60 based on the tomographic data 51 acquired in step S110. That is, the control unit 41 updates the three-dimensional data 52 based on the tomographic data 51 acquired by the sensor. Then, in step S103, the control unit 41 causes the display 16 to display the three-dimensional data 52 updated in step S111 as the three-dimensional image 53.
  • In step S111, it is preferable to update only the data at the location corresponding to the updated tomographic data 51. In that case, the amount of data processing when updating the three-dimensional data 52 can be reduced, and the real-time performance of the three-dimensional image 53 in step S103 can be improved.
  • In step S111, if the catheter 63 is inserted into the living tissue 60, the processing from step S201 to step S205 is executed for each tissue voxel.
  • In steps S105 to S108 from the second time onward, when changing the position of the opening 62 from a first position to a second position, the control unit 41 of the image processing device 11 moves the viewpoint from a third position corresponding to the first position to a fourth position corresponding to the second position.
  • the control unit 41 moves the virtual light source 72 when displaying the three-dimensional image 53 on the display 16 in accordance with the movement of the viewpoint from the third position to the fourth position.
  • the control unit 41 moves the virtual light source 72 using the rotation matrix used for moving the virtual camera 71 when changing the circumferential position of the opening 62 in the cross section of the living tissue 60 .
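  • A minimal sketch of this shared rotation, assuming the circumferential change of the opening 62 corresponds to a rotation about the longitudinal (Z) axis and using made-up positions:

```python
import numpy as np

def rotate_about_z(point, angle_rad, center):
    """Rotation about the longitudinal (Z) axis; applying the same matrix
    to the camera 71 and the light source 72 keeps their arrangement."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return center + rot @ (np.asarray(point, float) - center)

# made-up positions: camera and light follow the opening by the same angle,
# so the lighting of the lumen stays consistent after the viewpoint moves
lumen_center = np.array([0.0, 0.0, 5.0])
camera_71 = rotate_about_z([10.0, 0.0, 5.0], np.deg2rad(30), lumen_center)
light_72 = rotate_about_z([12.0, 3.0, 5.0], np.deg2rad(30), lumen_center)
```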
  • The control unit 41 may instantaneously switch the viewpoint from the third position to the fourth position. Alternatively, a moving image in which the viewpoint gradually moves from the third position to the fourth position may be displayed on the display 16 as the three-dimensional image 53. In that case, it is easy for the user to notice that the viewpoint has moved.
  • In step S105, the control unit 41 of the image processing device 11 may accept, via the input unit 44, both an operation for setting the position of the opening 62 and an operation for setting the position of the target point that the user wants to see.
  • Specifically, the control unit 41 of the image processing device 11 may accept, via the input unit 44, an operation in which the user uses the keyboard 14, the mouse 15, or the touch screen provided integrally with the display 16 to designate the position of the target point in the three-dimensional image 53 displayed on the display 16.
  • For example, the control unit 41 may accept, via the input unit 44, an operation of setting the position of the point Pt as the position of the point where the first straight line L1 and the second straight line L2 intersect the inner surface of the living tissue 60.
  • Alternatively, in step S105, the control unit 41 of the image processing device 11 may accept, via the input unit 44, an operation for setting the position of the target point that the user wants to see instead of the operation for setting the position of the opening 62. Then, in step S106, the control unit 41 may determine the position of the opening 62 according to the position set by the operation received in step S105.
  • Specifically, the control unit 41 of the image processing device 11 may accept, via the input unit 44, an operation in which the user uses the keyboard 14, the mouse 15, or the touch screen provided integrally with the display 16 to designate the position of the target point in the three-dimensional image 53 displayed on the display 16. In that case, the control unit 41 may determine the position of the opening 62 according to the position of the target point.
  • For example, the control unit 41 may accept, via the input unit 44, an operation of setting the position of the point Pt as the position of the point where the first straight line L1 and the second straight line L2 intersect the inner surface of the living tissue 60.
  • the control unit 41 may determine, as the area AF, a fan-shaped area centered at the point Pt and having a center angle preset or specified by the user in the cross section of the biological tissue 60 .
  • the control unit 41 may determine the position of the living tissue 60 overlapping the area AF as the position of the opening 62 .
  • the control unit 41 may determine a normal line perpendicular to a tangent line passing through the point Pt of the inner surface of the living tissue 60 as the fourth straight line L4.
  • the area AF may be set narrower than the width of the opening 62 . That is, the area AF may be set so as not to include at least one of the first edge E1 of the opening 62 and the second edge E2 of the opening 62 .
  • the point at which the first straight line L1 intersects the inner surface of the living tissue 60 may not be the same as the point at which the second straight line L2 intersects the inner surface of the living tissue 60.
  • For example, a point P1 at which the first straight line L1 intersects the inner surface of the living tissue 60 and a point P2 at which the second straight line L2 intersects the inner surface of the living tissue 60 may lie on the circumference of a circle centered at the point Pt. That is, the points P1 and P2 may be substantially equidistant from the point Pt.
  • As described above, in this embodiment, the control unit 41 of the image processing device 11 causes the display 16 to display, as the three-dimensional image 53, the three-dimensional data 52 including the first data representing the living tissue 60 and the second data representing the object located in the lumen 61 of the living tissue 60.
  • The control unit 41 refers to the three-dimensional data 52 to identify the positional relationship between the living tissue 60 and the object. According to the identified positional relationship, the control unit 41 determines whether there is an intervening tissue, that is, a portion of the living tissue 60 interposed between the object and the viewpoint set when the three-dimensional image 53 is displayed. When determining that an intervening tissue exists, the control unit 41 performs control to display an image representing the object at a position corresponding to the intervening tissue in the three-dimensional image 53. Therefore, according to this embodiment, the position of the object can be confirmed even when an intervening tissue exists.
  • the control unit 41 of the image processing device 11 refers to data obtained by a sensor that observes the living tissue 60 and the object, and identifies the positional relationship between the living tissue 60 and the object.
  • the sensor is not limited to the ultrasonic transducer 25 used for IVUS, and may be any sensor such as a sensor used for OFDI, OCT, CT examination, extracorporeal echo examination, or X-ray examination. "CT” is an abbreviation for computed tomography.
  • the control unit 41 detects, as an intervening tissue, a portion of the living tissue 60 interposed between the object and the viewpoint set when the three-dimensional image 53 is displayed, according to the specified positional relationship.
  • the catheter 63 corresponds to the object
  • the intervening portion 66 corresponds to the intervening tissue.
  • control unit 41 of the image processing device 11 determines that an intervening tissue exists, it performs control to display an image representing the target object in the cross section of the intervening tissue in the three-dimensional image 53 in this embodiment.
  • control may be performed to display an image representing the object on the surface of the intervening tissue in the three-dimensional image 53 .
  • the control unit 41 of the image processing device 11 selects the image 67 representing the object as the image 67 of the biological tissue 60 in the three-dimensional image 53. , applying a different texture to the cross-section of the intervening tissue than the texture applied to the cross-section of the portion adjacent to the intervening tissue. Assuming that the surface of the intervening tissue faces the camera 71 in the three-dimensional space, the control unit 41 selects a portion of the living tissue 60 adjacent to the intervening tissue in the three-dimensional image 53 as an image representing the object. A texture may be applied to the surface of the intervening tissue that is different from the texture applied to the surface of the tissue.
  • the control unit 41 of the image processing device 11 performs control to display an image representing the target object with the intervening tissue transparent in the three-dimensional image 53 when it is determined that the intervening tissue exists. you can go In the example of FIG. 7, since the catheter 63 exists behind the intervening portion 66, the control unit 41 changes the color of the voxels corresponding to the intervening portion 66, but instead changes the transparency of the intervening portion 66. You can raise it.
  • As the image representing the object, the control unit 41 may arrange a three-dimensional image representing the object on the side of the intervening tissue opposite the viewpoint in the three-dimensional image 53.
  • This embodiment may be applied not only to an object such as the catheter 63 as shown in FIG. 7, but also to a mark 73 associated with the living tissue 60 as shown in FIGS. 10 to 12.
  • The control unit 41 of the image processing device 11 causes the display 16 to display, as a three-dimensional image 53, the three-dimensional data 52 including the first data representing the living tissue 60 and the third data representing the mark 73.
  • The control unit 41 refers to the three-dimensional data 52 to identify the positional relationship between the living tissue 60 and the mark 73.
  • According to the identified positional relationship, the control unit 41 determines whether there is an intervening tissue, that is, a portion of the living tissue 60 interposed between the mark 73 and the viewpoint set when the three-dimensional image 53 is displayed. When determining that an intervening tissue exists, the control unit 41 performs control to display an image 74 representing the mark 73 at a position corresponding to the intervening tissue in the three-dimensional image 53. Therefore, even if an intervening tissue exists, the position of the mark 73 can be confirmed.
  • The first data included in the three-dimensional data 52 is constructed based on data obtained by a sensor that is inserted into the lumen 61 of the living tissue 60 and observes the living tissue 60, and is updated each time new data is obtained by the sensor. That is, the first data is configured to be sequentially updated by the sensor.
  • The control unit 41 of the image processing device 11 acquires designation data designating the mark 73.
  • The control unit 41 acquires, as the designation data, data designating the mark 73 by receiving a user operation that designates at least one location in the three-dimensional space as the mark 73.
  • The control unit 41 causes the storage unit 42 to store the acquired designation data.
  • The control unit 41 constructs the third data based on the designation data stored in the storage unit 42.
  • The control unit 41 includes the constructed third data in the three-dimensional data 52 and causes the display 16 to display the three-dimensional data 52 as the three-dimensional image 53. That is, the control unit 41 causes the display 16 to display the third data as part of the three-dimensional image 53.
  • The mark 73 is associated with the living tissue 60 by being attached to at least one location in or around the lumen 61 of the living tissue 60, such as an ablated portion of the living tissue 60, the start and end points for measuring a distance in the three-dimensional space, or the position of a nerve that should not be ablated. Once the mark 73 is attached, the coordinates of the marked location are recorded as a fixed point in the three-dimensional space, even as the probe 20 is moved forward and backward and the three-dimensional data 52 is updated.
  • The living tissue 60 is myocardium.
  • The marks 73 are applied to the surface of the myocardium.
  • The mark 73 may become embedded in the myocardial tissue, as shown in FIG. 11. That is, an intervening portion 66 that is part of the myocardial tissue may exist between the camera 71 and the mark 73. In that case, the mark 73 cannot be confirmed on the three-dimensional image 53. Therefore, when the control unit 41 of the image processing device 11 determines that the intervening portion 66 exists, it performs control to display an image 74 representing the mark 73 at a position corresponding to the intervening portion 66, as shown in FIG. 12.
  • The control unit 41 changes the color of the voxels corresponding to the intervening portion 66 to a third color that is intermediate between the first color, which is the color of the myocardial tissue, and the second color, which is the color of the mark 73, and that differs from both the first color and the second color (a minimal interpolation sketch for producing such a third color follows the reference signs list below).
  • As the image 74 representing the mark 73, the control unit 41 of the image processing device 11 applies, to the surface of the intervening portion 66, a texture different from the texture applied to the surface of the portion of the living tissue 60 adjacent to the intervening portion 66 in the three-dimensional image 53.
  • Alternatively, as the image representing the mark 73, the control unit 41 may apply, to the cross section of the intervening portion 66, a texture different from the texture applied to the cross section of the adjacent portion of the living tissue 60 in the three-dimensional image 53.
  • In this example, the control unit 41 of the image processing device 11 changes the color of the voxels corresponding to the intervening portion 66. When the mark 73 exists behind the intervening portion 66, the control unit 41 may instead increase the transparency of the intervening portion 66. That is, when determining that the intervening portion 66 exists, the control unit 41 may perform control to display an image representing the mark 73 by making the intervening portion 66 transparent in the three-dimensional image 53. In that case, the control unit 41 may arrange, as the image representing the mark 73, a three-dimensional image representing the mark 73 on the side of the intervening tissue opposite the viewpoint in the three-dimensional image 53.
  • Reference signs: 10 image processing system, 11 image processing device, 12 cable, 13 drive unit, 14 keyboard, 15 mouse, 16 display, 17 connection terminal, 18 cart unit, 20 probe, 21 drive shaft, 22 hub, 23 sheath, 24 outer tube, 25 ultrasonic transducer, 26 relay connector, 31 scanner unit, 32 slide unit, 33 bottom cover, 34 probe connection part, 35 scanner motor, 36 insertion port, 37 probe clamp part, 38 slide motor, 39 switch group, 41 control unit, 42 storage unit, 43 communication unit, 44 input unit, 45 output unit, 51 tomographic data, 52 three-dimensional data, 53 three-dimensional image, 60 living tissue, 61 lumen, 62 aperture, 63 catheter, 64 ridge, 65 fossa ovalis, 66 intervening portion, 67 image, 68 surface, 69 cross section, 71 camera, 72 light source, 73 mark, 74 image, 80 screen.
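The intermediate "third color" used when a mark is hidden by an intervening portion can be produced by simple linear interpolation between the tissue color and the mark color. The following is a minimal sketch; the RGB values are illustrative and not specified in the disclosure.

```python
def blend_color(first_rgb, second_rgb, weight=0.5):
    """Return a third color between the tissue color (first) and the mark
    color (second); with 0 < weight < 1 it differs from both endpoints."""
    return tuple((1.0 - weight) * a + weight * b
                 for a, b in zip(first_rgb, second_rgb))

# For example, a reddish myocardium color blended with a white mark color
# yields a pale pink that is distinct from both:
third_color = blend_color((0.8, 0.3, 0.3), (1.0, 1.0, 1.0))
```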

Abstract

The present invention provides an image processing device that displays, on a display as a three-dimensional image, three-dimensional data including first data representing a living tissue and second data representing an object or third data representing a mark. The image processing device includes a control unit that: refers to the three-dimensional data to identify the positional relationship between the living tissue and the object or the mark; determines, according to the identified positional relationship, whether there is an intervening tissue, that is, a portion of the living tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed; and, when determining that the intervening tissue exists, performs control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.

Description

Image processing device, image processing system, image display method, and image processing program
 The present disclosure relates to an image processing device, an image processing system, an image display method, and an image processing program.
 Patent Documents 1 to 3 describe techniques for generating three-dimensional images of heart chambers or blood vessels using a US imaging system. "US" is an abbreviation for ultrasound.
Patent Document 1: U.S. Patent Application Publication No. 2010/0215238. Patent Document 2: U.S. Pat. No. 6,385,332. Patent Document 3: U.S. Pat. No. 6,251,072.
 Treatment using IVUS is widely performed for the intracardiac, cardiovascular, and lower-limb artery regions. "IVUS" is an abbreviation for intravascular ultrasound. IVUS is a device or method that provides a two-dimensional image of a plane perpendicular to the longitudinal axis of the catheter.
 Currently, the operator needs to perform the procedure while mentally reconstructing the three-dimensional structure by stacking the two-dimensional IVUS images, which is a barrier especially for young or inexperienced doctors. To remove such a barrier, it is conceivable to automatically generate, from the two-dimensional IVUS images, a three-dimensional image expressing the structure of a living tissue such as a heart chamber or a blood vessel, and to display the generated three-dimensional image to the operator. If the generated three-dimensional image is simply displayed as it is, the operator can see only the outer wall of the tissue, so it is conceivable to cut away part of the structure of the living tissue in the three-dimensional image so that the lumen can be seen. If a catheter other than the IVUS catheter, such as an ablation catheter or a catheter for atrial septal puncture, is inserted into the living tissue, it is conceivable to further display a three-dimensional image representing that other catheter.
 However, when the catheter is behind a ridge of tissue due to the angle from which the operator views the three-dimensional image, or when the fossa ovalis tents during atrial septal puncture and the catheter is embedded in the tissue, the operator cannot see the catheter and cannot smoothly perform the procedure using it.
 Recently, a technique of achieving electrical isolation by ablating the inside of a heart chamber with an ablation catheter has become widespread. A 3D mapping system, in which a position sensor is mounted on the catheter and a three-dimensional image is drawn using the position information obtained when the position sensor touches the myocardial tissue, is mainly used in this procedure; however, drawing the three-dimensional image requires bringing the catheter into contact with the entire myocardial tissue surface in the heart chamber, which is very time-consuming. Circumferential isolation of the PV or SVC requires marking which sites have been ablated, but if such an operation could be completed using IVUS, the time might be shortened. "PV" is an abbreviation for pulmonary vein. "SVC" is an abbreviation for superior vena cava. It is conceivable to display a three-dimensional image expressing marks by marking at least one location, such as an ablated portion of the living tissue.
 However, while the coordinates of a mark are fixed once it is attached, the coordinates of the tissue are sequentially updated by IVUS, so the mark may end up inside the tissue due to pulsation or displacement of the IVUS catheter. In such a case, the operator cannot see the mark and cannot smoothly perform a procedure such as ablation.
 An object of the present disclosure is to make it possible to confirm the position of an object located in the lumen of a living tissue, or of a mark associated with the living tissue, even when the object or the mark is behind or inside a portion of the living tissue.
 An image processing device according to one aspect of the present disclosure is an image processing device that displays, on a display as a three-dimensional image, three-dimensional data including first data representing a living tissue and second data representing an object located in a lumen of the living tissue or third data representing a mark associated with the living tissue. The image processing device includes a control unit that refers to the three-dimensional data to identify the positional relationship between the living tissue and the object or the mark, determines, according to the identified positional relationship, whether there is an intervening tissue, that is, a portion of the living tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed, and, when determining that the intervening tissue exists, performs control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
 In one embodiment, when determining that the intervening tissue exists, the control unit performs control to display an image representing the object or the mark on the surface of the intervening tissue in the three-dimensional image.

 In one embodiment, as the image representing the object or the mark, the control unit applies, to the surface of the intervening tissue, a texture different from the texture applied to the surface of the portion of the living tissue adjacent to the intervening tissue in the three-dimensional image.

 In one embodiment, when determining that the intervening tissue exists, the control unit performs control to display an image representing the object or the mark on the cross section of the intervening tissue in the three-dimensional image.

 In one embodiment, as the image representing the object or the mark, the control unit applies, to the cross section of the intervening tissue, a texture different from the texture applied to the cross section of the portion of the living tissue adjacent to the intervening tissue in the three-dimensional image.

 In one embodiment, when determining that the intervening tissue exists, the control unit performs control to display an image representing the object or the mark by making the intervening tissue transparent in the three-dimensional image.

 In one embodiment, as the image representing the object or the mark, the control unit arranges a three-dimensional image representing the object or the mark on the side of the intervening tissue opposite the viewpoint in the three-dimensional image.
 In one embodiment, the three-dimensional data includes, as the first data and the second data, data constructed based on data obtained by a sensor that is inserted into the lumen of the living tissue and observes the living tissue and the object, and the object is a catheter inserted into the lumen of the living tissue.

 In one embodiment, the three-dimensional data includes, as the first data, data that is constructed based on data obtained by a sensor that is inserted into the lumen of the living tissue and observes the living tissue, and that is updated each time new data is obtained by the sensor; the control unit acquires designation data designating the mark, constructs the third data based on the acquired designation data, and includes the third data in the three-dimensional data.
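As a rough illustration of the designation-data flow in the preceding paragraph, the following is a minimal sketch assuming marks are stored as plain coordinate tuples; the class name, method names, and record format are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class LandmarkStore:
    """Keeps user-designated marks (the designation data) as fixed points in
    the three-dimensional space and exposes them as third data for display."""
    points: list = field(default_factory=list)  # fixed (x, y, z) coordinates

    def designate(self, x, y, z):
        # called when the user designates a location as a mark; the recorded
        # coordinates stay fixed even as the tissue data is updated
        self.points.append((float(x), float(y), float(z)))

    def build_third_data(self):
        # one record per mark; the renderer merges these into the 3D data
        return [{"position": p, "kind": "mark"} for p in self.points]
```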
 An image processing system according to one aspect of the present disclosure includes the image processing device and a sensor that observes the living tissue and the object.

 In one embodiment, the image processing system further includes the display.
 An image display method according to one aspect of the present disclosure is an image display method for displaying, on a display as a three-dimensional image, three-dimensional data including first data representing a living tissue and second data representing an object located in a lumen of the living tissue or third data representing a mark associated with the living tissue. In the method, a computer refers to the three-dimensional data to identify the positional relationship between the living tissue and the object or the mark; the computer determines, according to the identified positional relationship, whether there is an intervening tissue, that is, a portion of the living tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed; and, when determining that the intervening tissue exists, the computer performs control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
 An image processing program according to one aspect of the present disclosure causes a computer that displays, on a display as a three-dimensional image, three-dimensional data including first data representing a living tissue and second data representing an object located in a lumen of the living tissue or third data representing a mark associated with the living tissue to execute: a process of referring to the three-dimensional data to identify the positional relationship between the living tissue and the object or the mark; a process of determining, according to the identified positional relationship, whether there is an intervening tissue, that is, a portion of the living tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed; and a process of performing, when it is determined that the intervening tissue exists, control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
 According to the present disclosure, the position of an object located in the lumen of a living tissue, or of a mark associated with the living tissue, can be confirmed even when the object or the mark is behind or inside a portion of the living tissue.
FIG. 1 is a perspective view of an image processing system according to an embodiment of the present disclosure.
FIG. 2 is a cross-sectional view showing an example in which an object exists behind an intervening tissue.
FIG. 3 is a block diagram showing the configuration of an image processing device according to an embodiment of the present disclosure.
FIG. 4 is a perspective view of a probe and a drive unit according to an embodiment of the present disclosure.
FIG. 5 is a flowchart showing the operation of the image processing system according to the embodiment of the present disclosure.
FIG. 6 is a flowchart showing the operation of the image processing system according to the embodiment of the present disclosure.
FIG. 7 is a cross-sectional view showing an example of the positional relationship between a living tissue, an object, and a viewpoint.
FIG. 8 is a diagram showing an example screen of a display according to an embodiment of the present disclosure.
FIG. 9 is a diagram showing an example screen of a display according to an embodiment of the present disclosure.
FIG. 10 is a schematic diagram showing an example of the positional relationship between a living tissue, a mark, and a viewpoint.
FIG. 11 is a schematic diagram showing another example of the positional relationship between a living tissue, a mark, and a viewpoint.
FIG. 12 is a schematic diagram showing an example of displaying an image representing a mark on the surface of the intervening tissue in the example of FIG. 11.
 Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.

 In each figure, the same or corresponding parts are given the same reference numerals. In the description of this embodiment, the description of the same or corresponding parts is omitted or simplified as appropriate.

 An outline of the present embodiment will be described with reference to FIGS. 1 to 3.
 The image processing device 11 according to the present embodiment is a computer that causes the display 16 to display, as a three-dimensional image 53, three-dimensional data 52 including first data representing the living tissue 60 and second data representing an object located in the lumen 61 of the living tissue 60. The image processing device 11 refers to the three-dimensional data 52 to identify the positional relationship between the living tissue 60 and the object. According to the identified positional relationship, the image processing device 11 determines whether there is an intervening tissue, that is, a portion of the living tissue 60 interposed between the object and the viewpoint set when the three-dimensional image 53 is displayed. When determining that an intervening tissue exists, the image processing device 11 performs control to display an image representing the object at a position corresponding to the intervening tissue in the three-dimensional image 53. Therefore, according to this embodiment, the position of the object can be confirmed even when an intervening tissue exists.
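The determination of whether an intervening tissue exists can be pictured as a visibility test along the line of sight from the viewpoint to the object. The following is a minimal sketch, assuming the first data is held as a boolean voxel grid; the function name, grid representation, and sampling density are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def find_intervening_voxels(tissue_mask, viewpoint, target):
    """Sample a straight ray from the viewpoint to the target through a
    boolean voxel grid and collect the tissue voxels crossed on the way.
    A non-empty result means part of the tissue hides the target."""
    viewpoint = np.asarray(viewpoint, dtype=float)
    target = np.asarray(target, dtype=float)
    n_steps = max(2, int(np.linalg.norm(target - viewpoint)) * 2)  # ~2 samples per voxel
    hits = set()
    for t in np.linspace(0.0, 1.0, n_steps, endpoint=False)[1:]:  # skip the viewpoint
        idx = tuple(np.rint(viewpoint + t * (target - viewpoint)).astype(int))
        if all(0 <= i < s for i, s in zip(idx, tissue_mask.shape)) and tissue_mask[idx]:
            hits.add(idx)
    return sorted(hits)  # voxel indices of the intervening tissue, if any
```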
 The living tissue 60 includes, for example, a blood vessel or an organ such as the heart. The living tissue 60 is not limited to a single anatomical organ or a part thereof, and also includes tissue that has a lumen extending across a plurality of organs. A specific example of such tissue is a portion of the vascular system extending from the upper part of the inferior vena cava through the right atrium to the lower part of the superior vena cava.
 In the example of FIG. 2, the living tissue 60 is the right atrium. In this example, the portion of the right atrium adjacent to the fossa ovalis 65 is raised inward to form a ridge 64. A catheter 63, such as an ablation catheter or a catheter for atrial septal puncture, is inserted into the right atrium.
 Suppose that an image expressing the structure of the living tissue 60 is automatically generated as the three-dimensional image 53 and displayed to the operator, that part of the structure of the living tissue 60 is cut away in the generated image so that the lumen 61 can be seen, and that an image representing the catheter 63 is further displayed. In that case, depending on the direction from which the lumen 61 is viewed, the catheter 63 ends up behind the ridge 64 and becomes invisible to the operator. In such a case, in this embodiment, an image representing the catheter 63 is displayed on the surface of the ridge 64. That is, at least the portion of the ridge 64 hiding the catheter 63 becomes as if transparent, and the catheter 63 can be seen through that portion. Therefore, the operator can smoothly perform a procedure such as ablation or atrial septal puncture.
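The two display options used in this embodiment and its variants (recoloring the voxels that hide the object, or raising their transparency) could look roughly like this, assuming the rendered volume is held as a NumPy array of per-voxel RGBA values; the function name, array layout, and default colors are assumptions for illustration.

```python
def mark_intervening(voxel_rgba, intervening, mode="recolor",
                     marker_rgba=(1.0, 0.2, 0.2, 1.0), alpha=0.15):
    """Show a hidden object's position on the intervening voxels of an
    (nx, ny, nz, 4) float RGBA volume: 'recolor' paints them with a marker
    color; 'transparent' lowers their opacity so the object shows through."""
    out = voxel_rgba.copy()
    for idx in intervening:          # e.g. the indices from the ray-cast sketch
        if mode == "recolor":
            out[idx] = marker_rgba
        else:
            out[idx + (3,)] = alpha  # overwrite only the alpha channel
    return out
```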
 In the above example, the portion of the ridge 64 hiding the catheter 63 corresponds to the "intervening tissue". This embodiment can be applied not only when the catheter 63 is behind the ridge 64, but whenever the catheter 63 is behind or inside any tissue, such as when the fossa ovalis 65 tents during atrial septal puncture and the catheter 63 is embedded in the tissue.
 In the above example, the catheter 63 corresponds to the "object". The object is not limited to the catheter 63 and may be another object located in the lumen 61 of the living tissue 60, such as a stent.
 In FIG. 2, the X direction and the Y direction orthogonal to the X direction each correspond to the lateral direction of the lumen 61 of the living tissue 60. The Z direction orthogonal to the X and Y directions corresponds to the longitudinal direction of the lumen 61 of the living tissue 60.
 The configuration of the image processing system 10 according to this embodiment will be described with reference to FIG. 1.

 The image processing system 10 includes an image processing device 11, a cable 12, a drive unit 13, a keyboard 14, a mouse 15, and a display 16.
 The image processing device 11 is a dedicated computer specialized for image diagnosis in this embodiment, but may be a general-purpose computer such as a PC. "PC" is an abbreviation for personal computer.
 The cable 12 is used to connect the image processing device 11 and the drive unit 13.
 The drive unit 13 is a device that is connected to and drives the probe 20 shown in FIG. 4. The drive unit 13 is also called an MDU. "MDU" is an abbreviation for motor drive unit. The probe 20 is used for IVUS. The probe 20 is also called an IVUS catheter or a diagnostic imaging catheter.
 The keyboard 14, the mouse 15, and the display 16 are connected to the image processing device 11 via any cable or wirelessly. The display 16 is, for example, an LCD, an organic EL display, or an HMD. "LCD" is an abbreviation for liquid crystal display. "EL" is an abbreviation for electro luminescence. "HMD" is an abbreviation for head-mounted display.
 The image processing system 10 optionally further includes a connection terminal 17 and a cart unit 18.
 The connection terminal 17 is used to connect the image processing device 11 and an external device. The connection terminal 17 is, for example, a USB terminal. "USB" is an abbreviation for Universal Serial Bus. The external device is, for example, a recording medium such as a magnetic disk drive, a magneto-optical disk drive, or an optical disk drive.
 The cart unit 18 is a cart with casters for movement. The image processing device 11, the cable 12, and the drive unit 13 are installed on the cart body of the cart unit 18. The keyboard 14, the mouse 15, and the display 16 are installed on the top table of the cart unit 18.
 The configurations of the probe 20 and the drive unit 13 according to this embodiment will be described with reference to FIG. 4.
 The probe 20 includes a drive shaft 21, a hub 22, a sheath 23, an outer tube 24, an ultrasonic transducer 25, and a relay connector 26.
 The drive shaft 21 passes through a sheath 23 inserted into a body cavity of a living body and an outer tube 24 connected to the proximal end of the sheath 23, and extends into a hub 22 provided at the proximal end of the probe 20. The drive shaft 21 has, at its distal end, an ultrasonic transducer 25 that transmits and receives signals, and is rotatably provided within the sheath 23 and the outer tube 24. A relay connector 26 connects the sheath 23 and the outer tube 24.
 The hub 22, the drive shaft 21, and the ultrasonic transducer 25 are connected to one another so as to move forward and backward integrally in the axial direction. Therefore, for example, when the hub 22 is pushed toward the distal side, the drive shaft 21 and the ultrasonic transducer 25 move toward the distal side inside the sheath 23. For example, when the hub 22 is pulled toward the proximal side, the drive shaft 21 and the ultrasonic transducer 25 move toward the proximal side inside the sheath 23, as indicated by the arrows.
 The drive unit 13 includes a scanner unit 31, a slide unit 32, and a bottom cover 33.
 The scanner unit 31 is connected to the image processing device 11 via the cable 12. The scanner unit 31 includes a probe connection part 34 that connects to the probe 20, and a scanner motor 35 that is a drive source for rotating the drive shaft 21.
 The probe connection part 34 is detachably connected to the probe 20 via an insertion port 36 of the hub 22 provided at the proximal end of the probe 20. Inside the hub 22, the proximal end of the drive shaft 21 is rotatably supported, and the rotational force of the scanner motor 35 is transmitted to the drive shaft 21. Signals are also transmitted and received between the drive shaft 21 and the image processing device 11 via the cable 12. In the image processing device 11, a tomographic image of the body lumen is generated and image processing is performed based on the signal transmitted from the drive shaft 21.
 The slide unit 32 carries the scanner unit 31 so that it can move forward and backward, and is mechanically and electrically connected to the scanner unit 31. The slide unit 32 includes a probe clamp part 37, a slide motor 38, and a switch group 39.
 The probe clamp part 37 is arranged coaxially with the probe connection part 34 on the distal side thereof, and supports the probe 20 connected to the probe connection part 34.
 The slide motor 38 is a drive source that generates an axial driving force. The scanner unit 31 moves forward and backward when driven by the slide motor 38, and the drive shaft 21 accordingly moves forward and backward in the axial direction. The slide motor 38 is, for example, a servomotor.
 The switch group 39 includes, for example, a forward switch and a pullback switch that are pressed to move the scanner unit 31 forward and backward, and a scan switch that is pressed to start and end image rendering. The switch group 39 is not limited to this example and may include various switches as needed.
 When the forward switch is pressed, the slide motor 38 rotates forward and the scanner unit 31 moves forward. On the other hand, when the pullback switch is pressed, the slide motor 38 rotates in reverse and the scanner unit 31 moves backward.
 When the scan switch is pressed, image rendering starts: the scanner motor 35 is driven, and the slide motor 38 is driven to move the scanner unit 31 backward. A user such as an operator connects the probe 20 to the scanner unit 31 in advance so that, when image rendering starts, the drive shaft 21 moves toward the proximal side in the axial direction while rotating. The scanner motor 35 and the slide motor 38 stop when the scan switch is pressed again, and image rendering ends.
 The bottom cover 33 covers the bottom surface of the slide unit 32 and the entire circumference of the side surfaces on the bottom side, and can move toward and away from the bottom surface of the slide unit 32.
 The configuration of the image processing device 11 will be described with reference to FIG. 3.
 The image processing device 11 includes a control unit 41, a storage unit 42, a communication unit 43, an input unit 44, and an output unit 45.
 The control unit 41 includes at least one processor, at least one programmable circuit, at least one dedicated circuit, or any combination thereof. The processor is a general-purpose processor such as a CPU or a GPU, or a dedicated processor specialized for particular processing. "CPU" is an abbreviation for central processing unit. "GPU" is an abbreviation for graphics processing unit. The programmable circuit is, for example, an FPGA. "FPGA" is an abbreviation for field-programmable gate array. The dedicated circuit is, for example, an ASIC. "ASIC" is an abbreviation for application specific integrated circuit. The control unit 41 executes processing related to the operation of the image processing device 11 while controlling each part of the image processing system 10 including the image processing device 11.
 The storage unit 42 includes at least one semiconductor memory, at least one magnetic memory, at least one optical memory, or any combination thereof. The semiconductor memory is, for example, a RAM or a ROM. "RAM" is an abbreviation for random access memory. "ROM" is an abbreviation for read only memory. The RAM is, for example, an SRAM or a DRAM. "SRAM" is an abbreviation for static random access memory. "DRAM" is an abbreviation for dynamic random access memory. The ROM is, for example, an EEPROM. "EEPROM" is an abbreviation for electrically erasable programmable read only memory. The storage unit 42 functions, for example, as a main storage device, an auxiliary storage device, or a cache memory. The storage unit 42 stores data used for the operation of the image processing device 11, such as the tomographic data 51, and data obtained by the operation of the image processing device 11, such as the three-dimensional data 52 and the three-dimensional image 53.
 The communication unit 43 includes at least one communication interface. The communication interface is, for example, a wired LAN interface, a wireless LAN interface, or an image diagnosis interface that receives and A/D-converts IVUS signals. "LAN" is an abbreviation for local area network. "A/D" is an abbreviation for analog to digital. The communication unit 43 receives data used for the operation of the image processing device 11 and transmits data obtained by the operation of the image processing device 11. In this embodiment, the drive unit 13 is connected to the image diagnosis interface included in the communication unit 43.
 The input unit 44 includes at least one input interface. The input interface is, for example, a USB interface, an HDMI (registered trademark) interface, or an interface compatible with a short-range wireless communication standard such as Bluetooth (registered trademark). "HDMI (registered trademark)" is an abbreviation for High-Definition Multimedia Interface. The input unit 44 accepts user operations such as operations for inputting data used for the operation of the image processing device 11. In this embodiment, the keyboard 14 and the mouse 15 are connected to the USB interface or the interface compatible with short-range wireless communication included in the input unit 44. When a touch screen is provided integrally with the display 16, the display 16 may be connected to the USB interface or the HDMI (registered trademark) interface included in the input unit 44.
 The output unit 45 includes at least one output interface. The output interface is, for example, a USB interface, an HDMI (registered trademark) interface, or an interface compatible with a short-range wireless communication standard such as Bluetooth (registered trademark). The output unit 45 outputs data obtained by the operation of the image processing device 11. In this embodiment, the display 16 is connected to the USB interface or the HDMI (registered trademark) interface included in the output unit 45.
 The functions of the image processing device 11 are realized by executing the image processing program according to this embodiment on a processor serving as the control unit 41. That is, the functions of the image processing device 11 are realized by software. The image processing program causes a computer to function as the image processing device 11 by causing the computer to execute the operations of the image processing device 11. That is, the computer functions as the image processing device 11 by executing the operations of the image processing device 11 according to the image processing program.
 The program can be stored on a non-transitory computer-readable medium. The non-transitory computer-readable medium is, for example, a flash memory, a magnetic recording device, an optical disc, a magneto-optical recording medium, or a ROM. The program is distributed, for example, by selling, assigning, or lending a portable medium such as an SD card, a DVD, or a CD-ROM storing the program. "SD" is an abbreviation for Secure Digital. "DVD" is an abbreviation for digital versatile disc. "CD-ROM" is an abbreviation for compact disc read only memory. The program may be distributed by storing it in the storage of a server and transferring it from the server to another computer. The program may be provided as a program product.
 The computer temporarily stores, for example, a program stored on a portable medium or a program transferred from a server in its main storage device. The computer then reads the program stored in the main storage device with its processor and executes processing according to the read program. The computer may read the program directly from the portable medium and execute processing according to the program. The computer may execute processing according to the received program each time a program is transferred from the server. The processing may be executed by a so-called ASP-type service that realizes functions only through execution instructions and acquisition of results, without transferring the program from the server to the computer. "ASP" is an abbreviation for application service provider. The program includes information that is used for processing by an electronic computer and that is equivalent to a program. For example, data that is not a direct instruction to a computer but has the property of defining the processing of the computer corresponds to "information equivalent to a program".
 Some or all of the functions of the image processing device 11 may be realized by a programmable circuit or a dedicated circuit serving as the control unit 41. That is, some or all of the functions of the image processing device 11 may be realized by hardware.
 The operation of the image processing system 10 according to this embodiment will be described with reference to FIG. 5. The operation of the image processing system 10 corresponds to the image display method according to this embodiment.
 Before the flow of FIG. 5 starts, the probe 20 is primed by the user. After that, the probe 20 is fitted into the probe connection part 34 and the probe clamp part 37 of the drive unit 13, and is connected and fixed to the drive unit 13. Then, the probe 20 is inserted to a target site in a living tissue 60 such as a blood vessel or the heart.
 In step S101, the scan switch included in the switch group 39 is pressed, and the pullback switch included in the switch group 39 is further pressed, so that a so-called pullback operation is performed. Inside the living tissue 60, the probe 20 transmits ultrasonic waves with the ultrasonic transducer 25, which is retracted in the axial direction by the pullback operation. The ultrasonic transducer 25 radially transmits ultrasonic waves while moving inside the living tissue 60. The ultrasonic transducer 25 receives the reflected waves of the transmitted ultrasonic waves. The probe 20 inputs the signals of the reflected waves received by the ultrasonic transducer 25 to the image processing device 11. The control unit 41 of the image processing device 11 processes the input signals to sequentially generate cross-sectional images of the living tissue 60, thereby acquiring tomographic data 51 including a plurality of cross-sectional images.
 Specifically, while rotating the ultrasonic transducer 25 in the circumferential direction and moving it in the axial direction inside the living tissue 60, the probe 20 transmits ultrasonic waves with the ultrasonic transducer 25 in a plurality of directions outward from the center of rotation. The probe 20 receives, with the ultrasonic transducer 25, reflected waves from reflecting objects present in each of the plurality of directions inside the living tissue 60. The probe 20 transmits the received reflected-wave signals to the image processing device 11 via the drive unit 13 and the cable 12. The communication unit 43 of the image processing device 11 receives the signals transmitted from the probe 20. The communication unit 43 A/D-converts the received signals. The communication unit 43 inputs the A/D-converted signals to the control unit 41. The control unit 41 processes the input signals to calculate the intensity value distribution of the reflected waves from the reflecting objects present in each transmission direction of the ultrasonic waves from the ultrasonic transducer 25. The control unit 41 sequentially generates, as cross-sectional images of the living tissue 60, two-dimensional images having a brightness value distribution corresponding to the calculated intensity value distribution, thereby acquiring the tomographic data 51, which is a data set of cross-sectional images. The control unit 41 causes the storage unit 42 to store the acquired tomographic data 51.
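The conversion from per-direction intensity values to a brightness-valued cross-sectional image amounts to a polar-to-Cartesian resampling. The following is a minimal sketch, assuming the intensities are already arranged as a (directions, depth-samples) array; the nearest-neighbour sampling and output size are illustrative choices, not taken from the disclosure.

```python
import numpy as np

def scan_lines_to_cross_section(intensity, size=512):
    """Map per-direction reflection-intensity profiles (rows = transmit
    directions, columns = depth samples) onto a square cross-sectional
    image whose brightness follows the intensity distribution."""
    n_angles, n_depth = intensity.shape
    img = np.zeros((size, size), dtype=np.float32)
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    radius = np.hypot(xs - c, ys - c) / c * (n_depth - 1)   # pixel -> depth index
    angle = np.arctan2(ys - c, xs - c) % (2.0 * np.pi)      # pixel -> direction
    line = (angle / (2.0 * np.pi) * n_angles).astype(int) % n_angles
    inside = radius <= n_depth - 1                          # pixels within scan range
    img[inside] = intensity[line[inside], radius[inside].astype(int)]
    return img
```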
 In this embodiment, the signal of the reflected wave received by the ultrasonic transducer 25 corresponds to the raw data of the tomographic data 51, and the cross-sectional images generated by the image processing device 11 processing the signal of the reflected wave correspond to the processed data of the tomographic data 51.
 As a modification of this embodiment, the control unit 41 of the image processing device 11 may store the signals input from the probe 20 in the storage unit 42 as the tomographic data 51 as they are. Alternatively, the control unit 41 may store, as the tomographic data 51, data indicating the intensity value distribution of the reflected waves calculated by processing the signals input from the probe 20. That is, the tomographic data 51 is not limited to a data set of cross-sectional images of the living tissue 60, and may be any data that represents, in some form, the cross section of the living tissue 60 at each movement position of the ultrasonic transducer 25.
 As a modification of this embodiment, an ultrasonic transducer that transmits ultrasonic waves in a plurality of directions without rotating may be used instead of the ultrasonic transducer 25 that transmits ultrasonic waves in a plurality of directions while rotating in the circumferential direction.
 As a modification of this embodiment, the tomographic data 51 may be acquired using OFDI or OCT instead of IVUS. "OFDI" is an abbreviation for optical frequency domain imaging. "OCT" is an abbreviation for optical coherence tomography. When OFDI or OCT is used, a sensor that acquires the tomographic data 51 by emitting light in the lumen 61 of the living tissue 60 is used as the sensor that acquires the tomographic data 51 while moving through the lumen 61 of the living tissue 60, instead of the ultrasonic transducer 25 that acquires the tomographic data 51 by transmitting ultrasonic waves in the lumen 61 of the living tissue 60.
 As a modification of this embodiment, instead of the image processing device 11 generating the data set of cross-sectional images of the living tissue 60, another device may generate a similar data set, and the image processing device 11 may acquire that data set from the other device. That is, instead of the control unit 41 of the image processing device 11 processing the IVUS signal to generate cross-sectional images of the living tissue 60, another device may process the IVUS signal to generate cross-sectional images of the living tissue 60 and input the generated cross-sectional images to the image processing device 11.
 In step S102, the control unit 41 of the image processing device 11 generates three-dimensional data 52 of the living tissue 60 based on the tomographic data 51 acquired in step S101. That is, the control unit 41 generates the three-dimensional data 52 based on the tomographic data 51 acquired by the sensor. Here, if already generated three-dimensional data 52 exists, it is preferable to update only the data at the location corresponding to the updated tomographic data 51, rather than regenerating all the three-dimensional data 52 from scratch. In that case, the amount of data processing when generating the three-dimensional data 52 can be reduced, and the real-time performance of the three-dimensional image 53 in the subsequent step S103 can be improved.
 Specifically, the control unit 41 of the image processing device 11 generates the three-dimensional data 52 of the living tissue 60 by stacking the cross-sectional images of the living tissue 60 included in the tomographic data 51 stored in the storage unit 42 and making them three-dimensional. As the three-dimensional rendering method, any of various techniques may be used, such as surface rendering or volume rendering, together with accompanying processing such as texture mapping (including environment mapping) and bump mapping. The control unit 41 causes the storage unit 42 to store the generated three-dimensional data 52.
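The slice-wise update mentioned above can be pictured as follows: a minimal sketch assuming the volume is a plain voxel array stacked along the pullback axis, with a hypothetical class name and method names.

```python
import numpy as np

class TissueVolume:
    """Cross-sectional images stacked along the pullback (Z) axis. When new
    tomographic data arrives for one position, only that slice is rewritten
    instead of regenerating the whole volume, which helps keep the displayed
    three-dimensional image close to real time."""
    def __init__(self, n_slices, height, width):
        self.voxels = np.zeros((n_slices, height, width), dtype=np.uint8)

    def update_slice(self, z, cross_section):
        self.voxels[z] = cross_section   # refresh only the changed layer
```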
 If a catheter 63 other than the IVUS catheter, such as an ablation catheter or a catheter for atrial septal puncture, is inserted into the living tissue 60, the tomographic data 51 contains data of the catheter 63 in the same way as it contains data of the living tissue 60. Therefore, in step S102, the three-dimensional data 52 generated by the control unit 41 also includes the data of the catheter 63 as the second data, in the same way as it includes the data of the living tissue 60 as the first data.
 Details of the processing performed in step S102 when the catheter 63 is inserted into the biological tissue 60 will be described with reference to FIG. 6.
 Before the flow of FIG. 6 starts, the control unit 41 of the image processing device 11 classifies the pixels of each cross-sectional image included in the tomographic data 51 acquired in step S101 into two or more classes. These two or more classes include at least a "biological tissue" class and a "catheter" class, and may further include a "blood cell" class, a class of "medical devices" other than the "catheter" such as a guide wire, a class of "indwelling objects" such as a stent, or a class of "lesions" such as calcification or plaque. Any classification method may be used; in the present embodiment, a method of classifying the pixels of the cross-sectional image with a trained model is used. The trained model has been trained in advance by machine learning so that it can detect the region corresponding to each class in sample IVUS cross-sectional images.
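 A sketch of this per-pixel classification step might look as follows. The class labels and the `model.predict` interface are assumptions standing in for whatever trained segmentation model is used; the embodiment only requires that each pixel receive one of the classes listed above.

```python
import numpy as np

# Class labels assumed for illustration; the embodiment requires at least
# "biological tissue" and "catheter", with optional classes such as
# "blood cell", "medical device", "indwelling object", and "lesion".
TISSUE, CATHETER, BLOOD_CELL = 1, 2, 3

def classify_pixels(cross_section, model):
    """Assign a class label to every pixel of an IVUS cross-sectional image.

    `model` stands in for a pre-trained segmentation network; its `predict`
    method returning a per-pixel label map is an assumption.
    """
    label_map = model.predict(cross_section)  # shape (H, W), integer labels
    tissue_mask = (label_map == TISSUE)
    catheter_mask = (label_map == CATHETER)
    return tissue_mask, catheter_mask
```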
 In step S200a, the control unit 41 of the image processing device 11 builds a three-dimensional object of the biological tissue 60 by stacking the regions classified into the "biological tissue" class, and reflects the built three-dimensional object of the biological tissue 60 in the three-dimensional space. In step S200b, the control unit 41 builds a three-dimensional object of the catheter 63 by stacking the regions classified into the "catheter" class. Although the catheter 63 is extracted by segmentation in the present embodiment, it may be extracted by another technique such as object detection. For example, a technique that extracts only the catheter position may be used, and an object that takes that position into account may be built as the object of the catheter 63. The control unit 41 reflects the built three-dimensional object of the catheter 63 in the three-dimensional space. In step S200c, the control unit 41 executes the processing of steps S201 to S205 for each tissue voxel, that is, each voxel of the biological tissue 60. At this point, the control unit 41 may place a virtual camera 71 and a virtual light source 72 as shown in FIG. 7 at arbitrary positions in the three-dimensional space. The position of the camera 71 corresponds to the "viewpoint" used when displaying the three-dimensional image 53 on the display 16. The number and relative positions of the light sources 72 are not limited to those illustrated and can be changed as appropriate. The three-dimensional object of the biological tissue 60 may be cut at an arbitrary cutting plane.
 In step S201, the control unit 41 of the image processing device 11 determines whether a catheter voxel, that is, a voxel of the catheter 63, exists on the extension of the straight line connecting the viewpoint and the tissue voxel. If there is no catheter voxel on the extension, in step S202 the control unit 41 applies, as the color of the tissue voxel, a first color pre-assigned to the "biological tissue" class. If there is a catheter voxel on the extension, in step S203 the control unit 41 calculates a first distance, which is the distance between the viewpoint and the tissue voxel, and a second distance, which is the distance between the viewpoint and the catheter voxel on the extension. If the first distance is longer than the second distance, that is, if the catheter 63 is in front of the biological tissue 60, in step S204 the control unit 41 applies, as the color of the tissue voxel, a second color pre-assigned to the "catheter" class. If the first distance is shorter than the second distance, that is, if the catheter 63 is behind or inside the biological tissue 60, in step S205 the control unit 41 applies, as the color of the tissue voxel, a third color different from the first and second colors, such as an intermediate color between them. In the present embodiment, the control unit 41 applies, as the intermediate color corresponding to the third color, a color containing 70% of the first color and 30% of the second color.
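 Steps S201 to S205 can be summarized in code. The following Python sketch is one possible reading of the procedure; the ray test used to decide whether a catheter voxel lies on the extension of the viewpoint-to-voxel line (an off-line distance tolerance) is an assumption, as is the array representation of the voxels.

```python
import numpy as np

def color_tissue_voxel(viewpoint, tissue_voxel, catheter_voxels,
                       first_color, second_color, tolerance=0.5):
    """Color one tissue voxel according to steps S201-S205.

    All positions are 3D numpy arrays; `catheter_voxels` is an (N, 3) array.
    A catheter voxel counts as "on the extension" when its distance from the
    viewpoint-to-voxel ray is within `tolerance` (an assumed criterion).
    """
    direction = tissue_voxel - viewpoint
    direction = direction / np.linalg.norm(direction)

    second_distance = None  # nearest catheter voxel on the ray, if any
    for c in catheter_voxels:
        v = c - viewpoint
        t = np.dot(v, direction)
        if t <= 0:
            continue  # behind the viewpoint, not on the extension
        off_line = np.linalg.norm(v - t * direction)
        if off_line <= tolerance and (second_distance is None or t < second_distance):
            second_distance = t  # distance from viewpoint to catheter voxel

    if second_distance is None:
        return np.asarray(first_color)       # S202: no catheter on the line
    first_distance = np.linalg.norm(tissue_voxel - viewpoint)
    if first_distance > second_distance:
        return np.asarray(second_color)      # S204: catheter in front of tissue
    # S205: catheter behind or inside the tissue -> intermediate color,
    # here 70% first color and 30% second color as in the embodiment
    return 0.7 * np.asarray(first_color) + 0.3 * np.asarray(second_color)
```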
 The data on the coloring of the voxels performed in the flow of FIG. 6 is stored in the storage unit 42 as part of the three-dimensional data 52.
 In step S103, the control unit 41 of the image processing device 11 causes the display 16 to display the three-dimensional data 52 generated in step S102 as a three-dimensional image 53.
 Specifically, the control unit 41 of the image processing device 11 generates the three-dimensional image 53 from the three-dimensional data 52 stored in the storage unit 42, and causes the display 16 to display the generated three-dimensional image 53 via the output unit 45.
 In step S104, if there is a user operation, the processing from step S105 to step S108 is performed. If there is no user operation, the processing from step S105 to step S108 is skipped.
 In step S105, the control unit 41 of the image processing device 11 receives, via the input unit 44, an operation for setting the position of an opening 62 as shown in FIG. 7. The position of the opening 62 is set so that, in the three-dimensional image 53 displayed in step S103, the lumen 61 of the biological tissue 60 is exposed through the opening 62.
 Specifically, the control unit 41 of the image processing device 11 receives, via the input unit 44, an operation in which the user cuts off a portion of the biological tissue 60 in the three-dimensional image 53 displayed on the display 16 using the keyboard 14, the mouse 15, or a touch screen provided integrally with the display 16. In the example of FIG. 7, the control unit 41 receives an operation of cutting off a portion of the biological tissue 60 so that the cross section of the biological tissue 60 has an open shape. The "cross section of the biological tissue 60" may be a transverse section of the biological tissue 60, a longitudinal section of the biological tissue 60, or another section of the biological tissue 60. The "transverse section of the biological tissue 60" is a cutting plane obtained by cutting the biological tissue 60 perpendicularly to the direction in which the ultrasonic transducer 25 moves inside the biological tissue 60. The "longitudinal section of the biological tissue 60" is a cutting plane obtained by cutting the biological tissue 60 along the direction in which the ultrasonic transducer 25 moves inside the biological tissue 60. The "other section of the biological tissue 60" is a cutting plane obtained by cutting the biological tissue 60 obliquely to the direction in which the ultrasonic transducer 25 moves inside the biological tissue 60. The "open shape" is, for example, a substantially C-shape, a substantially U-shape, a substantially figure-3 shape, or any of these shapes partially missing due to a hole that originally exists in the biological tissue 60, such as a bifurcation of a blood vessel or a pulmonary vein ostium. In the example of FIG. 7, the cross section of the biological tissue 60 is substantially C-shaped.
 In step S106, the control unit 41 of the image processing device 11 determines the position set by the operation received in step S105 as the position of the opening 62.
 Specifically, the control unit 41 of the image processing device 11 identifies, in the three-dimensional data 52 stored in the storage unit 42, the three-dimensional coordinates of the boundary of the portion of the biological tissue 60 cut off by the user's operation as the three-dimensional coordinates of the edge of the opening 62. The control unit 41 stores the identified three-dimensional coordinates in the storage unit 42.
 In step S107, the control unit 41 of the image processing device 11 forms, in the three-dimensional data 52, the opening 62 that exposes the lumen 61 of the biological tissue 60 in the three-dimensional image 53.
 Specifically, the control unit 41 of the image processing device 11 sets the portion of the three-dimensional data 52 stored in the storage unit 42 that is specified by the three-dimensional coordinates stored in the storage unit 42 to be hidden or transparent when the three-dimensional image 53 is displayed on the display 16.
 In step S108, the control unit 41 of the image processing device 11 adjusts the viewpoint used when displaying the three-dimensional image 53 on the display 16 according to the position of the opening 62 formed in step S107. In the present embodiment, the control unit 41 places the viewpoint on a straight line extending from the inner surface of the biological tissue 60 through the opening 62 to the outside of the biological tissue 60. This allows the user to look into the interior of the biological tissue 60 through the opening 62 and virtually observe the lumen 61 of the biological tissue 60.
 Specifically, the control unit 41 of the image processing device 11 places the virtual camera 71 at a position from which the lumen 61 of the biological tissue 60 can be seen through the portion of the three-dimensional image 53 displayed on the display 16 that has been set to be hidden or transparent. In the example of FIG. 7, the control unit 41 places the virtual camera 71, in the cross section of the biological tissue 60, within a region AF bounded by a first straight line L1 extending from the inner surface of the biological tissue 60 through a first edge E1 of the opening 62 to the outside of the biological tissue 60, and a second straight line L2 extending from the inner surface of the biological tissue 60 through a second edge E2 of the opening 62 to the outside of the biological tissue 60. The point at which the first straight line L1 intersects the inner surface of the biological tissue 60 is the same point Pt at which the second straight line L2 intersects the inner surface of the biological tissue 60. Therefore, wherever the virtual camera 71 is placed within the region AF, the user can observe the point Pt on the inner surface of the biological tissue 60.
 In the example of FIG. 7, the point Pt coincides with the point at which a fourth straight line L4, drawn from the midpoint Pc of a third straight line L3 connecting the first edge E1 and the second edge E2 of the opening 62 perpendicularly to the third straight line L3, intersects the inner surface of the biological tissue 60. This makes it easy for the user to observe the point Pt on the inner surface of the biological tissue 60 through the opening 62. In particular, placing the virtual camera 71 on the extension of the fourth straight line L4, as shown in FIG. 7, makes it even easier for the user to observe the point Pt.
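 The geometry of the fourth straight line L4 and the camera placement can be sketched as follows; the cross-section coordinates and the `distance` parameter controlling how far outside the opening the camera sits are assumptions for illustration.

```python
import numpy as np

def place_camera(e1, e2, inner_surface_point, distance):
    """Place the virtual camera on the extension of the fourth straight line L4.

    e1, e2: coordinates of the opening edges E1 and E2 in the tissue cross section.
    inner_surface_point: point Pt where L4 meets the inner surface.
    distance: how far beyond the opening the camera is placed (assumed parameter).
    """
    pc = (np.asarray(e1, dtype=float) + np.asarray(e2, dtype=float)) / 2.0  # midpoint of L3
    normal = pc - np.asarray(inner_surface_point, dtype=float)              # direction of L4
    normal = normal / np.linalg.norm(normal)
    return pc + distance * normal  # camera on the extension of L4, outside the opening
```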
 The position of the virtual camera 71 may be any position from which the lumen 61 of the biological tissue 60 can be observed through the opening 62; in the present embodiment, it is within the range facing the opening 62. The position of the virtual camera 71 is preferably set at an intermediate position facing the central portion of the opening 62.
 Also in step S108, if the catheter 63 is inserted into the biological tissue 60, the processing from step S201 to step S205 is executed for each tissue voxel. In the example of FIG. 7, an intervening portion 66, which is part of the biological tissue 60, exists between the camera 71 and the catheter 63. That is, the catheter 63 is behind the intervening portion 66. Therefore, the control unit 41 applies the third color, such as an intermediate color between the first color and the second color, as the color of the voxels corresponding to the intervening portion 66. The control unit 41 applies the first color as the color of the voxels corresponding to the portion of the biological tissue 60 visible from the camera 71 other than the intervening portion 66. In the intervening portion 66, of the surface 68 and the cross section 69 of the biological tissue 60, only the cross section 69 faces the camera 71. As a result, the region of the cross section 69 corresponding to the intervening portion 66 is colored with the third color, and the remaining region is colored with the first color.
 The control unit 41 of the image processing device 11 may switch the display mode between a first mode, in which the processing from step S201 to step S205 is not executed even when the catheter 63 is inserted into the biological tissue 60, and a second mode, in which the processing from step S201 to step S205 is executed when the catheter 63 is inserted into the biological tissue 60. For example, in the first mode, as shown in FIG. 8, even when the catheter 63 is behind the intervening portion 66, the region of the cross section 69 corresponding to the intervening portion 66 is colored with the first color in the same way as the remaining region. That is, the same texture as that applied in the three-dimensional image 53 to the cross section of the portion of the biological tissue 60 adjacent to the intervening portion 66 is applied to the cross section of the intervening portion 66. In contrast, in the second mode, as shown in FIG. 9, when the catheter 63 is behind the intervening portion 66, the region of the cross section 69 corresponding to the intervening portion 66 is colored with the third color. That is, as an image 67 representing the catheter 63, a texture different from the texture applied in the three-dimensional image 53 to the cross section of the portion of the biological tissue 60 adjacent to the intervening portion 66 is applied to the cross section of the intervening portion 66.
 The display mode may be switched manually by a user operation, or may be switched automatically with an arbitrary event as a trigger.
 In step S109, if the tomographic data 51 has been updated, the processing of steps S110 and S111 is performed. If the tomographic data 51 has not been updated, the presence or absence of a user operation is checked again in step S104.
 In step S110, the control unit 41 of the image processing device 11, similarly to the processing of step S101, processes the signal input from the probe 20 to newly generate cross-sectional images of the biological tissue 60, thereby acquiring tomographic data 51 including at least one new cross-sectional image.
 In step S111, the control unit 41 of the image processing device 11 updates the three-dimensional data 52 of the biological tissue 60 based on the tomographic data 51 acquired in step S110. That is, the control unit 41 updates the three-dimensional data 52 based on the tomographic data 51 acquired by the sensor. Then, in step S103, the control unit 41 causes the display 16 to display the three-dimensional data 52 updated in step S111 as the three-dimensional image 53. In step S111, it is preferable to update only the data at the locations to which the updated tomographic data 51 corresponds. This reduces the amount of data processing when updating the three-dimensional data 52 and improves the real-time performance of the three-dimensional image 53.
 Also in step S111, if the catheter 63 is inserted into the biological tissue 60, the processing from step S201 to step S205 is executed for each tissue voxel.
 In the second and subsequent executions of steps S105 to S108, when changing the position of the opening 62 from a first position to a second position, the control unit 41 of the image processing device 11 moves the viewpoint from a third position corresponding to the first position to a fourth position corresponding to the second position. In accordance with the movement of the viewpoint from the third position to the fourth position, the control unit 41 also moves the virtual light source 72 used when displaying the three-dimensional image 53 on the display 16.
 When changing the circumferential position of the opening 62 in the cross section of the biological tissue 60, the control unit 41 moves the virtual light source 72 using the same rotation matrix as that used for moving the virtual camera 71.
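 Reusing one rotation for both the camera and the light source might be implemented as below. Rodrigues' rotation formula is used here as one standard way to express the rotation; the axis and angle arguments are assumptions about how the circumferential change of the opening is parameterized.

```python
import numpy as np

def rotate_about_axis(point, axis_point, axis_dir, angle):
    """Rotate `point` by `angle` radians about an axis through `axis_point`.

    Applying the identical transform to the camera and the light source keeps
    their relative arrangement fixed when the opening's circumferential
    position changes. Rodrigues' rotation formula is used.
    """
    k = np.asarray(axis_dir, dtype=float)
    k = k / np.linalg.norm(k)
    p = np.asarray(point, dtype=float) - np.asarray(axis_point, dtype=float)
    rotated = (p * np.cos(angle)
               + np.cross(k, p) * np.sin(angle)
               + k * np.dot(k, p) * (1 - np.cos(angle)))
    return rotated + np.asarray(axis_point, dtype=float)

# Usage sketch: move both with the same rotation
# camera = rotate_about_axis(camera, center, axis, theta)
# light  = rotate_about_axis(light,  center, axis, theta)
```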
 When changing the position of the opening 62 from the first position to the second position, the control unit 41 may instantaneously switch the viewpoint from the third position to the fourth position; in the present embodiment, however, the control unit 41 causes the display 16 to display, as the three-dimensional image 53, a moving image in which the viewpoint gradually moves from the third position to the fourth position. This makes it easy for the user to recognize that the viewpoint has moved.
 As a modification of the present embodiment, in step S105, the control unit 41 of the image processing device 11 may receive, via the input unit 44, an operation for setting the position of a target point that the user wants to see, together with the operation for setting the position of the opening 62.
 Specifically, the control unit 41 of the image processing device 11 may receive, via the input unit 44, an operation in which the user designates the position of the target point in the three-dimensional image 53 displayed on the display 16 using the keyboard 14, the mouse 15, or a touch screen provided integrally with the display 16. In the example of FIG. 7, the control unit 41 may receive, via the input unit 44, an operation for setting the position of the point Pt as the position of the point at which the first straight line L1 and the second straight line L2 intersect the inner surface of the biological tissue 60.
 As a modification of the present embodiment, in step S105, the control unit 41 of the image processing device 11 may receive, via the input unit 44, an operation for setting the position of the target point that the user wants to see, instead of the operation for setting the position of the opening 62. Then, in step S106, the control unit 41 may determine the position of the opening 62 according to the position set by the operation received in step S105.
 Specifically, the control unit 41 of the image processing device 11 may receive, via the input unit 44, an operation in which the user designates the position of the target point in the three-dimensional image 53 displayed on the display 16 using the keyboard 14, the mouse 15, or a touch screen provided integrally with the display 16, and may determine the position of the opening 62 according to the position of that target point. In the example of FIG. 7, the control unit 41 may receive, via the input unit 44, an operation for setting the position of the point Pt as the position of the point at which the first straight line L1 and the second straight line L2 intersect the inner surface of the biological tissue 60. The control unit 41 may determine, as the region AF, a fan-shaped region in the cross section of the biological tissue 60 centered on the point Pt and having a central angle that is preset or specified by the user, as sketched below. The control unit 41 may determine the position of the biological tissue 60 overlapping the region AF as the position of the opening 62. The control unit 41 may determine, as the fourth straight line L4, the normal of the inner surface of the biological tissue 60 perpendicular to the tangent passing through the point Pt.
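 A membership test for the fan-shaped region AF might look as follows; representing the fan by its apex Pt, the direction of L4 as bisector, and a half-angle is an assumption for illustration.

```python
import numpy as np

def fan_region_contains(point, pt, l4_dir, half_angle):
    """Check whether `point` lies inside the fan-shaped region AF.

    pt: target point Pt on the inner surface (apex of the fan).
    l4_dir: direction of the fourth straight line L4 (bisector of the fan).
    half_angle: half of the preset or user-specified central angle, in radians.
    """
    bisector = np.asarray(l4_dir, dtype=float)
    bisector = bisector / np.linalg.norm(bisector)
    v = np.asarray(point, dtype=float) - np.asarray(pt, dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0:
        return True  # the apex itself belongs to the region
    cos_angle = np.dot(v / norm, bisector)
    return cos_angle >= np.cos(half_angle)
```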
 The region AF may be set narrower than the width of the opening 62. That is, the region AF may be set so as not to include at least one of the first edge E1 and the second edge E2 of the opening 62.
 As a modification of the present embodiment, the point at which the first straight line L1 intersects the inner surface of the biological tissue 60 need not be the same as the point at which the second straight line L2 intersects the inner surface of the biological tissue 60. For example, a point P1 at which the first straight line L1 intersects the inner surface of the biological tissue 60 and a point P2 at which the second straight line L2 intersects the inner surface of the biological tissue 60 may lie on the circumference of a circle centered on the point Pt. That is, the points P1 and P2 may be substantially equidistant from the point Pt.
 As described above, in the present embodiment, the control unit 41 of the image processing device 11 causes the display 16 to display, as the three-dimensional image 53, the three-dimensional data 52 including first data representing the biological tissue 60 and second data representing an object located in the lumen 61 of the biological tissue 60. The control unit 41 refers to the three-dimensional data 52 to identify the positional relationship between the biological tissue 60 and the object. According to the identified positional relationship, the control unit 41 determines whether an intervening tissue exists, that is, a portion of the biological tissue 60 interposed between the object and the viewpoint set when the three-dimensional image 53 is displayed. When determining that the intervening tissue exists, the control unit 41 performs control to display an image representing the object at the position corresponding to the intervening tissue in the three-dimensional image 53. Therefore, according to the present embodiment, the position of the object can be confirmed even when an intervening tissue exists.
 In the present embodiment, the control unit 41 of the image processing device 11 identifies the positional relationship between the biological tissue 60 and the object by referring to data obtained by a sensor that observes the biological tissue 60 and the object. The sensor is not limited to the ultrasonic transducer 25 used for IVUS, and may be any sensor, such as a sensor used for OFDI, OCT, CT examination, extracorporeal echo examination, or X-ray examination. "CT" is an abbreviation for computed tomography. According to the identified positional relationship, the control unit 41 detects, as the intervening tissue, the portion of the biological tissue 60 interposed between the object and the viewpoint set when the three-dimensional image 53 is displayed. In the example of FIG. 7, the catheter 63 corresponds to the object and the intervening portion 66 corresponds to the intervening tissue.
 When determining that the intervening tissue exists, the control unit 41 of the image processing device 11 performs, in the present embodiment, control to display an image representing the object on the cross section of the intervening tissue in the three-dimensional image 53; as a modification of the present embodiment, it may instead perform control to display an image representing the object on the surface of the intervening tissue in the three-dimensional image 53.
 In the example of FIG. 9, the cross section of the intervening tissue faces the camera 71 in the three-dimensional space, so the control unit 41 of the image processing device 11 applies, as the image 67 representing the object, a texture to the cross section of the intervening tissue that differs from the texture applied in the three-dimensional image 53 to the cross section of the portion of the biological tissue 60 adjacent to the intervening tissue. If the surface of the intervening tissue faced the camera 71 in the three-dimensional space, the control unit 41 may apply, as the image representing the object, a texture to the surface of the intervening tissue that differs from the texture applied in the three-dimensional image 53 to the surface of the portion of the biological tissue 60 adjacent to the intervening tissue.
 As a modification of the present embodiment, when determining that the intervening tissue exists, the control unit 41 of the image processing device 11 may perform control to display an image representing the object by making the intervening tissue transparent in the three-dimensional image 53. In the example of FIG. 7, since the catheter 63 is behind the intervening portion 66, the control unit 41 changes the color of the voxels corresponding to the intervening portion 66; instead, the transparency of the intervening portion 66 may be increased. In this modification, the control unit 41 may arrange, as the image representing the object, a three-dimensional image representing the object on the side of the intervening tissue opposite to the viewpoint in the three-dimensional image 53.
 The present embodiment may be applied not only to an object such as the catheter 63 shown in FIG. 7, but also to a mark 73 associated with the biological tissue 60 as shown in FIGS. 10 to 12. In that case, the control unit 41 of the image processing device 11 causes the display 16 to display, as the three-dimensional image 53, the three-dimensional data 52 including the first data representing the biological tissue 60 and third data representing the mark 73. The control unit 41 refers to the three-dimensional data 52 to identify the positional relationship between the biological tissue 60 and the mark 73. According to the identified positional relationship, the control unit 41 determines whether an intervening tissue exists, that is, a portion of the biological tissue 60 interposed between the mark 73 and the viewpoint set when the three-dimensional image 53 is displayed. When determining that the intervening tissue exists, the control unit 41 performs control to display an image 74 representing the mark 73 at the position corresponding to the intervening tissue in the three-dimensional image 53. Therefore, the position of the mark 73 can be confirmed even when an intervening tissue exists.
 The first data included in the three-dimensional data 52 is constructed based on data obtained by a sensor that is inserted into the lumen 61 of the biological tissue 60 and observes the biological tissue 60, and is updated each time new data is obtained by the sensor. That is, the first data is configured to be sequentially updated by the sensor. When the mark 73 is attached, the control unit 41 of the image processing device 11 acquires designation data designating the mark 73. Specifically, the control unit 41 acquires, as the designation data, data designating the mark 73 by receiving a user operation that designates at least one location in the three-dimensional space as the mark 73. The control unit 41 stores the acquired designation data in the storage unit 42. The control unit 41 constructs the third data based on the designation data stored in the storage unit 42. The control unit 41 includes the constructed third data in the three-dimensional data 52 and then causes the display 16 to display the three-dimensional data 52 as the three-dimensional image 53. That is, the control unit 41 causes the display 16 to display the third data as part of the three-dimensional image 53.
 The mark 73 is associated with the biological tissue 60 by being attached, in the three-dimensional space, to at least one location on the biological tissue 60, in the lumen 61 of the biological tissue 60, or around the biological tissue 60, such as an ablated site of the biological tissue 60, a start point and an end point for measuring a distance in the three-dimensional space, or the position of a nerve whose ablation must be avoided. When the mark 73 is attached, the probe 20 is advanced and retracted to update the three-dimensional data 52, and the coordinates of the location where the mark 73 is attached are recorded as a fixed point in the three-dimensional space.
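 As an illustration of how a mark could be recorded as a fixed point while the sensor keeps updating the tissue data, the following sketch stores marks in world coordinates independent of the updated slices; the class and its interface are hypothetical.

```python
import numpy as np

class MarkRegistry:
    """Keeps marks as fixed points in the 3D space, independent of updates
    to the first data driven by the sensor."""

    def __init__(self):
        self.marks = []  # list of 3D coordinates

    def add_mark(self, world_coords):
        # The designated location is stored once and never moved afterwards,
        # so it remains valid while the probe is advanced and retracted.
        self.marks.append(np.asarray(world_coords, dtype=float))

    def as_third_data(self):
        """Return the marks in a form that can be merged into the 3D data."""
        return np.array(self.marks)
```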
 In the example of FIG. 10, the biological tissue 60 is myocardium, and the mark 73 is attached to the surface of the myocardium. When a pullback operation is performed, the mark 73 may become buried in the myocardial tissue as shown in FIG. 11. That is, an intervening portion 66 that is part of the myocardial tissue may exist between the camera 71 and the mark 73. In that case, the mark 73 cannot be confirmed in the three-dimensional image 53. Therefore, when determining that the intervening portion 66 exists, the control unit 41 of the image processing device 11 performs control to display an image 74 representing the mark 73 on the surface of the intervening portion 66 in the three-dimensional image 53, as shown in FIG. 12. Specifically, the control unit 41 changes the color of the voxels corresponding to the intervening portion 66 to a third color different from the first and second colors, such as an intermediate color between the first color, which is the color of the myocardial tissue, and the second color, which is the color of the mark 73.
 In the example of FIG. 12, the surface of the intervening portion 66 faces the camera 71 in the three-dimensional space, so the control unit 41 of the image processing device 11 applies, as the image 74 representing the mark 73, a texture to the surface of the intervening portion 66 that differs from the texture applied in the three-dimensional image 53 to the surface of the portion of the biological tissue 60 adjacent to the intervening portion 66. If the cross section of the intervening portion 66 faced the camera 71 in the three-dimensional space, the control unit 41 may apply, as the image representing the mark 73, a texture to the cross section of the intervening portion 66 that differs from the texture applied in the three-dimensional image 53 to the cross section of the portion of the biological tissue 60 adjacent to the intervening portion 66.
 In the example of FIG. 12, the mark 73 is inside the intervening portion 66, so the control unit 41 of the image processing device 11 changes the color of the voxels corresponding to the intervening portion 66. If the mark 73 were behind the intervening portion 66, the control unit 41 may increase the transparency of the intervening portion 66. That is, when determining that the intervening portion 66 exists, the control unit 41 may perform control to display an image representing the mark 73 by making the intervening portion 66 transparent in the three-dimensional image 53. In that case, the control unit 41 may arrange, as the image representing the mark 73, a three-dimensional image representing the mark 73 on the side of the intervening tissue opposite to the viewpoint in the three-dimensional image 53.
 The present disclosure is not limited to the embodiment described above. For example, two or more blocks shown in the block diagram may be combined, or one block may be divided. Instead of executing two or more steps described in the flowcharts in chronological order as described, the steps may be executed in parallel or in a different order, depending on the processing capability of the device executing each step or as necessary. Other modifications are possible without departing from the spirit of the present disclosure.
REFERENCE SIGNS LIST
10 image processing system
11 image processing device
12 cable
13 drive unit
14 keyboard
15 mouse
16 display
17 connection terminal
18 cart unit
20 probe
21 drive shaft
22 hub
23 sheath
24 outer tube
25 ultrasonic transducer
26 relay connector
31 scanner unit
32 slide unit
33 bottom cover
34 probe connection section
35 scanner motor
36 insertion port
37 probe clamp section
38 slide motor
39 switch group
41 control unit
42 storage unit
43 communication unit
44 input unit
45 output unit
51 tomographic data
52 three-dimensional data
53 three-dimensional image
60 biological tissue
61 lumen
62 opening
63 catheter
64 ridge
65 fossa ovalis
66 intervening portion
67 image
68 surface
69 cross section
71 camera
72 light source
73 mark
74 image
80 screen

Claims (13)

  1.  An image processing device that causes a display to display, as a three-dimensional image, three-dimensional data including first data representing a biological tissue and either second data representing an object located in a lumen of the biological tissue or third data representing a mark associated with the biological tissue, the image processing device comprising a control unit configured to: refer to the three-dimensional data to identify a positional relationship between the biological tissue and the object or the mark; determine, according to the identified positional relationship, whether an intervening tissue exists, the intervening tissue being a portion of the biological tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed; and perform, upon determining that the intervening tissue exists, control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
  2.  The image processing device according to claim 1, wherein, upon determining that the intervening tissue exists, the control unit performs control to display the image representing the object or the mark on a surface of the intervening tissue in the three-dimensional image.
  3.  The image processing device according to claim 2, wherein, as the image representing the object or the mark, the control unit applies, to the surface of the intervening tissue, a texture different from a texture applied in the three-dimensional image to a surface of a portion of the biological tissue adjacent to the intervening tissue.
  4.  The image processing device according to claim 1, wherein, upon determining that the intervening tissue exists, the control unit performs control to display the image representing the object or the mark on a cross section of the intervening tissue in the three-dimensional image.
  5.  The image processing device according to claim 4, wherein, as the image representing the object or the mark, the control unit applies, to the cross section of the intervening tissue, a texture different from a texture applied in the three-dimensional image to a cross section of a portion of the biological tissue adjacent to the intervening tissue.
  6.  The image processing device according to claim 1, wherein, upon determining that the intervening tissue exists, the control unit performs control to display the image representing the object or the mark by making the intervening tissue transparent in the three-dimensional image.
  7.  The image processing device according to claim 6, wherein, as the image representing the object or the mark, the control unit arranges a three-dimensional image representing the object or the mark on a side of the intervening tissue opposite to the viewpoint in the three-dimensional image.
  8.  The image processing device according to any one of claims 1 to 7, wherein the three-dimensional data includes, as the first data and the second data, data constructed based on data obtained by a sensor that is inserted into the lumen of the biological tissue and observes the biological tissue and the object, and the object is a catheter inserted into the lumen of the biological tissue.
  9.  The image processing device according to any one of claims 1 to 7, wherein the three-dimensional data includes, as the first data, data that is constructed based on data obtained by a sensor inserted into the lumen of the biological tissue to observe the biological tissue and that is updated each time new data is obtained by the sensor, and the control unit acquires designation data designating the mark, constructs the third data based on the acquired designation data, and includes the third data in the three-dimensional data.
  10.  An image processing system comprising: the image processing device according to any one of claims 1 to 9; and a sensor that observes the biological tissue and the object.
  11.  The image processing system according to claim 10, further comprising the display.
  12.  An image display method for displaying, on a display as a three-dimensional image, three-dimensional data including first data representing a biological tissue and either second data representing an object located in a lumen of the biological tissue or third data representing a mark associated with the biological tissue, the image display method comprising: identifying, by a computer, a positional relationship between the biological tissue and the object or the mark with reference to the three-dimensional data; determining, by the computer according to the identified positional relationship, whether an intervening tissue exists, the intervening tissue being a portion of the biological tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed; and performing, by the computer upon determining that the intervening tissue exists, control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
  13.  An image processing program for causing a computer, which causes a display to display as a three-dimensional image three-dimensional data including first data representing a biological tissue and either second data representing an object located in a lumen of the biological tissue or third data representing a mark associated with the biological tissue, to execute: a process of identifying a positional relationship between the biological tissue and the object or the mark with reference to the three-dimensional data; a process of determining, according to the identified positional relationship, whether an intervening tissue exists, the intervening tissue being a portion of the biological tissue interposed between the object or the mark and a viewpoint set when the three-dimensional image is displayed; and a process of performing, upon determining that the intervening tissue exists, control to display an image representing the object or the mark at a position corresponding to the intervening tissue in the three-dimensional image.
PCT/JP2022/009239 2021-03-26 2022-03-03 Image processing device, image processing system, image display method, and image processing program WO2022202200A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023508893A JPWO2022202200A1 (en) 2021-03-26 2022-03-03

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-054083 2021-03-26
JP2021054083 2021-03-26

Publications (1)

Publication Number Publication Date
WO2022202200A1 (en)

Family

ID=83397108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/009239 WO2022202200A1 (en) 2021-03-26 2022-03-03 Image processing device, image processing system, image display method, and image processing program

Country Status (2)

Country Link
JP (1) JPWO2022202200A1 (en)
WO (1) WO2022202200A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001299756A (en) * 2000-04-25 2001-10-30 Toshiba Corp Ultrasonograph capable of detecting localization of catheter or small diameter probe
JP2002522106A (en) * 1998-08-03 2002-07-23 カーディアック・パスウェイズ・コーポレーション Dynamically changeable 3D graphic model of the human body

Also Published As

Publication number Publication date
JPWO2022202200A1 (en) 2022-09-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22774996

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023508893

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22774996

Country of ref document: EP

Kind code of ref document: A1