WO2023176741A1 - Image processing device, image processing system, image display method, and image processing program


Info

Publication number
WO2023176741A1
Authority
WO
WIPO (PCT)
Prior art keywords
image processing
point
intersection
screen
processing device
Prior art date
Application number
PCT/JP2023/009449
Other languages
French (fr)
Japanese (ja)
Inventor
泰一 坂本
俊祐 吉澤
クレモン ジャケ
ステフェン チェン
トマ エン
亮介 佐賀
Original Assignee
テルモ株式会社
Priority date
Filing date
Publication date
Application filed by テルモ株式会社
Publication of WO2023176741A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters

Definitions

  • the present disclosure relates to an image processing device, an image processing system, an image display method, and an image processing program.
  • Patent Documents 1 to 3 describe techniques for generating three-dimensional images of heart chambers or blood vessels using a US imaging system.
  • US is an abbreviation for ultrasound.
  • Patent Document 4 discloses a method for calculating the distance between two points specified on a screen on which a three-dimensional image is displayed, in which the two-dimensional coordinates and the density of each of the two points are determined and the density difference is added to the distance between the two-dimensional coordinates.
  • IVUS is an abbreviation for intravascular ultrasound.
  • IVUS is a device or method that provides two-dimensional images in a plane perpendicular to the longitudinal axis of a catheter.
  • IVUS is often used for procedures that use a catheter separate from the IVUS catheter, such as ablation.
  • To create a path from the right atrium to the left atrium, a septal puncture method is used in which a septal puncture needle inserted into the right atrium punctures the fossa ovalis.
  • During the puncture, it is desirable to carefully confirm the puncture position, because there is a risk of complications such as perforation or cardiac tamponade.
  • An IVUS catheter, which can obtain 360-degree information, is well suited to confirming the puncture position within a single plane.
  • However, images are acquired only intermittently along the IVUS catheter axis, which makes it difficult to grasp the three-dimensional structure.
  • An object of the present disclosure is to improve the accuracy of distance calculation between two points specified on a screen on which a three-dimensional image is displayed.
  • An image processing device as one aspect of the present disclosure renders an object of biological tissue based on the positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space, and displays the object on a screen as a three-dimensional image of the biological tissue.
  • The image processing device includes a control unit that, in response to a position designation operation that designates two positions on the screen, identifies, on a plane corresponding to the screen in the three-dimensional space, a first corresponding point corresponding to one of the two designated positions and a second corresponding point corresponding to the other; that calculates the distance between a first intersection, which is the intersection of the object and an extension of the straight line connecting the viewpoint and the first corresponding point in the three-dimensional space, and a second intersection, which is the intersection of the object and an extension of the straight line connecting the viewpoint and the second corresponding point; and that outputs the obtained calculation result.
  • In one embodiment, the control unit outputs a numerical value representing the distance on the screen as the calculation result.
  • In one embodiment, the position designation operation includes, as a first operation, an operation of pressing a push button of an input device, and the control unit identifies, as the first corresponding point, the point on the plane that corresponds to the position of the pointer on the screen when the first operation is performed.
  • the first operation is an operation of pressing the push button while pressing a predetermined first key.
  • the position specifying operation includes, as a second operation, an operation of releasing the push button, which is performed following the first operation and a drag operation of moving the pointer while holding the push button;
  • the control unit specifies, as the second corresponding point, a point on the plane that corresponds to the position of the pointer when the second operation is performed.
  • the second operation is an operation of releasing the push button while pressing a predetermined second key.
  • In one embodiment, the control unit displays a mark at each of a first corresponding position on the screen, which corresponds to the intersection between the plane and a straight line connecting the viewpoint and the first intersection in the three-dimensional space, and a second corresponding position on the screen, which corresponds to the intersection between the plane and a straight line connecting the viewpoint and the second intersection in the three-dimensional space.
  • In one embodiment, in response to a range designation operation that designates a range on the screen, the control unit identifies a corresponding range on the plane that corresponds to the designated range, and changes the appearance of any mark displayed at a position corresponding to an intersection, among the first intersection and the second intersection, that exists within a three-dimensional area extending from the viewpoint in a conical shape through the outer edge of the corresponding range in the three-dimensional space.
  • In one embodiment, the control unit accepts an operation to delete, all at once, the marks whose appearance has been changed.
  • An image processing system as one aspect of the present disclosure includes the image processing device and a display that displays the screen.
  • An image display method as one aspect of the present disclosure renders an object of biological tissue based on the positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space, and displays the object on a screen as a three-dimensional image of the biological tissue.
  • In the image display method, in response to a position designation operation that designates two positions on the screen, a first corresponding point corresponding to one of the two designated positions and a second corresponding point corresponding to the other are identified on a plane corresponding to the screen in the three-dimensional space.
  • The distance between a first intersection, which is the intersection of the object and an extension of the straight line connecting the viewpoint and the first corresponding point in the three-dimensional space, and a second intersection, which is the intersection of the object and an extension of the straight line connecting the viewpoint and the second corresponding point in the three-dimensional space, is then calculated, and the obtained calculation result is output.
  • An image processing program as one aspect of the present disclosure causes a computer that renders an object of biological tissue based on the positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space, and that displays the object on a screen as a three-dimensional image of the biological tissue, to execute: a process of identifying, in response to a position designation operation that designates two positions on the screen, a first corresponding point corresponding to one of the two designated positions and a second corresponding point corresponding to the other on a plane corresponding to the screen in the three-dimensional space; a process of calculating the distance between the first intersection and the second intersection defined as above; and a process of outputting the obtained calculation result.
  • the accuracy of distance calculation between two points specified on a screen on which a three-dimensional image is displayed is improved.
  • FIG. 1 is a perspective view of an image processing system according to an embodiment of the present disclosure.
  • FIGS. 2 to 4 are diagrams illustrating examples of a screen displayed on the display by the image processing system according to the embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating an example of distance calculation performed by the image processing system according to the embodiment of the present disclosure.
  • FIGS. 6 to 8 are diagrams illustrating examples of a screen displayed on the display by the image processing system according to the embodiment of the present disclosure.
  • FIG. 9 is a diagram illustrating an example of region calculation performed by the image processing system according to the embodiment of the present disclosure.
  • FIGS. 10 to 12 are diagrams illustrating examples of a screen displayed on the display by the image processing system according to the embodiment of the present disclosure.
  • FIG. 13 is a diagram showing an example of a two-dimensional image displayed on the display by the image processing system according to the embodiment of the present disclosure.
  • FIG. 14 is a diagram illustrating an example of a cutting area formed by the image processing system according to the embodiment of the present disclosure.
  • FIG. 15 is a block diagram showing the configuration of an image processing device according to the embodiment of the present disclosure.
  • FIG. 16 is a perspective view of a probe and a drive unit according to the embodiment of the present disclosure.
  • FIGS. 17 and 18 and the subsequent figures are flowcharts showing the operation of the image processing system according to the embodiment of the present disclosure.
  • The image processing device 11 is a computer that displays three-dimensional data 52 representing a biological tissue 60 as a three-dimensional image 53 on a display 16. As shown in FIGS. 2 to 5, the image processing device 11 renders the object 54 of the biological tissue 60 based on the positional relationship between the viewpoint V0 set in a virtual three-dimensional space and the object 54 arranged in the three-dimensional space, and displays the object 54 on the screen 80 as the three-dimensional image 53 of the biological tissue 60. The viewpoint V0 corresponds to the position of a virtual camera 71 arranged in the three-dimensional space.
  • The image processing device 11 specifies a first corresponding point Q1 and a second corresponding point Q2 on the plane 55 corresponding to the screen 80 in the three-dimensional space in response to the position designation operation.
  • the position designation operation is an operation to designate two positions on the screen 80.
  • the first corresponding point Q1 is a point on the plane 55 that corresponds to one of the two positions designated by the position designation operation.
  • the second corresponding point Q2 is a point on the plane 55 that corresponds to the other of the two positions designated by the position designation operation.
  • The image processing device 11 calculates the distance between a first intersection R1 and a second intersection R2.
  • the first intersection R1 is the intersection of the object 54 and an extension of the straight line connecting the viewpoint V0 and the first corresponding point Q1 in the three-dimensional space.
  • the second intersection R2 is the intersection of the object 54 and an extension of the straight line connecting the viewpoint V0 and the second corresponding point Q2 in the three-dimensional space.
  • the image processing device 11 outputs the obtained calculation results. Specifically, the image processing device 11 outputs a numerical value representing the distance on the screen 80 as the calculation result. Alternatively, the image processing device 11 may output the calculation result in another format such as audio.
  • FIG. 4 shows an example in which the numerical value "10 mm" is displayed on the screen 80 as the calculation result.
  • According to this embodiment, the accuracy of distance calculation between two points specified on the screen 80 on which the three-dimensional image 53 is displayed is improved. Even if the position of the viewpoint V0 with respect to the object 54 is changed and the way the three-dimensional image 53 is displayed changes, the distance between the two specified points is determined by the positions of the corresponding intersections on the object 54 rather than by the display density, so accurate distance calculation is possible.
  • That is, the distance between the two designated points is not the distance between coordinates on the plane 55 corresponding to the screen 80, but the three-dimensional distance between the first intersection R1 and the second intersection R2 on the object 54.
  • the position designation operation includes an operation of pressing a push button on the input device as a first operation.
  • the first operation is an operation of pressing a button on the mouse 15 as a push button of the input device, but it may also be an operation of pressing a specific key of the keyboard 14 as a push button of the input device.
  • the image processing device 11 identifies a point on the plane 55 that corresponds to the position of the pointer 86 on the screen 80 when the first operation was performed as the first corresponding point Q1.
  • FIG. 2 shows an example in which the pointer 86 is located at a position corresponding to the upper end of the fossa ovalis 66 represented in the three-dimensional image 53 when the first operation is performed.
  • the pointer 86 has an arrow shape in this example, it may have another shape such as a cross shape.
  • the shape of the pointer 86 may be changed so that the user can easily recognize the operation mode. Then, when the user switches the operation mode from the position specifying operation mode to another mode, the shape of the pointer 86 may be changed again.
  • the image processing device 11 displays the mark 87 at the first corresponding position on the screen 80.
  • The first corresponding position is a position on the screen 80 that corresponds to the intersection of the plane 55 and a straight line connecting the viewpoint V0 and the first intersection R1 in the three-dimensional space. After the first operation is performed, the first corresponding position remains the same as the position specified by the first operation until the viewpoint V0 is moved. When the viewpoint V0 is moved, however, the first corresponding position also moves away from the position specified by the first operation, as sketched below.
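  • The projection just described can be pictured with a small numerical sketch. The Python snippet below is a minimal illustration under simplifying assumptions, not the patent's implementation: the plane 55 is taken to be the plane z = plane_z facing the viewpoint, and the names (project_to_plane, V0, R1) and coordinate values are illustrative.

```python
import numpy as np

def project_to_plane(viewpoint, point, plane_z):
    """Intersect the line through `viewpoint` and `point` with the plane z = plane_z."""
    v = np.asarray(viewpoint, dtype=float)
    p = np.asarray(point, dtype=float)
    direction = p - v
    t = (plane_z - v[2]) / direction[2]   # parameter at which the line reaches the plane
    return v + t * direction              # coordinates of the corresponding position on the plane

# Illustrative values: viewpoint V0, first intersection R1 on the object,
# and the depth of the plane 55 that corresponds to the screen 80.
V0 = np.array([0.0, 0.0, 100.0])
R1 = np.array([5.0, -3.0, 20.0])
Q1_new = project_to_plane(V0, R1, plane_z=80.0)
print(Q1_new)   # position on the plane 55; moving V0 and recomputing moves the mark 87
```

  • Re-running the same projection after the viewpoint V0 moves yields the new first corresponding position at which the mark 87 is redrawn.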
  • FIG. 3 shows an example in which the mark 87 is displayed at a position corresponding to the upper end of the fossa ovalis 66 as the first corresponding position.
  • the first operation may be an operation of pressing a push button while pressing a predetermined first key.
  • the first key is, for example, the Ctrl key or the Shift key on the keyboard 14.
  • the position specifying operation includes, as a second operation, an operation of releasing the push button, which is performed following the first operation and a drag operation of moving the pointer 86 while holding down the push button.
  • the image processing device 11 identifies a point on the plane 55 that corresponds to the position of the pointer 86 when the second operation is performed as a second corresponding point Q2.
  • FIG. 3 shows an example in which the pointer 86 is at a position corresponding to the lower end of the fossa ovalis 66 when the second operation is performed.
  • the image processing device 11 displays the mark 88 at the second corresponding position on the screen 80.
  • The second corresponding position is a position on the screen 80 that corresponds to the intersection of the plane 55 and a straight line connecting the viewpoint V0 and the second intersection R2 in the three-dimensional space. After the second operation is performed, the second corresponding position remains the same as the position specified by the second operation until the viewpoint V0 is moved. When the viewpoint V0 is moved, however, the second corresponding position also moves away from the position specified by the second operation.
  • FIG. 4 shows an example in which the mark 88 is displayed at a position corresponding to the lower end of the fossa ovalis 66 as the second corresponding position.
  • the second operation may be an operation of releasing a push button while pressing a predetermined second key.
  • the second key is, for example, the Ctrl key or the Shift key on the keyboard 14.
  • the second key may be the same key as the first key, or may be a different key from the first key.
  • That is, the user can specify the starting point position by moving the pointer 86 to a desired position with the mouse 15 and pressing the mouse button, and can specify the end point position by dragging the pointer 86 to another desired position while holding down the button and then releasing the button.
  • the image processing device 11 specifies the three-dimensional coordinates (xq1, yq1, dq) corresponding to the specified starting point position as the coordinates of the first corresponding point Q1.
  • the image processing device 11 specifies the three-dimensional coordinates (xq2, yq2, dq) corresponding to the designated end point position as the coordinates of the second corresponding point Q2.
  • The image processing device 11 specifies the three-dimensional coordinates (xr1, yr1, dr1) at which a straight line passing through the coordinates (xv, yv, dv) of the viewpoint V0 and the coordinates (xq1, yq1, dq) of the first corresponding point Q1 reaches the object 54 as the coordinates of the first intersection R1.
  • The image processing device 11 specifies the three-dimensional coordinates (xr2, yr2, dr2) at which a straight line passing through the coordinates (xv, yv, dv) of the viewpoint V0 and the coordinates (xq2, yq2, dq) of the second corresponding point Q2 reaches the object 54 as the coordinates of the second intersection R2.
  • The image processing device 11 calculates the Euclidean distance between the two intersections, sqrt((xr2 - xr1)^2 + (yr2 - yr1)^2 + (dr2 - dr1)^2).
  • the image processing device 11 outputs a numerical value representing the calculated Euclidean distance on the screen 80. Therefore, the user can easily and accurately measure any distance in the three-dimensional image 53, such as the length of the fossa ovalis 66.
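  • The per-point procedure described above can be condensed into a short sketch. The code below is a simplified illustration under stated assumptions rather than the patent's implementation: the object 54 is approximated by a triangle mesh, the ray from the viewpoint V0 through each corresponding point is intersected with the mesh using the Moller-Trumbore test, and the Euclidean distance between the two resulting intersections is printed; all names and values are illustrative.

```python
import numpy as np

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore test: distance t along the ray to triangle `tri`, or None if missed."""
    v0, v1, v2 = (np.asarray(v, float) for v in tri)
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return None                     # ray parallel to the triangle plane
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def first_hit(viewpoint, corresponding_point, triangles):
    """Nearest intersection of the ray from V0 through the corresponding point with the mesh."""
    origin = np.asarray(viewpoint, float)
    direction = np.asarray(corresponding_point, float) - origin
    hits = [t for tri in triangles if (t := ray_triangle(origin, direction, tri)) is not None]
    return origin + min(hits) * direction if hits else None

# Toy mesh: a single triangle standing in for the object 54.
mesh = [[(-10, -10, 0), (10, -10, 0), (0, 15, 0)]]
V0 = np.array([0.0, 0.0, 100.0])
Q1 = np.array([0.5, 0.5, 80.0])     # first corresponding point on the plane 55
Q2 = np.array([-0.5, -1.0, 80.0])   # second corresponding point on the plane 55
R1, R2 = first_hit(V0, Q1, mesh), first_hit(V0, Q2, mesh)
print(np.linalg.norm(R2 - R1))      # Euclidean distance shown on the screen 80
```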
  • the image processing device 11 further specifies a corresponding range 56 on the plane 55 corresponding to the screen 80 in the three-dimensional space in response to the range specification operation, as shown in FIGS. 6 to 9.
  • the range specification operation is an operation for specifying a range 89 on the screen 80.
  • the corresponding range 56 is a range on the plane 55 that corresponds to the designated range 89.
  • The image processing device 11 changes the appearance, such as the color or shape, of any mark displayed on the screen 80 at a position corresponding to an intersection, among the first intersection R1 and the second intersection R2, that exists within the three-dimensional area 57 shown in FIG. 9.
  • the three-dimensional area 57 is an area that extends in a conical shape from the viewpoint V0 through the outer edge of the corresponding range 56 in the three-dimensional space.
  • FIG. 9 shows an example in which both the first intersection point R1 and the second intersection point R2 exist within the three-dimensional area 57.
  • FIG. 8 shows an example in which the colors of marks 87 and 88 displayed on the screen 80 at positions corresponding to the first intersection point R1 and the second intersection point R2 are changed.
  • the image processing device 11 accepts batch operations on marks existing within the specified range 89, such as an operation to collectively delete marks whose appearance has been changed. Therefore, the user can perform efficient operations such as selecting arbitrary marks on the screen 80 and erasing them all at once.
  • the range specification operation includes an operation of pressing a push button on the input device as a third operation.
  • the third operation is an operation of pressing a button on the mouse 15 as a push button of the input device, but it may also be an operation of pressing a specific key of the keyboard 14 as a push button of the input device.
  • the image processing device 11 identifies a point on the plane 55 that corresponds to the position of the pointer 86 on the screen 80 when the third operation was performed.
  • FIG. 6 shows an example in which the pointer 86 is at a position corresponding to a point away from the fossa ovalis 66 to the upper left when the third operation is performed.
  • the pointer 86 has an arrow shape in this example, it may have another shape such as a cross shape.
  • the shape of the pointer 86 may be changed so that the user can easily recognize the operation mode. Then, when the user switches the operation mode from the range specification operation mode to another mode, the shape of the pointer 86 may be changed again.
  • the third operation may be an operation of pressing a push button while pressing a predetermined third key.
  • the third key is, for example, the Ctrl key or the Shift key on the keyboard 14.
  • the third key may be the same key as the first key, or may be a different key from the first key.
  • the third key may be the same key as the second key, or may be a different key from the second key.
  • the range specifying operation includes, as a fourth operation, an operation of releasing the push button, which is performed following the third operation and the drag operation of moving the pointer 86 while holding down the push button.
  • the image processing device 11 identifies a point on the plane 55 that corresponds to the position of the pointer 86 when the fourth operation was performed.
  • FIG. 7 shows an example in which the pointer 86 is located at a position corresponding to a point farther to the lower right from the fossa ovalis 66 when the fourth operation is performed.
  • the fourth operation may be an operation of releasing a push button while pressing a predetermined fourth key.
  • the fourth key is, for example, the Ctrl key or the Shift key on the keyboard 14.
  • the fourth key may be the same key as the first key, or may be a different key from the first key.
  • the fourth key may be the same key as the second key, or may be a different key from the second key.
  • the fourth key may be the same key as the third key, or may be a different key from the third key.
  • The image processing device 11 specifies, as the corresponding range 56, a rectangular range whose diagonal vertices are the point on the plane 55 corresponding to the position of the pointer 86 on the screen 80 when the third operation is performed and the point on the plane 55 corresponding to the position of the pointer 86 when the fourth operation is performed.
  • FIG. 8 shows an example in which a rectangular range having diagonal vertices at a position corresponding to a point away to the upper left from the fossa ovalis 66 and a position corresponding to a point away to the lower right from the fossa ovalis 66 is specified as the range 89.
  • A circular range may be specified as the range 89 instead of a rectangular range. That is, the image processing device 11 may specify, as the corresponding range 56, a circular range whose center point is the point on the plane 55 corresponding to the position of the pointer 86 on the screen 80 when the third operation is performed and whose circumference passes through the point on the plane 55 corresponding to the position of the pointer 86 when the fourth operation is performed. A sketch of both range constructions follows.
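  • The sketch below is a minimal illustration of the two range constructions; it assumes the points where the third and fourth operations are performed are already available as two-dimensional coordinates on the plane 55, and the function names are illustrative.

```python
import numpy as np

def rectangular_range(p_start, p_end):
    """Axis-aligned rectangle on the plane 55 with the two points as diagonal vertices."""
    lo = np.minimum(p_start, p_end)
    hi = np.maximum(p_start, p_end)
    return lo, hi                      # opposite corners of the corresponding range 56

def circular_range(p_center, p_rim):
    """Circle on the plane 55 centered at the first point and passing through the second."""
    radius = np.linalg.norm(np.asarray(p_rim, float) - np.asarray(p_center, float))
    return np.asarray(p_center, float), radius

# Points where the third and fourth operations were performed (illustrative coordinates).
press, release = np.array([10.0, 40.0]), np.array([60.0, 5.0])
print(rectangular_range(press, release))
print(circular_range(press, release))
```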
  • FIG. 10 shows an example in which the pointer 86 is located at a position corresponding to the center point of the fossa ovalis 66 when the third operation is performed.
  • FIG. 11 shows an example in which the pointer 86 is at a position corresponding to a point downwardly away from the fossa ovalis 66 when the fourth operation is performed.
  • a circular range is designated as a range 89, with a center point at a position corresponding to the center point of the fossa ovalis 66 and a circumferential point at a position corresponding to a point downwardly away from the fossa ovalis 66.
  • FIG. 12 shows an example in which the colors of the marks 87 and 88 displayed on the screen 80 at positions corresponding to the first intersection point R1 and the second intersection point R2 are changed.
  • That is, the user can designate the rectangular range as the range 89 by moving the pointer 86 to a desired position with the mouse 15, pressing the mouse button to specify one vertex position, dragging the pointer 86 to another desired position while holding down the button, and then releasing the button.
  • Alternatively, the user can designate the circular range as the range 89 by moving the pointer 86 to a desired position with the mouse 15, pressing the mouse button to specify the center point position, dragging the pointer 86 to another desired position while holding down the button, and then releasing the button.
  • the image processing device 11 specifies a two-dimensional range corresponding to the designated range 89 as the corresponding range 56, as shown in FIG.
  • the image processing device 11 specifies a cone-shaped area extending through the coordinates (xv, yv, dv) of the viewpoint V0 and the outer edge of the corresponding range 56 as a three-dimensional area 57.
  • the image processing device 11 determines that the coordinates (xr1, yr1, dr1) of the first intersection R1 and the coordinates (xr2, yr2, dr2) of the second intersection R2 are within the three-dimensional area 57.
  • the image processing device 11 changes the colors of the marks 87 and 88 displayed on the screen 80 at positions corresponding to the first intersection R1 and the second intersection R2, respectively. Therefore, the user can easily select the marks 87 and 88.
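  • The membership test behind this selection can be sketched as follows. The snippet assumes a circular corresponding range 56, so that the three-dimensional area 57 is a circular cone whose apex is the viewpoint V0; a rectangular range would give a pyramidal area instead, and the test would then compare against the four bounding planes rather than a single half-angle. Names and coordinates are illustrative.

```python
import numpy as np

def inside_cone(apex, axis_point, half_angle_rad, point):
    """True if `point` lies inside the cone with apex `apex` and axis toward `axis_point`."""
    axis = np.asarray(axis_point, float) - np.asarray(apex, float)
    axis /= np.linalg.norm(axis)
    to_p = np.asarray(point, float) - np.asarray(apex, float)
    norm = np.linalg.norm(to_p)
    if norm == 0.0:
        return True
    cos_angle = np.dot(to_p, axis) / norm
    return cos_angle >= np.cos(half_angle_rad)

# Cone from the viewpoint V0 through the outer edge of a circular corresponding range 56.
V0 = np.array([0.0, 0.0, 100.0])
range_center = np.array([0.0, 0.0, 80.0])   # center of the corresponding range on the plane 55
range_radius = 5.0
half_angle = np.arctan(range_radius / np.linalg.norm(range_center - V0))

marks = {"mark_87": np.array([2.0, 1.0, 10.0]), "mark_88": np.array([40.0, 0.0, 10.0])}
selected = [name for name, r in marks.items()
            if inside_cone(V0, range_center, half_angle, r)]
print(selected)   # marks whose appearance would be changed (and could be deleted at once)
```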
  • the image processing device 11 forms a cutting region 62 in the three-dimensional data 52 that exposes the inner cavity 63 of the biological tissue 60 in the three-dimensional image 53.
  • the image processing device 11 adjusts the viewpoint V0 when displaying the three-dimensional image 53 on the display 16 according to the position of the cutting area 62.
  • When the image processing device 11 receives a user operation requesting rotation of the viewpoint V0, it changes the position of the cutting area 62 from a first position, which is the position at the time the user operation was performed, to a second position rotated around a rotation axis that passes through a reference point located in the lumen 63 on a reference plane extending horizontally in the three-dimensional image 53 and including the viewpoint V0, and that extends in the direction perpendicular to the reference plane.
  • the horizontal direction refers to the XY directions shown in FIG.
  • the direction perpendicular to the reference plane is the Z direction shown in FIG.
  • The reference point may be any point located within the lumen 63 on the reference plane, such as the center point of the IVUS catheter; in this embodiment, it is the center of gravity of the lumen 63 on the reference plane, and the corresponding center of gravity B1 serves as the reference point.
  • the image processing device 11 rotates the viewpoint V0 around the rotation axis according to the second position.
  • the usefulness for confirming the position within the living tissue 60 is improved.
  • For example, in a procedure such as ablation using IVUS, a user such as a doctor who performs the procedure while operating a catheter, or a clinical engineer who operates the IVUS system while looking at the display 16, can rotate the viewpoint V0 around the rotation axis.
  • the image processing device 11 identifies a cross section 64 of the living tissue 60 and a region 65 corresponding to the cutting region 62 in the cross section 64.
  • the two-dimensional image 58 and the three-dimensional image 53 are displayed on the display 16.
  • the position of the camera 71 with respect to the cross section 64 is displayed.
  • the user can understand from the two-dimensional image 58 what kind of structure the portion of the biological tissue 60 that is cut out and not displayed in the three-dimensional image 53 has. For example, if the user is a surgeon, it becomes easier to perform surgery on the inside of the living tissue 60.
  • the biological tissue 60 includes, for example, blood vessels or organs such as the heart.
  • the biological tissue 60 is not limited to anatomically a single organ or a part thereof, but also includes a tissue having a lumen spanning multiple organs.
  • An example of such a tissue is, specifically, a part of the vascular tissue that extends from the upper part of the inferior vena cava, passes through the right atrium, and reaches the lower part of the superior vena cava.
  • In FIGS. 2 to 4, 6 to 8, and 10 to 12, an operation panel 81, a two-dimensional image 58, a three-dimensional image 53, and a pointer 86 are displayed on the screen 80.
  • the operation panel 81 is a GUI component for setting the cutting area 62. “GUI” is an abbreviation for graphical user interface.
  • The operation panel 81 includes a check box 82 for selecting whether to activate the setting of the cutting area 62, a slider 83 for setting the base angle, a slider 84 for setting the opening angle, and a check box 85 for selecting whether or not to use the center of gravity.
  • the base angle is the rotation angle of one of the two straight lines L1 and L2 extending from one point M in the cross-sectional image representing the cross-section 64 of the biological tissue 60. Therefore, setting the base angle corresponds to setting the direction of the straight line L1.
  • the opening angle is the angle between the two straight lines L1 and L2. Therefore, setting the opening angle corresponds to setting the angle formed by the two straight lines L1 and L2.
  • Point M is the center of gravity of cross section 64. Point M may be set at a point other than the center of gravity on the cross section 64 if it is selected not to use the center of gravity.
  • the two-dimensional image 58 is an image obtained by processing a cross-sectional image.
  • the color of an area 65 corresponding to the cutting area 62 is changed to clearly indicate which part of the cross section 64 has been cut.
  • the viewpoint V0 when displaying the three-dimensional image 53 on the screen 80 is adjusted according to the position of the cutting region 62.
  • The cutting area 62 can be determined using the two-dimensional image 58. Specifically, as shown in FIG. 13, the position or size of the cutting area 62 can be set by adjusting the base angle or the opening angle and thereby setting the position or size of the area 65 separated by the two straight lines L1 and L2 in the two-dimensional image 58. For example, if the base angle is changed so that the straight line L1 is rotated approximately 90 degrees counterclockwise, a region 65a that is moved in accordance with the change in the base angle is obtained in the two-dimensional image 58a. Then, the position of the cutting area 62 is adjusted according to the position of the area 65a.
  • If the opening angle is changed so that the angle between the two straight lines L1 and L2 becomes larger, a region 65b that is enlarged according to the change in the opening angle is obtained in the two-dimensional image 58b.
  • Then, the size of the cutting area 62 is adjusted according to the size of the area 65b.
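  • The relationship between the base angle, the opening angle, and the area 65 can be sketched as a simple mask computation over the cross-sectional image; the snippet below is an illustration only, with the angle convention and all names chosen for the example.

```python
import numpy as np

def angular_region_mask(shape, center, base_angle_deg, opening_angle_deg):
    """Boolean mask of pixels whose direction from `center` (x, y) lies between L1 and L2."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    angles = np.degrees(np.arctan2(ys - center[1], xs - center[0])) % 360.0
    start = base_angle_deg % 360.0
    span = (angles - start) % 360.0          # angle measured from L1 toward L2
    return span <= opening_angle_deg

# 200 x 200 cross-sectional image, point M at the image center,
# base angle 30 degrees and opening angle 90 degrees (as set on the operation panel 81).
mask = angular_region_mask((200, 200), center=(100, 100),
                           base_angle_deg=30.0, opening_angle_deg=90.0)
print(mask.sum(), "pixels belong to the region 65")
```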
  • the position of the camera 71 may be adjusted as appropriate depending on the position or size of the cutting area 62.
  • The image corresponding to the current position of the sensor, that is, the latest image, is always displayed as the two-dimensional image 58.
  • The base angle may be set by dragging the straight line L1 or by inputting a numerical value, instead of by operating the slider 83.
  • the opening angle may be set by dragging the straight line L2 or by inputting a numerical value.
  • the cutting area 62 determined using the two-dimensional image 58 is hidden or transparent.
  • the X direction and the Y direction perpendicular to the X direction each correspond to the lateral direction of the lumen 63 of the living tissue 60.
  • the Z direction orthogonal to the X direction and the Y direction corresponds to the longitudinal direction of the lumen 63 of the living tissue 60.
  • the image processing device 11 uses the three-dimensional data 52 to calculate the positions of the centers of gravity B1, B2, B3, and B4 of the cross sections C1, C2, C3, and C4 of the living tissue 60, respectively.
  • the image processing device 11 sets two planes that intersect at a line Lb passing through the positions of the centers of gravity B1, B2, B3, and B4 and include two straight lines L1 and L2, respectively, as cutting planes P1 and P2. For example, if point M shown in FIGS.
  • The image processing device 11 forms, in the three-dimensional data 52, the region that lies between the cutting planes P1 and P2 in the three-dimensional image 53 and that exposes the lumen 63 of the living tissue 60, as the cutting region 62.
  • Cross sections C1, C2, C3, and C4 are shown as multiple cross sections in the transverse direction of the lumen 63 of the living tissue 60, but the number of cross sections for which the center of gravity position is calculated is not limited to four and is preferably the same as the number of cross-sectional images obtained by IVUS.
  • If the check box 85 on the operation panel 81 is not checked, that is, if it is selected not to use the center of gravity, the image processing device 11 sets, as the cutting planes P1 and P2, two planes that intersect at an arbitrary line passing through point M, such as a straight line extending in the Z direction through point M, and that include the two straight lines L1 and L2, respectively. The procedure based on the centers of gravity is sketched below.
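  • A minimal sketch of the centroid-based construction follows. It simplifies the geometry by assuming that the line Lb through the centers of gravity is parallel to the Z direction, so that each cutting plane can be written in point-normal form from one centroid and the in-slice direction of the corresponding straight line; the function names and the toy lumen mask are illustrative.

```python
import numpy as np

def lumen_centroids(volume_mask):
    """Center of gravity (x, y, z) of the lumen mask in each transverse slice of the volume."""
    centroids = []
    for z, slice_mask in enumerate(volume_mask):
        ys, xs = np.nonzero(slice_mask)
        centroids.append((xs.mean(), ys.mean(), z))
    return np.array(centroids)

def cutting_plane(centroid, angle_deg):
    """Plane through `centroid` containing the Z direction and the in-slice direction at `angle_deg`."""
    direction = np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg)), 0.0])
    normal = np.cross(direction, np.array([0.0, 0.0, 1.0]))   # perpendicular to both spanning vectors
    return centroid, normal                                    # point-normal form of the plane

# Toy volume: 4 slices of 64 x 64 with a square region standing in for the lumen 63.
volume = np.zeros((4, 64, 64), dtype=bool)
volume[:, 20:40, 25:45] = True
centers = lumen_centroids(volume)                 # e.g. B1 to B4 in the patent's notation
p1 = cutting_plane(centers[0], angle_deg=30.0)    # plane containing straight line L1
p2 = cutting_plane(centers[0], angle_deg=120.0)   # plane containing straight line L2
print(centers)
print(p1, p2)
```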
  • the image processing system 10 includes an image processing device 11, a cable 12, a drive unit 13, a keyboard 14, a mouse 15, and a display 16.
  • Although the image processing device 11 is a dedicated computer specialized for image diagnosis in this embodiment, it may be a general-purpose computer such as a PC. "PC" is an abbreviation for personal computer.
  • the cable 12 is used to connect the image processing device 11 and the drive unit 13.
  • the drive unit 13 is a device that is used by being connected to the probe 20 shown in FIG. 16 and drives the probe 20.
  • Drive unit 13 is also called MDU.
  • MDU is an abbreviation for motor drive unit.
  • the probe 20 is applied to IVUS.
  • the probe 20 is also called an IVUS catheter or an imaging catheter.
  • the keyboard 14, mouse 15, and display 16 are connected to the image processing device 11 via any cable or wirelessly.
  • the display 16 is, for example, an LCD, an organic EL display, or an HMD.
  • LCD is an abbreviation for liquid crystal display.
  • EL is an abbreviation for electro luminescence.
  • HMD is an abbreviation for head-mounted display.
  • the image processing system 10 further includes a connection terminal 17 and a cart unit 18 as options.
  • connection terminal 17 is used to connect the image processing device 11 and external equipment.
  • the connection terminal 17 is, for example, a USB terminal.
  • USB is an abbreviation for Universal Serial Bus.
  • the external device is, for example, a recording medium such as a magnetic disk drive, a magneto-optical disk drive, or an optical disk drive.
  • the cart unit 18 is a cart with casters for movement.
  • An image processing device 11, a cable 12, and a drive unit 13 are installed in the cart body of the cart unit 18.
  • a keyboard 14, a mouse 15, and a display 16 are installed on the top table of the cart unit 18.
  • the probe 20 includes a drive shaft 21, a hub 22, a sheath 23, an outer tube 24, an ultrasonic transducer 25, and a relay connector 26.
  • the drive shaft 21 passes through a sheath 23 inserted into the body cavity of a living body and an outer tube 24 connected to the proximal end of the sheath 23, and extends to the inside of the hub 22 provided at the proximal end of the probe 20.
  • the drive shaft 21 has an ultrasonic transducer 25 at its tip that transmits and receives signals, and is rotatably provided within the sheath 23 and the outer tube 24 .
  • Relay connector 26 connects sheath 23 and outer tube 24.
  • the hub 22, the drive shaft 21, and the ultrasonic transducer 25 are connected to each other so that they each move forward and backward in the axial direction. Therefore, for example, when the hub 22 is pushed toward the distal end, the drive shaft 21 and the ultrasonic transducer 25 move inside the sheath 23 toward the distal end. For example, when the hub 22 is pulled toward the proximal end, the drive shaft 21 and the ultrasonic transducer 25 move inside the sheath 23 toward the proximal end, as shown by the arrows.
  • the drive unit 13 includes a scanner unit 31, a slide unit 32, and a bottom cover 33.
  • the scanner unit 31 is also called a pullback unit. Scanner unit 31 is connected to image processing device 11 via cable 12 .
  • the scanner unit 31 includes a probe connection section 34 that connects to the probe 20 and a scanner motor 35 that is a drive source that rotates the drive shaft 21 .
  • the probe connecting portion 34 is detachably connected to the probe 20 via the insertion port 36 of the hub 22 provided at the base end of the probe 20. Inside the hub 22, the base end of the drive shaft 21 is rotatably supported, and the rotational force of the scanner motor 35 is transmitted to the drive shaft 21. Further, signals are transmitted and received between the drive shaft 21 and the image processing device 11 via the cable 12.
  • the image processing device 11 generates a tomographic image of a living body lumen and performs image processing based on signals transmitted from the drive shaft 21 .
  • the slide unit 32 carries the scanner unit 31 so that it can move forward and backward, and is mechanically and electrically connected to the scanner unit 31.
  • the slide unit 32 includes a probe clamp section 37, a slide motor 38, and a switch group 39.
  • the probe clamp section 37 is disposed coaxially with the probe connection section 34 on the distal side thereof, and supports the probe 20 connected to the probe connection section 34 .
  • the slide motor 38 is a drive source that generates axial driving force.
  • the scanner unit 31 is moved forward and backward by the drive of the slide motor 38, and the drive shaft 21 is accordingly moved forward and backward in the axial direction.
  • the slide motor 38 is, for example, a servo motor.
  • the switch group 39 includes, for example, a forward switch and a pullback switch that are pressed when moving the scanner unit 31 forward or backward, and a scan switch that is pressed when starting and ending image depiction.
  • the switch group 39 is not limited to this example, and various switches may be included in the switch group 39 as necessary.
  • the bottom cover 33 covers the bottom surface of the slide unit 32 and the entire circumference of the side surface on the bottom side, and is movable toward and away from the bottom surface of the slide unit 32.
  • the configuration of the image processing device 11 will be described with reference to FIG. 15.
  • the image processing device 11 includes a control section 41, a storage section 42, a communication section 43, an input section 44, and an output section 45.
  • the control unit 41 includes at least one processor, at least one programmable circuit, at least one dedicated circuit, or any combination thereof.
  • the processor is a general-purpose processor such as a CPU or GPU, or a dedicated processor specialized for specific processing.
  • CPU is an abbreviation for central processing unit.
  • GPU is an abbreviation for graphics processing unit.
  • the programmable circuit is, for example, an FPGA.
  • FPGA is an abbreviation for field-programmable gate array.
  • the dedicated circuit is, for example, an ASIC.
  • ASIC is an abbreviation for application specific integrated circuit.
  • the control unit 41 executes processing related to the operation of the image processing device 11 while controlling each part of the image processing system 10 including the image processing device 11.
  • the storage unit 42 includes at least one semiconductor memory, at least one magnetic memory, at least one optical memory, or any combination thereof.
  • the semiconductor memory is, for example, RAM or ROM.
  • RAM is an abbreviation for random access memory.
  • ROM is an abbreviation for read only memory.
  • the RAM is, for example, SRAM or DRAM.
  • SRAM is an abbreviation for static random access memory.
  • DRAM is an abbreviation for dynamic random access memory.
  • the ROM is, for example, an EEPROM.
  • EEPROM is an abbreviation for electrically erasable programmable read only memory.
  • the storage unit 42 functions as, for example, a main storage device, an auxiliary storage device, or a cache memory.
  • The storage unit 42 stores data used for the operation of the image processing device 11, such as the tomographic data 51, and data obtained by the operation of the image processing device 11, such as the three-dimensional data 52 and the three-dimensional images 53.
  • the communication unit 43 includes at least one communication interface.
  • the communication interface is, for example, a wired LAN interface, a wireless LAN interface, or an image diagnosis interface that receives and A/D converts IVUS signals.
  • LAN is an abbreviation for local area network.
  • A/D is an abbreviation for analog to digital.
  • the communication unit 43 receives data used for the operation of the image processing device 11 and transmits data obtained by the operation of the image processing device 11.
  • the drive unit 13 is connected to an image diagnosis interface included in the communication section 43.
  • the input unit 44 includes at least one input interface.
  • the input interface is, for example, a USB interface, an HDMI (registered trademark) interface, or an interface compatible with a short-range wireless communication standard such as Bluetooth (registered trademark).
  • HDMI (registered trademark) is an abbreviation for High-Definition Multimedia Interface.
  • the input unit 44 accepts user operations such as inputting data used for the operation of the image processing device 11 .
  • the keyboard 14 and mouse 15 are connected to a USB interface included in the input unit 44 or an interface compatible with near field communication. If the touch screen is provided integrally with the display 16, the display 16 may be connected to a USB interface or an HDMI (registered trademark) interface included in the input section 44.
  • the output unit 45 includes at least one output interface.
  • the output interface is, for example, a USB interface, an HDMI (registered trademark) interface, or an interface compatible with a short-range wireless communication standard such as Bluetooth (registered trademark).
  • the output unit 45 outputs data obtained by the operation of the image processing device 11.
  • the display 16 is connected to a USB interface or an HDMI (registered trademark) interface included in the output unit 45.
  • the functions of the image processing device 11 are realized by executing the image processing program according to the present embodiment by a processor serving as the control unit 41. That is, the functions of the image processing device 11 are realized by software.
  • the image processing program causes the computer to function as the image processing apparatus 11 by causing the computer to execute the operations of the image processing apparatus 11 . That is, the computer functions as the image processing device 11 by executing the operations of the image processing device 11 according to the image processing program.
  • the program may be stored on a non-transitory computer-readable medium.
  • the non-transitory computer-readable medium is, for example, a flash memory, a magnetic recording device, an optical disk, a magneto-optical recording medium, or a ROM.
  • Distribution of the program is performed, for example, by selling, transferring, or lending a portable medium such as an SD card, DVD, or CD-ROM that stores the program.
  • SD is an abbreviation for Secure Digital.
  • DVD is an abbreviation for digital versatile disc.
  • CD-ROM is an abbreviation for compact disc read only memory.
  • the program may be distributed by storing the program in the storage of a server and transferring the program from the server to another computer.
  • the program may be provided as a program product.
  • a computer temporarily stores a program stored on a portable medium or a program transferred from a server in its main storage device. Then, the computer uses a processor to read a program stored in the main memory, and causes the processor to execute processing according to the read program.
  • a computer may read a program directly from a portable medium and execute processing according to the program. The computer may sequentially execute processing according to the received program each time the program is transferred to the computer from the server. Processing may be performed using a so-called ASP type service that implements functions only by issuing execution instructions and obtaining results without transferring programs from the server to the computer. “ASP” is an abbreviation for application service provider.
  • the program includes information that is used for processing by an electronic computer and is equivalent to a program. For example, data that is not a direct command to a computer but has the property of regulating computer processing falls under "something similar to a program.”
  • a part or all of the functions of the image processing device 11 may be realized by a programmable circuit or a dedicated circuit as the control unit 41. That is, some or all of the functions of the image processing device 11 may be realized by hardware.
  • the operation of the image processing system 10 according to this embodiment will be described with reference to FIGS. 17 and 18.
  • the operation of the image processing system 10 corresponds to the image display method according to this embodiment.
  • the probe 20 is primed by the user. Thereafter, the probe 20 is fitted into the probe connection part 34 and probe clamp part 37 of the drive unit 13, and is connected and fixed to the drive unit 13. The probe 20 is then inserted to a target site within the living tissue 60 such as a blood vessel or heart.
  • In step S101, the scan switch included in the switch group 39 is pressed, and the pullback switch included in the switch group 39 is further pressed, thereby performing a so-called pullback operation.
  • the probe 20 transmits ultrasonic waves inside the living tissue 60 by using the ultrasonic transducer 25 that retreats in the axial direction by a pullback operation.
  • the ultrasonic transducer 25 transmits ultrasonic waves in a radial manner while moving inside the living tissue 60 .
  • the ultrasonic transducer 25 receives reflected waves of the transmitted ultrasonic waves.
  • the probe 20 inputs the signal of the reflected wave received by the ultrasound transducer 25 to the image processing device 11 .
  • the control unit 41 of the image processing device 11 acquires tomographic data 51 including a plurality of cross-sectional images by processing the input signals and sequentially generating cross-sectional images of the biological tissue 60.
  • Specifically, the probe 20 rotates the ultrasonic transducer 25 in the circumferential direction while moving it in the axial direction inside the living tissue 60, and the ultrasonic transducer 25 transmits ultrasonic waves in a plurality of directions outward from the center of rotation.
  • the probe 20 uses the ultrasonic transducer 25 to receive reflected waves from reflective objects existing in multiple directions inside the living tissue 60 .
  • the probe 20 transmits the received reflected wave signal to the image processing device 11 via the drive unit 13 and the cable 12.
  • the communication unit 43 of the image processing device 11 receives the signal transmitted from the probe 20.
  • the communication unit 43 performs A/D conversion on the received signal.
  • the communication section 43 inputs the A/D converted signal to the control section 41 .
  • the control unit 41 processes the input signal and calculates the intensity value distribution of reflected waves from a reflecting object existing in the ultrasonic wave transmission direction of the ultrasonic transducer 25 .
  • the control unit 41 acquires tomographic data 51, which is a data set of cross-sectional images, by sequentially generating two-dimensional images having a luminance value distribution corresponding to the calculated intensity value distribution as cross-sectional images of the biological tissue 60.
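  • The generation of a cross-sectional image from the calculated intensity value distribution amounts to a scan conversion from polar to Cartesian coordinates. The snippet below is a rough nearest-neighbour illustration with random stand-in data, not the device's actual signal processing; all names and sizes are illustrative.

```python
import numpy as np

def scan_convert(radial_lines, image_size):
    """Map A-lines (n_angles x n_depths intensity samples) onto a Cartesian cross-sectional image."""
    n_angles, n_depths = radial_lines.shape
    c = (image_size - 1) / 2.0                     # image centre = position of the transducer
    ys, xs = np.mgrid[0:image_size, 0:image_size]
    dx, dy = xs - c, ys - c
    radius = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx) % (2 * np.pi)
    angle_idx = np.minimum((theta / (2 * np.pi) * n_angles).astype(int), n_angles - 1)
    depth_idx = np.minimum((radius / c * n_depths).astype(int), n_depths - 1)
    img = radial_lines[angle_idx, depth_idx]       # nearest-neighbour lookup in polar coordinates
    img[radius > c] = 0.0                          # blank everything outside the imaging radius
    return img

# 512 transmit directions x 256 depth samples of reflected-wave intensity (random stand-in data).
rng = np.random.default_rng(0)
intensities = rng.random((512, 256))
cross_section = scan_convert(intensities, image_size=256)
print(cross_section.shape)                         # one cross-sectional image of the biological tissue
```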
  • the control unit 41 causes the storage unit 42 to store the acquired tomographic data 51.
  • The reflected wave signal received by the ultrasonic transducer 25 corresponds to the raw data of the tomographic data 51, and the cross-sectional image that the image processing device 11 generates by processing the reflected wave signal corresponds to the processed data of the tomographic data 51.
  • the control unit 41 of the image processing device 11 may store the signal input from the probe 20 as it is in the storage unit 42 as the tomographic data 51.
  • the control unit 41 may cause the storage unit 42 to store data indicating the intensity value distribution of reflected waves calculated by processing the signal input from the probe 20 as the tomographic data 51.
  • the tomographic data 51 is not limited to a data set of cross-sectional images of the living tissue 60, but may be any data that represents the cross-section of the living tissue 60 at each movement position of the ultrasound transducer 25 in some format.
  • Instead of an ultrasonic transducer that transmits ultrasonic waves in a plurality of directions while rotating, an ultrasonic transducer that transmits ultrasonic waves in multiple directions without rotating may be used.
  • the tomographic data 51 may be acquired using OFDI or OCT instead of being acquired using IVUS.
  • OFDI is an abbreviation for optical frequency domain imaging.
  • OCT is an abbreviation for optical coherence tomography.
  • an ultrasound sensor that acquires tomographic data 51 by transmitting ultrasound in the lumen 63 of the living tissue 60 is used as a sensor that acquires the tomographic data 51 while moving through the lumen 63 of the living tissue 60.
  • a sensor is used that emits light in the lumen 63 of the living tissue 60 to acquire the tomographic data 51.
  • Instead of the image processing device 11 generating the data set of cross-sectional images of the living tissue 60, another device may generate a similar data set, and the image processing device 11 may acquire the data set from that other device. That is, instead of the control unit 41 of the image processing device 11 processing the IVUS signal to generate a cross-sectional image of the biological tissue 60, another device may process the IVUS signal to generate the cross-sectional image and input the generated cross-sectional image to the image processing device 11.
  • In step S102, the control unit 41 of the image processing device 11 generates three-dimensional data 52 of the biological tissue 60 based on the tomographic data 51 acquired in step S101. That is, the control unit 41 generates the three-dimensional data 52 based on the tomographic data 51 acquired by the sensor.
  • At this time, if already-generated three-dimensional data 52 exists, it is preferable that the control unit 41 updates only the data at the location to which the updated tomographic data 51 corresponds, instead of regenerating all of the three-dimensional data 52 from scratch. In that case, the amount of data processing when generating the three-dimensional data 52 can be reduced, and the real-time performance of the three-dimensional image 53 in the subsequent step S103 can be improved.
  • Specifically, the control unit 41 of the image processing device 11 generates the three-dimensional data 52 of the biological tissue 60 by stacking the cross-sectional images of the biological tissue 60 included in the tomographic data 51 stored in the storage unit 42 and converting them into three dimensions, as sketched below. As the method of three-dimensionalization and rendering, any one of various processing methods may be used, such as surface rendering or volume rendering, together with associated texture mapping (including environment mapping) and bump mapping.
  • the control unit 41 causes the storage unit 42 to store the generated three-dimensional data 52.
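  • The stacking step can be pictured as follows; the snippet simply stacks the per-position cross-sectional images into a volume array and attaches an illustrative voxel spacing, leaving rendering to a separate step. All names and values are assumptions for the example.

```python
import numpy as np

def build_volume(cross_sections, z_spacing_mm=0.5):
    """Stack per-position cross-sectional images into a 3D volume (z, y, x) with spacing metadata."""
    volume = np.stack(cross_sections, axis=0)
    spacing = (z_spacing_mm, 1.0, 1.0)             # illustrative voxel spacing in millimetres
    return volume, spacing

# Tomographic data 51 as a list of 2D cross-sectional images acquired along the pullback.
rng = np.random.default_rng(1)
slices = [rng.random((256, 256)) for _ in range(100)]
volume, spacing = build_volume(slices)
print(volume.shape, spacing)                       # (100, 256, 256): the 3D data 52 before rendering
```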
  • The tomographic data 51 includes data on the long medical instrument in the same way as it includes the data on the living tissue 60. Therefore, in step S102, the three-dimensional data 52 generated by the control unit 41 also includes data on the long medical instrument, similar to the data on the living tissue 60.
  • The control unit 41 of the image processing device 11 classifies the pixel group of each cross-sectional image included in the tomographic data 51 acquired in step S101 into two or more classes.
  • These two or more classes include at least a "tissue" class to which the biological tissue 60 belongs, and may further include a "medical instrument" class to which long medical instruments belong, a "blood cell" class, an "indwelling object" class for objects such as indwelling stents, and a "lesion" class for lesions such as calcification or plaque.
  • a method of classifying pixel groups of a cross-sectional image using a trained model is used.
  • the trained model is trained to detect regions corresponding to each class from sample IVUS cross-sectional images by performing machine learning in advance.
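  • As an illustration of the classification step only, the snippet below replaces the trained model with a crude intensity-threshold stand-in so that it runs without any training data; the class labels follow the classes listed above, and the thresholds are arbitrary assumptions, not the patent's model.

```python
import numpy as np

CLASSES = {0: "blood cell", 1: "tissue", 2: "medical instrument"}

def classify_pixels(cross_section):
    """Stand-in for the trained model: assign a class label to every pixel by intensity thresholds."""
    labels = np.zeros(cross_section.shape, dtype=np.uint8)   # default label: "blood cell"
    labels[cross_section >= 0.4] = 1                          # "tissue"
    labels[cross_section >= 0.9] = 2                          # "medical instrument" (strong echo)
    return labels

rng = np.random.default_rng(2)
image = rng.random((256, 256))      # one cross-sectional image from the tomographic data 51
labels = classify_pixels(image)
for value, name in CLASSES.items():
    print(name, int((labels == value).sum()), "pixels")
```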
  • In step S103, the control unit 41 of the image processing device 11 displays the three-dimensional data 52 generated in step S102 on the display 16 as the three-dimensional image 53.
  • the control unit 41 may set the angle at which the three-dimensional image 53 is displayed to an arbitrary angle.
  • the control unit 41 causes the display 16 to display the latest cross-sectional image included in the tomographic data 51 acquired in step S101 together with the three-dimensional image 53.
  • Specifically, the control unit 41 of the image processing device 11 generates the three-dimensional image 53 from the three-dimensional data 52 stored in the storage unit 42.
  • The three-dimensional image 53 includes a group of three-dimensional objects, such as a three-dimensional object representing the biological tissue 60 and a three-dimensional object representing the long medical instrument. That is, the control unit 41 generates a three-dimensional object of the biological tissue 60 from the data on the biological tissue 60 stored in the storage unit 42, and generates a three-dimensional object of the long medical instrument from the data on the long medical instrument stored in the storage unit 42.
  • The control unit 41 causes the display 16 to display, via the output unit 45, the latest cross-sectional image of the biological tissue 60 included in the tomographic data 51 stored in the storage unit 42 and the generated three-dimensional image 53.
  • In step S104, if the user performs a change operation to set the angle at which the three-dimensional image 53 is displayed, the process of step S105 is executed. If there is no change operation by the user, the process of step S106 is executed.
  • In step S105, the control unit 41 of the image processing device 11 receives, via the input unit 44, an operation to set the angle at which the three-dimensional image 53 is displayed.
  • The control unit 41 adjusts the angle at which the three-dimensional image 53 is displayed to the set angle.
  • Then, in step S103, the control unit 41 causes the display 16 to display the three-dimensional image 53 at the angle set in step S105.
  • The control unit 41 of the image processing device 11 receives, via the input unit 44, an operation in which the user rotates the three-dimensional image 53 displayed on the display 16 using the keyboard 14, the mouse 15, or the touch screen provided integrally with the display 16.
  • The control unit 41 interactively adjusts the angle at which the three-dimensional image 53 is displayed on the display 16 in accordance with the user's operation.
  • Alternatively, the control unit 41 may receive, via the input unit 44, an operation in which the user inputs a numerical value for the angle at which the three-dimensional image 53 is to be displayed, using the keyboard 14, the mouse 15, or the touch screen provided integrally with the display 16.
  • The control unit 41 adjusts the angle at which the three-dimensional image 53 is displayed on the display 16 in accordance with the input numerical value.
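As one possible reading of the angle-setting operations above, the sketch below models them as a rotation of the virtual camera (viewpoint V0) about a vertical axis through the object; the choice of axis and the function signature are assumptions for illustration only.

```python
# Rotate the camera position around the z-axis passing through `center`.
import numpy as np

def rotate_viewpoint(viewpoint, center, angle_deg):
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    viewpoint = np.asarray(viewpoint, dtype=float)
    center = np.asarray(center, dtype=float)
    return center + rot @ (viewpoint - center)
```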
  • In step S106, if the tomographic data 51 has been updated, the processes of step S107 and step S108 are executed. If the tomographic data 51 has not been updated, the presence or absence of a change operation by the user is checked again in step S104.
  • In step S107, similarly to the process of step S101, the control unit 41 of the image processing device 11 processes the signal input from the probe 20 to newly generate cross-sectional images of the biological tissue 60, thereby acquiring tomographic data 51 that includes at least one new cross-sectional image.
  • In step S108, the control unit 41 of the image processing device 11 updates the three-dimensional data 52 of the biological tissue 60 based on the tomographic data 51 acquired in step S107. That is, the control unit 41 updates the three-dimensional data 52 based on the tomographic data 51 acquired by the sensor. Then, in step S103, the control unit 41 causes the display 16 to display the three-dimensional data 52 updated in step S108 as the three-dimensional image 53. The control unit 41 causes the display 16 to display the latest cross-sectional image included in the tomographic data 51 acquired in step S107 together with the three-dimensional image 53. In step S108, it is preferable to update only the data corresponding to the updated tomographic data 51. In that case, the amount of data processing when generating the three-dimensional data 52 can be reduced, and the real-time performance of the three-dimensional image 53 in the subsequent step S103 can be improved.
  • In step S111, if the user performs a setting operation to set the cutting area 62, the process of step S112 is executed.
  • In step S112, the control unit 41 of the image processing device 11 receives an operation to set the cutting area 62 via the input unit 44.
  • Specifically, the control unit 41 of the image processing device 11 receives, via the input unit 44, an operation to set an area 65 corresponding to the cutting area 62 on the cross-sectional image displayed on the display 16 in step S103.
  • The control unit 41 receives, as the operation to set the area 65 corresponding to the cutting area 62, an operation to set two straight lines L1 and L2 extending from one point M in the cross-sectional image.
  • Specifically, the control unit 41 of the image processing device 11 receives, via the input unit 44, an operation in which the user specifies the base angle and the opening angle on the operation panel 81, as shown in FIGS. 2 to 4, 6 to 8, and 10 to 12, using the keyboard 14, the mouse 15, or the touch screen provided integrally with the display 16. That is, as the operation to set the two straight lines L1 and L2, the control unit 41 receives an operation to specify the direction of one of the two straight lines L1 and L2 and the angle formed by the two straight lines L1 and L2.
  • Here, it is assumed that the check box 85 on the operation panel 81 is checked, that is, that the use of the center of gravity is selected.
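The sketch below shows one way the two straight lines L1 and L2 could be derived from the operation-panel inputs, assuming the base angle gives the direction of L1 measured from the image x-axis and the opening angle is the angle between L1 and L2, both in degrees; these conventions are assumptions for illustration.

```python
# Derive unit direction vectors of L1 and L2 starting at point M.
import numpy as np

def lines_from_angles(point_m, base_angle_deg, opening_angle_deg):
    a1 = np.radians(base_angle_deg)
    a2 = np.radians(base_angle_deg + opening_angle_deg)
    d1 = np.array([np.cos(a1), np.sin(a1)])   # direction of L1
    d2 = np.array([np.cos(a2), np.sin(a2)])   # direction of L2
    return np.asarray(point_m, dtype=float), d1, d2
```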
  • Alternatively, the control unit 41 of the image processing device 11 may receive, via the input unit 44, an operation in which the user draws the two straight lines L1 and L2 on the cross-sectional image displayed on the display 16 using the keyboard 14, the mouse 15, or the touch screen provided integrally with the display 16. That is, the control unit 41 may receive, as the operation to set the two straight lines L1 and L2, an operation to draw the two straight lines L1 and L2 on the cross-sectional image.
  • In step S113, the control unit 41 of the image processing device 11 uses the latest three-dimensional data 52 stored in the storage unit 42 to calculate the positions of the centers of gravity of a plurality of transverse cross sections of the lumen 63 of the living tissue 60.
  • Here, the latest three-dimensional data 52 refers to the three-dimensional data 52 generated in step S102 if the process of step S108 has not been executed, and to the three-dimensional data 52 updated in step S108 if the process of step S108 has been executed.
  • The process of step S113 can be executed using a procedure similar to that disclosed in International Publication No. 2021/200294.
  • In step S114, the control unit 41 of the image processing device 11 performs smoothing on the calculation result of the center-of-gravity positions obtained in step S113. Specifically, the process of step S114 can be executed using a procedure similar to that disclosed in International Publication No. 2021/200294.
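The cited publication's procedure is not reproduced here; as a generic illustration only, the sketch below computes a per-slice centroid of the lumen from boolean lumen masks and smooths the resulting centroid curve along the catheter axis with a simple moving average.

```python
# Per-slice lumen centroid followed by moving-average smoothing.
import numpy as np

def lumen_centroids(lumen_masks):
    """lumen_masks: (n_slices, H, W) boolean array, True inside the lumen."""
    centroids = []
    for mask in lumen_masks:
        ys, xs = np.nonzero(mask)
        centroids.append((xs.mean(), ys.mean()))
    return np.array(centroids)                  # shape: (n_slices, 2)

def smooth_centroids(centroids, window=5):
    kernel = np.ones(window) / window
    return np.column_stack([
        np.convolve(centroids[:, i], kernel, mode="same") for i in range(2)
    ])
```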
  • In step S115, the control unit 41 of the image processing device 11 sets, as cutting planes P1 and P2, two planes that intersect at a line Lb passing through the center-of-gravity positions calculated in step S113, as shown in the drawings.
  • In the present embodiment, the control unit 41 sets the cutting planes P1 and P2 after performing smoothing on the calculation result of the center-of-gravity positions in step S114, but the process of step S114 may be omitted.
  • Specifically, the control unit 41 of the image processing device 11 sets the curve of the center-of-gravity positions obtained as a result of the smoothing in step S114 as the line Lb.
  • The control unit 41 then sets, as the cutting planes P1 and P2, two planes that intersect at the set line Lb and that respectively include the two straight lines L1 and L2 set in step S112.
  • The control unit 41 specifies the three-dimensional coordinates of the biological tissue 60 in the latest three-dimensional data 52 stored in the storage unit 42 that intersect the cutting planes P1 and P2 as the three-dimensional coordinates of the edges of the opening that exposes the lumen 63 of the biological tissue 60 in the three-dimensional image 53.
  • The control unit 41 causes the storage unit 42 to store the specified three-dimensional coordinates.
  • In step S116, the control unit 41 of the image processing device 11 forms, in the three-dimensional data 52, the region that is sandwiched between the cutting planes P1 and P2 in the three-dimensional image 53 and that exposes the lumen 63 of the biological tissue 60 as the cutting area 62.
  • Specifically, the control unit 41 of the image processing device 11 sets the portion of the latest three-dimensional data 52 stored in the storage unit 42 that is specified by the three-dimensional coordinates stored in the storage unit 42 to be hidden or transparent when the three-dimensional image 53 is displayed on the display 16. That is, the control unit 41 forms the cutting area 62 in accordance with the area 65 set in step S112.
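A hedged sketch of the voxel-hiding idea above: within each cross-section, points lying in the wedge between the two half-planes (whose in-plane directions are those of L1 and L2, both anchored on the centroid line Lb) are treated as part of the cutting area 62. The wedge test below assumes an opening angle of less than 180 degrees.

```python
# Mark 2D points that lie inside the wedge spanned from direction d1 to d2.
import numpy as np

def in_cut_wedge(points_xy, center_xy, d1, d2):
    v = np.asarray(points_xy, dtype=float) - np.asarray(center_xy, dtype=float)
    cross_d1_v = d1[0] * v[:, 1] - d1[1] * v[:, 0]
    cross_v_d2 = v[:, 0] * d2[1] - v[:, 1] * d2[0]
    total = d1[0] * d2[1] - d1[1] * d2[0]
    return (np.sign(cross_d1_v) == np.sign(total)) & \
           (np.sign(cross_v_d2) == np.sign(total))
```

Applying this test slice by slice, with the smoothed centroid of each slice as `center_xy`, yields the voxels to be hidden or made transparent.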
  • In step S117, the control unit 41 of the image processing device 11 causes the display 16 to display, as the three-dimensional image 53, the three-dimensional data 52 in which the cutting area 62 was formed in step S116.
  • The control unit 41 also causes the display 16 to display, together with the three-dimensional image 53, a two-dimensional image 58 representing the cross section 64 that is indicated by the tomographic data 51 newly acquired by the sensor and represented by the cross-sectional image displayed on the display 16 in step S103, and the area 65 corresponding to the cutting area 62 in the cross section 64.
  • Specifically, the control unit 41 of the image processing device 11 processes the latest cross-sectional image of the biological tissue 60 included in the tomographic data 51 stored in the storage unit 42 to generate a two-dimensional image 58 as shown in FIGS. 2 to 4, 6 to 8, and 10 to 12.
  • The control unit 41 also generates a three-dimensional image 53, as shown in FIGS. 2 to 4, 6 to 8, and 10 to 12, in which the portion specified by the three-dimensional coordinates stored in the storage unit 42 is hidden or transparent.
  • The control unit 41 displays the generated two-dimensional image 58 and three-dimensional image 53 on the display 16 via the output unit 45.
  • As the two-dimensional image 58 corresponding to the cutting area 62, the control unit 41 of the image processing device 11 generates an image in which the color of the area 65 differs from that of the remaining regions, as shown in FIGS. 2 to 4, 6 to 8, and 10 to 12. For example, it is conceivable to change, within the area 65, the parts that would be white in a general IVUS image to red.
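As a small illustration of the recoloring idea above, the sketch below tints the bright (normally white) pixels inside the area 65 red; the brightness threshold and the color are illustrative assumptions.

```python
# Recolor the bright pixels of area 65 in a grayscale IVUS frame.
import numpy as np

def highlight_cut_area(gray_frame, area_mask, threshold=200):
    """gray_frame: (H, W) uint8 image; area_mask: (H, W) bool for area 65."""
    rgb = np.stack([gray_frame] * 3, axis=-1).astype(np.uint8)
    rgb[area_mask & (gray_frame >= threshold)] = [255, 0, 0]
    return rgb
```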
  • In step S118, if the user performs a change operation, that is, an operation to set the cutting area 62, the process of step S119 is executed. If there is no change operation by the user, the process of step S120 is executed.
  • In step S119, the control unit 41 of the image processing device 11 receives an operation to set the cutting area 62 via the input unit 44, similarly to the process of step S112. Then, the processing from step S115 onwards is executed.
  • In step S120, if the tomographic data 51 has been updated, the processes of step S121 and step S122 are executed. If the tomographic data 51 has not been updated, the presence or absence of a change operation by the user is checked again in step S118.
  • In step S121, similarly to the process of step S101 or step S107, the control unit 41 of the image processing device 11 processes the signal input from the probe 20 to newly generate cross-sectional images of the living tissue 60, thereby acquiring tomographic data 51 that includes at least one new cross-sectional image.
  • In step S122, the control unit 41 of the image processing device 11 updates the three-dimensional data 52 of the biological tissue 60 based on the tomographic data 51 acquired in step S121. After that, the processing from step S113 onwards is executed. In step S122, it is preferable to update only the data corresponding to the updated tomographic data 51. In that case, the amount of data processing when generating the three-dimensional data 52 can be reduced, and the real-time performance of the data processing from step S113 onwards can be improved.
  • The following flow is executed when, in step S103 or step S117, the control unit 41 of the image processing device 11 has rendered the object 54 based on the positional relationship between the viewpoint V0 set in the virtual three-dimensional space and the object 54 of the biological tissue 60 arranged in the three-dimensional space, and the object 54 is already displayed on the screen 80 as the three-dimensional image 53 of the biological tissue 60.
  • In step S201, when a position specifying operation is performed, the control unit 41 of the image processing device 11 receives the position specifying operation via the input unit 44.
  • The position specifying operation is an operation to specify two positions on the screen 80.
  • The position specifying operation includes, as a first operation, an operation of pressing a button on the mouse 15, and includes, as a second operation, an operation of releasing the button on the mouse 15 that is performed following the first operation and a drag operation of moving the pointer 86 while holding down the button on the mouse 15.
  • The first operation may be an operation of pressing the button on the mouse 15 while holding down a first key such as the Ctrl key or the Shift key on the keyboard 14.
  • The second operation may be an operation of releasing the button on the mouse 15 while pressing a second key such as the Ctrl key or the Shift key on the keyboard 14.
  • In step S202, in response to the position specifying operation performed in step S201, the control unit 41 of the image processing device 11 identifies a first corresponding point Q1 and a second corresponding point Q2 on the plane 55 corresponding to the screen 80 in the three-dimensional space.
  • The first corresponding point Q1 is the point on the plane 55 that corresponds to one of the two positions specified by the position specifying operation.
  • The second corresponding point Q2 is the point on the plane 55 that corresponds to the other of the two positions specified by the position specifying operation.
  • The control unit 41 calculates the distance between a first intersection R1 and a second intersection R2 in the three-dimensional space.
  • The first intersection R1 is the intersection of the object 54 with an extension of the straight line connecting the viewpoint V0 and the first corresponding point Q1 in the three-dimensional space.
  • The second intersection R2 is the intersection of the object 54 with an extension of the straight line connecting the viewpoint V0 and the second corresponding point Q2 in the three-dimensional space.
  • Specifically, the control unit 41 specifies the three-dimensional coordinates (xq1, yq1, dq) corresponding to the position specified by the first operation as the coordinates of the first corresponding point Q1.
  • The control unit 41 specifies the three-dimensional coordinates (xq2, yq2, dq) corresponding to the position specified by the second operation as the coordinates of the second corresponding point Q2.
  • The control unit 41 specifies, as the coordinates of the first intersection R1, the three-dimensional coordinates (xr1, yr1, dr1) at which the straight line passing through the coordinates (xv, yv, dv) of the viewpoint V0 and the coordinates (xq1, yq1, dq) of the first corresponding point Q1 reaches the object 54.
  • The control unit 41 specifies, as the coordinates of the second intersection R2, the three-dimensional coordinates (xr2, yr2, dr2) at which the straight line passing through the coordinates (xv, yv, dv) of the viewpoint V0 and the coordinates (xq2, yq2, dq) of the second corresponding point Q2 reaches the object 54.
  • The control unit 41 calculates the Euclidean distance √((xr2-xr1)² + (yr2-yr1)² + (dr2-dr1)²) between the coordinates (xr1, yr1, dr1) of the first intersection R1 and the coordinates (xr2, yr2, dr2) of the second intersection R2.
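A hedged sketch of the calculation above, assuming the object 54 is available as a boolean voxel volume and that the screen positions have already been mapped to the points Q1 and Q2 on the plane 55; the ray is marched in small steps until it first hits an occupied voxel, whereas an actual renderer would more likely reuse its own picking or ray-casting facilities.

```python
# Distance between the first hits R1 and R2 of two viewing rays (cf. S202).
import numpy as np

def first_hit(volume, origin, target, step=0.5, max_dist=2000.0):
    """March a ray from `origin` through `target`; return the first point
    inside an occupied voxel of `volume`, or None if nothing is hit."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(target, dtype=float) - origin
    direction /= np.linalg.norm(direction)
    t = 0.0
    while t < max_dist:
        p = origin + t * direction
        idx = tuple(np.round(p).astype(int))
        if all(0 <= i < s for i, s in zip(idx, volume.shape)) and volume[idx]:
            return p                      # coordinates of R1 or R2
        t += step
    return None

def measure(volume, viewpoint, q1, q2):
    r1 = first_hit(volume, viewpoint, q1)
    r2 = first_hit(volume, viewpoint, q2)
    if r1 is None or r2 is None:
        return None
    return float(np.linalg.norm(r2 - r1))     # Euclidean distance
```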
  • In step S203, the control unit 41 of the image processing device 11 outputs the calculation result obtained in step S202. Specifically, the control unit 41 outputs a numerical value representing the Euclidean distance calculated in step S202 on the screen 80, as shown in FIG. 4.
  • In step S204, the control unit 41 of the image processing device 11 displays marks 87 and 88 at a first corresponding position and a second corresponding position on the screen 80, respectively.
  • The first corresponding position is the position on the screen 80 that corresponds to the intersection of the plane 55 with the straight line connecting the viewpoint V0 and the first intersection R1 in the three-dimensional space.
  • The second corresponding position is the position on the screen 80 that corresponds to the intersection of the plane 55 with the straight line connecting the viewpoint V0 and the second intersection R2 in the three-dimensional space.
  • The control unit 41 displays the marks 87 and 88 at the positions specified by the first operation and the second operation, respectively. If the position of the viewpoint V0 is subsequently changed as a result of an operation received in step S105, step S112, or step S119, the control unit 41 changes the positions of the marks 87 and 88 accordingly.
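The sketch below illustrates one way to keep the marks anchored to the tissue when the viewpoint moves: the stored intersections R1 and R2 are re-projected onto the plane 55 for the new camera. A simple pinhole projection in camera space is assumed here; an actual renderer would use its own camera and projection matrices.

```python
# Re-project a stored 3D intersection onto the view plane 55.
import numpy as np

def project_to_plane(viewpoint, plane_depth, point_3d):
    """Assumes camera space with the view direction along +z and the plane 55
    located `plane_depth` in front of the viewpoint."""
    vp = np.asarray(viewpoint, dtype=float)
    v = np.asarray(point_3d, dtype=float) - vp
    scale = plane_depth / v[2]                 # requires v[2] > 0 (in front)
    return vp[:2] + scale * v[:2]              # 2D position on the plane 55
```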
  • The flow of FIG. 19 may be repeated any number of times. For example, when N is an integer greater than or equal to 2, 2N marks may be displayed on the screen 80 as a result of the position specifying operation being performed N or more times.
  • The flow of FIG. 20 is executed after the flow of FIG. 19 has been executed at least once.
  • In step S211, when a range specifying operation is performed, the control unit 41 of the image processing device 11 receives the range specifying operation via the input unit 44.
  • The range specifying operation is an operation to specify a range 89 on the screen 80.
  • The range specifying operation includes, as a third operation, an operation of pressing a button on the mouse 15, and includes, as a fourth operation, an operation of releasing the button on the mouse 15 that is performed following the third operation and a drag operation of moving the pointer 86 while holding down the button on the mouse 15.
  • The third operation may be an operation of pressing the button on the mouse 15 while holding down a third key such as the Ctrl key or the Shift key on the keyboard 14.
  • The fourth operation may be an operation of releasing the button on the mouse 15 while pressing a fourth key such as the Ctrl key or the Shift key on the keyboard 14.
  • In step S212, in response to the range specifying operation performed in step S211, the control unit 41 of the image processing device 11 identifies a corresponding range 56 on the plane 55 corresponding to the screen 80 in the three-dimensional space.
  • The corresponding range 56 is the range on the plane 55 that corresponds to the range 89 specified by the range specifying operation.
  • Among the marks displayed in step S204, the control unit 41 changes the appearance of any mark displayed on the screen 80 at a position corresponding to an intersection existing within a three-dimensional region 57 as shown in FIG. 9.
  • The three-dimensional region 57 is a region that extends in a conical shape from the viewpoint V0 through the outer edge of the corresponding range 56 in the three-dimensional space.
  • Specifically, the control unit 41 identifies, as the corresponding range 56, a two-dimensional range on the plane 55 that corresponds to a regular range, such as a rectangular range or a circular range, extending from the position specified by the third operation to the position specified by the fourth operation.
  • The control unit 41 then identifies, as the three-dimensional region 57, a region that extends in a conical shape from the coordinates (xv, yv, dv) of the viewpoint V0 through the outer edge of the corresponding range 56.
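A hedged sketch of testing whether a stored intersection lies inside the three-dimensional region 57 for a rectangular corresponding range 56: the point is inside exactly when the ray from the viewpoint through the point crosses the plane 55 within the corresponding range. Coordinates are assumed to be in camera space with the plane 55 at depth dq; the circular-range case would compare the hit point against a center and radius instead.

```python
# Test whether `point_3d` lies in the conical region spread from the viewpoint
# through the rectangular corresponding range 56 on the plane 55.
import numpy as np

def inside_region_57(viewpoint, point_3d, dq, x_range, y_range):
    """x_range, y_range: (min, max) of the corresponding range 56 on plane 55."""
    vp = np.asarray(viewpoint, dtype=float)
    v = np.asarray(point_3d, dtype=float) - vp
    if v[2] <= 0:                              # behind the viewpoint
        return False
    t = (dq - vp[2]) / v[2]                    # ray parameter at the plane 55
    hit = vp + t * v
    return (x_range[0] <= hit[0] <= x_range[1]) and (y_range[0] <= hit[1] <= y_range[1])
```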
  • Among the intersections identified in step S202, the control unit 41 changes the color of any mark displayed on the screen 80 at a position corresponding to an intersection located within the three-dimensional region 57.
  • In the examples of FIG. 8 and FIG. 12, both the first intersection R1 and the second intersection R2 are located within the three-dimensional region 57, so the control unit 41 changes the colors of the marks 87 and 88 linked to them.
  • When one or more of the identified intersections are located within the three-dimensional region 57, the control unit 41 may change the colors of the one or more marks associated with those one or more intersections.
  • Image processing system 11 Image processing device 12 Cable 13 Drive unit 14 Keyboard 15 Mouse 16 Display 17 Connection terminal 18 Cart unit 20 Probe 21 Drive shaft 22 Hub 23 Sheath 24 Outer tube 25 Ultrasonic transducer 26 Relay connector 31 Scanner unit 32 Slide unit 33 Bottom cover 34 Probe connection section 35 Scanner motor 36 Inlet 37 Probe clamp section 38 Slide motor 39 Switch group 41 Control section 42 Storage section 43 Communication section 44 Input section 45 Output section 51 Tomographic data 52 3D data 53 3D image 54 Object 55 Plane 56 Corresponding range 57 Three-dimensional area 58, 58a, 58b Two-dimensional image 60 Biological tissue 61 Inner surface 62 Cutting area 63 Lumen 64 Cross section 65, 65a, 65b Area 66 Oval fossa 71 Camera 80 Screen 81 Operation panel 82 Checkbox 83 Slider 84 Slider 85 Checkbox 86 Pointer 87, 88 Mark 89 Range

Abstract

This image processing device, which, on the basis of the positional relationship between a viewing point set in a virtual three-dimensional space and a biological tissue object arranged in the three-dimensional space, carries out rendering of the object and displays the object on a screen as a three-dimensional image of the biological tissue, is provided with a control unit which: in response to a position specification operation for specifying two positions on the screen, specifies a first corresponding point which corresponds to one of the two specified positions on a plane corresponding to the screen in the three-dimensional space and a second corresponding point which corresponds to the other; calculates the distance between a first intersection, that is, an intersection where an extension of a straight line connecting the viewing point and the first corresponding point in the three-dimensional space intersects with the object, and a second intersection, that is, an intersection where an extension of a straight line connecting the viewing point and the second corresponding point in the three-dimensional space intersects with the object; and outputs the obtained calculation result.

Description

Image processing device, image processing system, image display method, and image processing program
The present disclosure relates to an image processing device, an image processing system, an image display method, and an image processing program.
Patent Documents 1 to 3 describe techniques for generating three-dimensional images of heart chambers or blood vessels using a US imaging system. "US" is an abbreviation for ultrasound.
Patent Document 4 discloses a method in which, when two points are specified on a screen on which a three-dimensional image is displayed, the two-dimensional coordinates and densities of the two points are determined, and the distance between the two points is calculated by adding the density difference to the distance between the two-dimensional coordinates.
US Patent Application Publication No. 2010/0215238; US Patent No. 6,385,332; US Patent No. 6,251,072; Japanese Unexamined Patent Publication No. 63-063433
Treatment using IVUS is widely performed for the intracardiac, cardiovascular, and lower limb arterial regions. "IVUS" is an abbreviation for intravascular ultrasound. IVUS is a device or method that provides two-dimensional images in a plane perpendicular to the longitudinal axis of a catheter.
IVUS is often used in procedures that use a catheter separate from the IVUS catheter, such as ablation. For example, in the transseptal approach used for atrial fibrillation ablation, the so-called Brockenbrough technique is used, in which a septal puncture needle inserted into the right atrium punctures the fossa ovalis, thereby creating a path from the right atrium to the left atrium. During puncture, it is desirable to carefully confirm the puncture position because there is a risk of complications such as perforation or cardiac tamponade. In this respect, an IVUS catheter, which can obtain 360-degree information, is excellent for confirming the puncture position within the same plane. However, when IVUS is used, images are acquired intermittently along the IVUS catheter axis, which makes it difficult to picture the three-dimensional structure. As a result, confirmation of the puncture position in the axial direction may become insufficient. It is therefore conceivable to automatically generate a three-dimensional image representing the structure of the living tissue from the two-dimensional IVUS images and to display the generated three-dimensional image to the surgeon. If the generated three-dimensional image is simply displayed as it is, the surgeon can see only the outer wall of the tissue, so it is conceivable to cut away part of the structure of the living tissue in the three-dimensional image so that the lumen can be viewed. It is also conceivable that, when two points on the screen are specified, the distance between the two points is automatically calculated and the calculated distance is displayed to the surgeon so that an arbitrary distance in the three-dimensional image, such as the length of the fossa ovalis, can be measured.
It is desirable to be able to freely change how the three-dimensional image is displayed by changing the viewpoint or the position of the light source with respect to the three-dimensional object. However, depending on how the image is displayed, the display density does not necessarily correspond to the depth distance, so the method disclosed in Patent Document 4, in which the distance between two specified points is calculated by taking the density difference into account, cannot be applied. If such a method were used, the accuracy of the distance calculation would be insufficient.
An object of the present disclosure is to improve the accuracy of distance calculation between two points specified on a screen on which a three-dimensional image is displayed.
An image processing device according to one aspect of the present disclosure is an image processing device that renders an object of biological tissue based on the positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space, and displays the object on a screen as a three-dimensional image of the biological tissue. The image processing device includes a control unit that, in response to a position specifying operation that specifies two positions on the screen, identifies a first corresponding point corresponding to one of the two specified positions and a second corresponding point corresponding to the other on a plane corresponding to the screen in the three-dimensional space, calculates the distance between a first intersection, which is the intersection of the object with an extension of a straight line connecting the viewpoint and the first corresponding point in the three-dimensional space, and a second intersection, which is the intersection of the object with an extension of a straight line connecting the viewpoint and the second corresponding point in the three-dimensional space, and outputs the obtained calculation result.
In one embodiment, the control unit outputs a numerical value representing the distance on the screen as the calculation result.
In one embodiment, the position specifying operation includes, as a first operation, an operation of pressing a push button of an input device, and the control unit identifies, as the first corresponding point, a point on the plane that corresponds to the position of a pointer on the screen when the first operation is performed.
In one embodiment, the first operation is an operation of pressing the push button while pressing a predetermined first key.
In one embodiment, the position specifying operation includes, as a second operation, an operation of releasing the push button that is performed following the first operation and a drag operation of moving the pointer while holding down the push button, and the control unit identifies, as the second corresponding point, a point on the plane that corresponds to the position of the pointer when the second operation is performed.
In one embodiment, the second operation is an operation of releasing the push button while pressing a predetermined second key.
In one embodiment, the control unit displays marks at a first corresponding position on the screen, which corresponds to the intersection of the plane with a straight line connecting the viewpoint and the first intersection in the three-dimensional space, and at a second corresponding position on the screen, which corresponds to the intersection of the plane with a straight line connecting the viewpoint and the second intersection in the three-dimensional space.
In one embodiment, in response to a range specifying operation that specifies a range on the screen, the control unit identifies a corresponding range on the plane that corresponds to the specified range, and changes the appearance of any mark displayed at a position on the screen corresponding to whichever of the first intersection and the second intersection exists within a three-dimensional region that extends in a conical shape from the viewpoint through the outer edge of the corresponding range in the three-dimensional space.
In one embodiment, the control unit accepts an operation to collectively delete marks whose appearance has been changed.
An image processing system according to one aspect of the present disclosure includes the image processing device and a display that displays the screen.
An image display method according to one aspect of the present disclosure is an image display method for rendering an object of biological tissue based on the positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space, and displaying the object on a screen as a three-dimensional image of the biological tissue. The method includes: in response to a position specifying operation that specifies two positions on the screen, identifying a first corresponding point corresponding to one of the two specified positions and a second corresponding point corresponding to the other on a plane corresponding to the screen in the three-dimensional space; calculating the distance between a first intersection, which is the intersection of the object with an extension of a straight line connecting the viewpoint and the first corresponding point in the three-dimensional space, and a second intersection, which is the intersection of the object with an extension of a straight line connecting the viewpoint and the second corresponding point in the three-dimensional space; and outputting the obtained calculation result.
An image processing program according to one aspect of the present disclosure causes a computer that renders an object of biological tissue based on the positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space, and that displays the object on a screen as a three-dimensional image of the biological tissue, to execute: a process of, in response to a position specifying operation that specifies two positions on the screen, identifying a first corresponding point corresponding to one of the two specified positions and a second corresponding point corresponding to the other on a plane corresponding to the screen in the three-dimensional space; a process of calculating the distance between a first intersection, which is the intersection of the object with an extension of a straight line connecting the viewpoint and the first corresponding point in the three-dimensional space, and a second intersection, which is the intersection of the object with an extension of a straight line connecting the viewpoint and the second corresponding point in the three-dimensional space; and a process of outputting the obtained calculation result.
According to the present disclosure, the accuracy of distance calculation between two points specified on a screen on which a three-dimensional image is displayed is improved.
FIG. 1 is a perspective view of an image processing system according to an embodiment of the present disclosure.
FIGS. 2 to 4 are diagrams illustrating examples of screens displayed on a display by the image processing system according to the embodiment of the present disclosure.
FIG. 5 is a diagram illustrating an example of distance calculation performed by the image processing system according to the embodiment of the present disclosure.
FIGS. 6 to 8 are diagrams illustrating examples of screens displayed on the display by the image processing system according to the embodiment of the present disclosure.
FIG. 9 is a diagram illustrating an example of region calculation performed by the image processing system according to the embodiment of the present disclosure.
FIGS. 10 to 12 are diagrams illustrating examples of screens displayed on the display by the image processing system according to the embodiment of the present disclosure.
FIG. 13 is a diagram illustrating an example of a two-dimensional image displayed on the display by the image processing system according to the embodiment of the present disclosure.
FIG. 14 is a diagram illustrating an example of a cutting area formed by the image processing system according to the embodiment of the present disclosure.
FIG. 15 is a block diagram illustrating the configuration of an image processing device according to the embodiment of the present disclosure.
FIG. 16 is a perspective view of a probe and a drive unit according to the embodiment of the present disclosure.
FIGS. 17 to 20 are flowcharts illustrating the operation of the image processing system according to the embodiment of the present disclosure.
Hereinafter, an embodiment of the present disclosure will be described with reference to the drawings.
In the drawings, the same or corresponding parts are denoted by the same reference numerals. In the description of this embodiment, description of the same or corresponding parts is omitted or simplified as appropriate.
An overview of the present embodiment will be described with reference to FIGS. 1 to 15.
The image processing device 11 according to the present embodiment is a computer that causes the display 16 to display three-dimensional data 52 representing a biological tissue 60 as a three-dimensional image 53. As shown in FIGS. 2 to 5, the image processing device 11 renders an object 54 of the biological tissue 60 based on the positional relationship between a viewpoint V0 set in a virtual three-dimensional space and the object 54 arranged in the three-dimensional space, and displays the object 54 on a screen 80 as the three-dimensional image 53 of the biological tissue 60. The viewpoint V0 corresponds to the position of a virtual camera 71 arranged in the three-dimensional space.
As shown in FIGS. 2 to 5, the image processing device 11 identifies, in response to a position specifying operation, a first corresponding point Q1 and a second corresponding point Q2 on a plane 55 corresponding to the screen 80 in the three-dimensional space. The position specifying operation is an operation to specify two positions on the screen 80. The first corresponding point Q1 is the point on the plane 55 that corresponds to one of the two positions specified by the position specifying operation. The second corresponding point Q2 is the point on the plane 55 that corresponds to the other of the two positions specified by the position specifying operation. The image processing device 11 calculates the distance ||R2-R1|| between a first intersection R1 and a second intersection R2 in the three-dimensional space. The first intersection R1 is the intersection of the object 54 with an extension of the straight line connecting the viewpoint V0 and the first corresponding point Q1 in the three-dimensional space. The second intersection R2 is the intersection of the object 54 with an extension of the straight line connecting the viewpoint V0 and the second corresponding point Q2 in the three-dimensional space. The image processing device 11 outputs the obtained calculation result. Specifically, the image processing device 11 outputs a numerical value representing the distance on the screen 80 as the calculation result. Alternatively, the image processing device 11 may output the calculation result in another form, such as sound. FIG. 4 shows an example in which the numerical value "10 mm" is displayed on the screen 80 as the calculation result.
According to the present embodiment, the accuracy of distance calculation between two points specified on the screen 80 on which the three-dimensional image 53 is displayed is improved. For example, even if the position of the viewpoint V0 with respect to the object 54 is changed and the way the three-dimensional image 53 is displayed changes, the distance between the two specified points is calculated according to the position of the viewpoint V0 rather than the display density, so accurate distance calculation is possible. Moreover, as shown in FIG. 5, the distance between the two specified points is calculated not as the distance ||Q2-Q1|| between the coordinates on the plane 55 corresponding to the screen 80, but according to the distance ||R2-R1|| between the coordinates on the object 54 reached by straight lines extending from the viewpoint V0 through those coordinates, so accurate distance calculation is possible.
The position specifying operation includes, as a first operation, an operation of pressing a push button of an input device. In the present embodiment, the first operation is an operation of pressing a button on the mouse 15 as the push button of the input device, but it may instead be an operation of pressing a specific key on the keyboard 14 as the push button of the input device. The image processing device 11 identifies, as the first corresponding point Q1, a point on the plane 55 that corresponds to the position of the pointer 86 on the screen 80 when the first operation is performed. FIG. 2 shows an example in which, when the first operation is performed, the pointer 86 is at a position corresponding to the upper end of the fossa ovalis 66 represented in the three-dimensional image 53. The pointer 86 has an arrow shape in this example, but it may have another shape such as a cross shape. For example, when the user switches the operation mode from another mode to the position specifying operation mode, the shape of the pointer 86 may be changed so that the user can easily recognize the operation mode. Then, when the user switches the operation mode from the position specifying operation mode to another mode, the shape of the pointer 86 may be changed again.
In the present embodiment, the image processing device 11 displays a mark 87 at a first corresponding position on the screen 80. The first corresponding position is the position on the screen 80 that corresponds to the intersection of the plane 55 with the straight line connecting the viewpoint V0 and the first intersection R1 in the three-dimensional space. After the first operation is performed, the first corresponding position remains the same as the position specified by the first operation until the viewpoint V0 is moved; when the viewpoint V0 is moved, however, the first corresponding position also moves away from the position specified by the first operation. For example, by storing the information of the mark 87 in association with the voxel corresponding to the first intersection R1, the mark 87 can be displayed for the same voxel based on the stored information even if the position of the viewpoint V0 changes. FIG. 3 shows an example in which the mark 87 is displayed at the position corresponding to the upper end of the fossa ovalis 66 as the first corresponding position.
From the viewpoint of preventing erroneous operations, the first operation may be an operation of pressing the push button while pressing a predetermined first key. The first key is, for example, the Ctrl key or the Shift key on the keyboard 14.
The position specifying operation includes, as a second operation, an operation of releasing the push button that is performed following the first operation and a drag operation of moving the pointer 86 while holding down the push button. The image processing device 11 identifies, as the second corresponding point Q2, the point on the plane 55 corresponding to the position of the pointer 86 when the second operation is performed. FIG. 3 shows an example in which the pointer 86 is at a position corresponding to the lower end of the fossa ovalis 66 when the second operation is performed.
In the present embodiment, the image processing device 11 displays a mark 88 at a second corresponding position on the screen 80. The second corresponding position is the position on the screen 80 that corresponds to the intersection of the plane 55 with the straight line connecting the viewpoint V0 and the second intersection R2 in the three-dimensional space. After the second operation is performed, the second corresponding position remains the same as the position specified by the second operation until the viewpoint V0 is moved; when the viewpoint V0 is moved, however, the second corresponding position also moves away from the position specified by the second operation. For example, by storing the information of the mark 88 in association with the voxel corresponding to the second intersection R2, the mark 88 can be displayed for the same voxel based on the stored information even if the position of the viewpoint V0 changes. FIG. 4 shows an example in which the mark 88 is displayed at the position corresponding to the lower end of the fossa ovalis 66 as the second corresponding position.
From the viewpoint of preventing erroneous operations, the second operation may be an operation of releasing the push button while pressing a predetermined second key. The second key is, for example, the Ctrl key or the Shift key on the keyboard 14. The second key may be the same key as the first key, or may be a different key from the first key.
According to the examples of FIGS. 2 to 4, the user can move the pointer 86 to a desired position with the mouse 15, press the button on the mouse 15 to specify the start point position, move the pointer 86 to another desired position while holding down the button on the mouse 15, and release the button on the mouse 15 to specify the end point position. As shown in FIG. 5, the image processing device 11 specifies the three-dimensional coordinates (xq1, yq1, dq) corresponding to the specified start point position as the coordinates of the first corresponding point Q1. The image processing device 11 specifies the three-dimensional coordinates (xq2, yq2, dq) corresponding to the specified end point position as the coordinates of the second corresponding point Q2. The image processing device 11 specifies, as the coordinates of the first intersection R1, the three-dimensional coordinates (xr1, yr1, dr1) at which the straight line passing through the coordinates (xv, yv, dv) of the viewpoint V0 and the coordinates (xq1, yq1, dq) of the first corresponding point Q1 reaches the object 54. The image processing device 11 specifies, as the coordinates of the second intersection R2, the three-dimensional coordinates (xr2, yr2, dr2) at which the straight line passing through the coordinates (xv, yv, dv) of the viewpoint V0 and the coordinates (xq2, yq2, dq) of the second corresponding point Q2 reaches the object 54. The image processing device 11 calculates the Euclidean distance √((xr2-xr1)² + (yr2-yr1)² + (dr2-dr1)²) between the coordinates (xr1, yr1, dr1) of the first intersection R1 and the coordinates (xr2, yr2, dr2) of the second intersection R2. The image processing device 11 outputs a numerical value representing the calculated Euclidean distance on the screen 80. Therefore, the user can easily and accurately measure any distance in the three-dimensional image 53, such as the length of the fossa ovalis 66.
As shown in FIGS. 6 to 9, the image processing device 11 further identifies, in response to a range specifying operation, a corresponding range 56 on the plane 55 corresponding to the screen 80 in the three-dimensional space. The range specifying operation is an operation to specify a range 89 on the screen 80. The corresponding range 56 is the range on the plane 55 that corresponds to the specified range 89. Among the first intersection R1 and the second intersection R2, the image processing device 11 changes the appearance, such as the color or shape, of the mark displayed on the screen 80 at the position corresponding to an intersection that exists within a three-dimensional region 57 as shown in FIG. 9. The three-dimensional region 57 is a region that extends in a conical shape from the viewpoint V0 through the outer edge of the corresponding range 56 in the three-dimensional space. FIG. 9 shows an example in which both the first intersection R1 and the second intersection R2 exist within the three-dimensional region 57. FIG. 8 shows an example in which the colors of the marks 87 and 88 displayed on the screen 80 at the positions corresponding to the first intersection R1 and the second intersection R2, respectively, are changed.
The image processing device 11 accepts batch operations on the marks existing within the specified range 89, such as an operation to collectively delete the marks whose appearance has been changed. Therefore, the user can perform efficient operations, such as selecting arbitrary marks on the screen 80 and erasing them all at once.
The range specifying operation includes, as a third operation, an operation of pressing a push button of an input device. In the present embodiment, the third operation is an operation of pressing a button on the mouse 15 as the push button of the input device, but it may instead be an operation of pressing a specific key on the keyboard 14 as the push button of the input device. The image processing device 11 identifies a point on the plane 55 that corresponds to the position of the pointer 86 on the screen 80 when the third operation is performed. FIG. 6 shows an example in which, when the third operation is performed, the pointer 86 is at a position corresponding to a point away from the fossa ovalis 66 to the upper left. The pointer 86 has an arrow shape in this example, but it may have another shape such as a cross shape. For example, when the user switches the operation mode from another mode to the range specifying operation mode, the shape of the pointer 86 may be changed so that the user can easily recognize the operation mode. Then, when the user switches the operation mode from the range specifying operation mode to another mode, the shape of the pointer 86 may be changed again.
 誤操作を防止する観点から、第3操作は、予め定められた第3キーを押しながら押しボタンを押す操作であってもよい。第3キーは、例えば、キーボード14のCtrlキー又はShiftキーである。第3キーは、第1キーと同じキーでもよいし、又は第1キーとは異なるキーでもよい。第3キーは、第2キーと同じキーでもよいし、又は第2キーとは異なるキーでもよい。 From the viewpoint of preventing erroneous operations, the third operation may be an operation of pressing a push button while pressing a predetermined third key. The third key is, for example, the Ctrl key or the Shift key on the keyboard 14. The third key may be the same key as the first key, or may be a different key from the first key. The third key may be the same key as the second key, or may be a different key from the second key.
The range specification operation further includes, as a fourth operation, an operation of releasing the push button, performed after the third operation and a drag operation of moving the pointer 86 while the push button is held down. The image processing device 11 identifies the point on the plane 55 that corresponds to the position of the pointer 86 at the time the fourth operation is performed. FIG. 7 shows an example in which, when the fourth operation is performed, the pointer 86 is at a position corresponding to a point away from the fossa ovalis 66 toward the lower right.
From the viewpoint of preventing erroneous operations, the fourth operation may be an operation of releasing the push button while holding down a predetermined fourth key. The fourth key is, for example, the Ctrl key or the Shift key of the keyboard 14. The fourth key may be the same as or different from the first key, the second key, or the third key.
In this embodiment, the image processing device 11 specifies, as the corresponding range 56, a rectangular range whose diagonal vertices are the point on the plane 55 corresponding to the position of the pointer 86 on the screen 80 when the third operation is performed and the point corresponding to the position of the pointer 86 when the fourth operation is performed. FIG. 8 shows an example in which a rectangular range whose diagonal vertices are at a position corresponding to a point away from the fossa ovalis 66 toward the upper left and at a position corresponding to a point away from the fossa ovalis 66 toward the lower right is specified as the range 89.
As shown in FIGS. 10 to 12, a circular range may be specified as the range 89 instead of a rectangular range. That is, the image processing device 11 may specify, as the corresponding range 56, a circular range whose center is the point on the plane 55 corresponding to the position of the pointer 86 on the screen 80 when the third operation is performed and whose circumference passes through the point corresponding to the position of the pointer 86 when the fourth operation is performed. FIG. 10 shows an example in which, when the third operation is performed, the pointer 86 is at a position corresponding to the center point of the fossa ovalis 66. FIG. 11 shows an example in which, when the fourth operation is performed, the pointer 86 is at a position corresponding to a point below the fossa ovalis 66. FIG. 12 shows an example in which a circular range whose center is at the position corresponding to the center point of the fossa ovalis 66 and whose circumference passes through the position corresponding to a point below the fossa ovalis 66 is specified as the range 89. As in FIG. 8, FIG. 12 shows an example in which the colors of the marks 87 and 88 displayed on the screen 80 at the positions corresponding to the first intersection point R1 and the second intersection point R2, respectively, are changed.
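As a rough illustration of how the range 89 might be derived from the two pointer positions, the following Python sketch builds a rectangle treating the press point and the release point as diagonal vertices, or a circle centered on the press point and passing through the release point. The class and function names are illustrative assumptions and do not appear in the embodiment.

```python
from dataclasses import dataclass
import math

@dataclass
class RectRange:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

@dataclass
class CircleRange:
    cx: float
    cy: float
    radius: float

    def contains(self, x: float, y: float) -> bool:
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

def rect_from_drag(press_pt, release_pt):
    """Rectangle whose diagonal vertices are the press point (third operation) and the release point (fourth operation)."""
    (x1, y1), (x2, y2) = press_pt, release_pt
    return RectRange(min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

def circle_from_drag(press_pt, release_pt):
    """Circle centered at the press point whose circumference passes through the release point."""
    (x1, y1), (x2, y2) = press_pt, release_pt
    return CircleRange(x1, y1, math.hypot(x2 - x1, y2 - y1))
```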
According to the example of FIGS. 6 to 8, the user can move the pointer 86 to a desired position with the mouse 15, press the mouse button to specify one vertex position, move the pointer 86 to another desired position while holding the mouse button down, and release the mouse button to specify a rectangular range as the range 89. Alternatively, according to the example of FIGS. 10 to 12, the user can move the pointer 86 to a desired position with the mouse 15, press the mouse button to specify the center point, move the pointer 86 to another desired position while holding the mouse button down, and release the mouse button to specify a circular range as the range 89. As shown in FIG. 9, the image processing device 11 specifies the two-dimensional range corresponding to the specified range 89 as the corresponding range 56. The image processing device 11 specifies, as the three-dimensional region 57, the region that spreads conically from the coordinates (xv, yv, dv) of the viewpoint V0 through the outer edge of the corresponding range 56. The image processing device 11 determines that the coordinates (xr1, yr1, dr1) of the first intersection point R1 and the coordinates (xr2, yr2, dr2) of the second intersection point R2 lie within the three-dimensional region 57. The image processing device 11 then changes the colors of the marks 87 and 88 displayed on the screen 80 at the positions corresponding to the first intersection point R1 and the second intersection point R2, respectively. The user can thus select the marks 87 and 88 easily.
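One way to realize the containment test described above is to cast a ray from the viewpoint V0 through the intersection point, intersect it with the plane 55, and check whether the hit point falls inside the corresponding range 56; this is geometrically equivalent to testing membership in the conical region 57. The sketch below is a minimal illustration under the assumption that the plane 55 is given by an origin, two orthonormal in-plane axes, and a normal vector, and that range_2d is any object exposing a contains(u, v) method, such as the RectRange or CircleRange helpers sketched above; all names are assumptions.

```python
import numpy as np

def in_selection_region(point, viewpoint, plane_origin, plane_u, plane_v, plane_normal, range_2d):
    """
    Test whether a 3D intersection point (e.g. R1 or R2) lies inside the conical
    region spanned from the viewpoint through the outer edge of the corresponding
    range on the plane: cast a ray from the viewpoint through the point, find where
    it crosses the plane, and check whether that crossing falls inside range_2d.
    """
    point = np.asarray(point, dtype=float)
    viewpoint = np.asarray(viewpoint, dtype=float)
    direction = point - viewpoint

    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-12:
        return False  # ray parallel to the plane; it cannot cross the range
    t = np.dot(plane_normal, np.asarray(plane_origin, dtype=float) - viewpoint) / denom
    if t <= 0:
        return False  # plane is behind the viewpoint along this ray
    hit = viewpoint + t * direction

    # express the hit point in the plane's 2D (u, v) coordinates
    u = np.dot(hit - plane_origin, plane_u)
    v = np.dot(hit - plane_origin, plane_v)
    return range_2d.contains(u, v)
```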
As shown in FIG. 14, the image processing device 11 forms, in the three-dimensional data 52, a cutting region 62 that exposes the lumen 63 of the biological tissue 60 in the three-dimensional image 53. The image processing device 11 adjusts the viewpoint V0 used when displaying the three-dimensional image 53 on the display 16 according to the position of the cutting region 62. When the image processing device 11 receives a user operation requesting rotation of the viewpoint V0, it changes the position of the cutting region 62 from the first position, which is its position at the time of the user operation, to a second position rotated about a rotation axis that passes through a reference point located inside the lumen 63 on a reference plane extending horizontally in the three-dimensional image 53 and containing the viewpoint V0, and that extends in the direction perpendicular to the reference plane. The horizontal directions are the X and Y directions shown in FIG. 14. The direction perpendicular to the reference plane is the Z direction shown in FIG. 14. The reference point may be any point located inside the lumen 63 on the reference plane, such as the center point of the IVUS catheter, but in this embodiment it is the centroid of the lumen 63 on the reference plane. In the example of FIG. 14, when the cross section C1 of the biological tissue 60 coincides with the reference plane, the corresponding centroid B1 is the reference point. The same applies to the cross sections C2, C3, and C4 of the biological tissue 60 and the corresponding centroids B2, B3, and B4. The image processing device 11 rotates the viewpoint V0 about the rotation axis according to the second position.
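A minimal sketch of the rotation about a vertical axis through the reference point, assuming simple Cartesian coordinates and a standard Z-axis rotation matrix, might look as follows; the function name is an assumption.

```python
import numpy as np

def rotate_about_vertical_axis(position, reference_point, angle_rad):
    """
    Rotate a 3D position (e.g. the viewpoint V0 or a point defining the cutting
    region) about an axis that passes through the reference point and extends
    in the Z direction, i.e. perpendicular to the horizontal reference plane.
    """
    position = np.asarray(position, dtype=float)
    reference_point = np.asarray(reference_point, dtype=float)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return reference_point + rot_z @ (position - reference_point)
```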
According to this embodiment, the usefulness of confirming a position inside the biological tissue 60 is improved. For example, when a procedure such as ablation using IVUS is performed, a user such as a physician performing the procedure while operating a catheter, or a clinical engineer operating the IVUS system while watching the display 16, can rotate the viewpoint V0 about the rotation axis by performing a specific user operation.
As shown in FIGS. 2 to 4, 6 to 8, and 10 to 12, the image processing device 11 causes the display 16 to display, together with the three-dimensional image 53, a two-dimensional image 58 representing a cross section 64 of the biological tissue 60 and a region 65 of the cross section 64 corresponding to the cutting region 62. In the two-dimensional image 58, the position of the camera 71 relative to the cross section 64 is displayed.
According to this embodiment, it is possible to show how part of the structure of the biological tissue 60 has been cut away. The user can therefore understand from the two-dimensional image 58 the structure of the portion of the biological tissue 60 that is cut away and not displayed in the three-dimensional image 53. For example, if the user is an operator, this makes it easier to perform a procedure on the inside of the biological tissue 60.
The biological tissue 60 includes, for example, blood vessels or organs such as the heart. The biological tissue 60 is not limited to an anatomically single organ or part of one; it also includes tissue that has a lumen spanning multiple organs. A specific example of such tissue is the portion of the vascular system that extends from the upper part of the inferior vena cava, passes through the right atrium, and reaches the lower part of the superior vena cava.
In FIGS. 2 to 4, 6 to 8, and 10 to 12, an operation panel 81, the two-dimensional image 58, the three-dimensional image 53, and the pointer 86 are displayed on the screen 80.
The operation panel 81 is a GUI component for setting the cutting region 62. "GUI" is an abbreviation for graphical user interface. The operation panel 81 is provided with a check box 82 for selecting whether to activate the setting of the cutting region 62, a slider 83 for setting the base angle, a slider 84 for setting the opening angle, and a check box 85 for selecting whether to use the centroid.
The base angle is the rotation angle of one straight line L1 of the two straight lines L1 and L2 extending from a single point M in the cross-sectional image representing the cross section 64 of the biological tissue 60. Setting the base angle therefore corresponds to setting the direction of the straight line L1. The opening angle is the angle between the two straight lines L1 and L2. Setting the opening angle therefore corresponds to setting the angle formed by the two straight lines L1 and L2. The point M is the centroid of the cross section 64. When use of the centroid is not selected, the point M may be set at a point of the cross section 64 other than the centroid.
The two-dimensional image 58 is an image obtained by processing a cross-sectional image. In the two-dimensional image 58, the color of the region 65 corresponding to the cutting region 62 is changed to clearly indicate which part of the cross section 64 is cut away.
In this embodiment, the viewpoint V0 used when displaying the three-dimensional image 53 on the screen 80 is adjusted according to the position of the cutting region 62.
In this embodiment, the cutting region 62 can be determined using the two-dimensional image 58. Specifically, as shown in FIG. 13, the position or size of the cutting region 62 can be set by adjusting the base angle or the opening angle and thereby setting the position or size of the region 65 delimited by the two straight lines L1 and L2 in the two-dimensional image 58. For example, if the base angle is changed so that the straight line L1 rotates approximately 90 degrees counterclockwise, a region 65a that has moved in accordance with the change in the base angle is obtained in the two-dimensional image 58a, and the position of the cutting region 62 is adjusted according to the position of the region 65a. Alternatively, if the opening angle is changed so that the angle between the two straight lines L1 and L2 becomes larger, a region 65b that has expanded in accordance with the change in the opening angle is obtained in the two-dimensional image 58b, and the size of the cutting region 62 is adjusted according to the size of the region 65b. By adjusting both the base angle and the opening angle and thereby setting both the position and the size of the region 65 in the two-dimensional image 58, both the position and the size of the cutting region 62 can also be set. The position of the camera 71 may be adjusted as appropriate according to the position or size of the cutting region 62.
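The relationship between the base angle, the opening angle, and the region 65 can be illustrated with a small sketch that marks the pixels of a cross-sectional image lying between the two straight lines L1 and L2 extending from the point M. The angle conventions (counterclockwise, measured from the image x-axis) and the function name are assumptions made for illustration only.

```python
import numpy as np

def sector_mask(shape, center, base_angle_deg, opening_angle_deg):
    """
    Mark the pixels of a cross-sectional image that fall inside the region
    delimited by the two straight lines L1 and L2 extending from the point M:
    L1 points in the direction of the base angle, and the region spans the
    opening angle measured counterclockwise from L1.
    """
    h, w = shape
    cy, cx = center
    yy, xx = np.mgrid[0:h, 0:w]
    angles = np.degrees(np.arctan2(yy - cy, xx - cx)) % 360.0
    start = base_angle_deg % 360.0
    span = opening_angle_deg % 360.0
    return (angles - start) % 360.0 <= span

# Example: a 512x512 cross-section, M at the image center,
# base angle 30 degrees, opening angle 90 degrees.
mask = sector_mask((512, 512), (256, 256), 30.0, 90.0)
```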
In this embodiment, the image corresponding to the current position of the sensor, that is, the latest image, is always displayed as the two-dimensional image 58.
As a modification of this embodiment, the base angle may be set by dragging the straight line L1 or by entering a numerical value, instead of by operating the slider 83. Similarly, the opening angle may be set by dragging the straight line L2 or by entering a numerical value, instead of by operating the slider 84.
In the three-dimensional image 53, the cutting region 62 determined using the two-dimensional image 58 is hidden or rendered transparent.
In FIG. 14, the X direction and the Y direction orthogonal to the X direction each correspond to the transverse direction of the lumen 63 of the biological tissue 60. The Z direction orthogonal to the X direction and the Y direction corresponds to the longitudinal direction of the lumen 63 of the biological tissue 60.
In the example of FIG. 14, it is assumed that the check box 85 of the operation panel 81 is checked, that is, that use of the centroid is selected. The image processing device 11 uses the three-dimensional data 52 to calculate the positions of the centroids B1, B2, B3, and B4 of the cross sections C1, C2, C3, and C4 of the biological tissue 60, respectively. The image processing device 11 sets, as cutting planes P1 and P2, two planes that intersect at a single line Lb passing through the positions of the centroids B1, B2, B3, and B4 and that contain the two straight lines L1 and L2, respectively. For example, if the point M shown in FIGS. 2 to 4, 6 to 8, and 10 to 12 is the point B3, the straight line L1 is the line of intersection of the cross section C3 and the cutting plane P1, and the straight line L2 is the line of intersection of the cross section C3 and the cutting plane P2. The image processing device 11 forms, in the three-dimensional data 52, the region of the three-dimensional image 53 that is sandwiched between the cutting planes P1 and P2 and that exposes the lumen 63 of the biological tissue 60 as the cutting region 62.
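One possible way to realize this slice-by-slice cutting, under the simplifying assumption that the same base and opening angles are applied around each slice's own centroid, is sketched below. It reuses the sector_mask helper from the earlier sketch and treats the three-dimensional data as a stack of cross-sectional slices; the voxels falling inside the sector around each centroid are marked as belonging to the cutting region 62, for example so they can be hidden or made transparent before rendering. All names are illustrative, not the embodiment's own.

```python
import numpy as np

def cutting_region_mask(volume, centroids, base_angle_deg, opening_angle_deg):
    """
    Build a boolean mask of the cutting region for a volume stored as
    (num_slices, height, width). 'centroids' holds one (cy, cx) centroid per
    slice (B1, B2, ... in the embodiment). Each slice is cut by the sector
    spanned between the lines L1 and L2 around its own centroid, which keeps
    the cut following the lumen even when the tissue is bent.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    for z, centroid in enumerate(centroids):
        mask[z] = sector_mask(volume.shape[1:], centroid,
                              base_angle_deg, opening_angle_deg)
    return mask

# The cutting region can then be hidden before rendering, for example:
# visible = np.where(cutting_region_mask(volume, centroids, 30.0, 90.0), 0, volume)
```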
In the case of a three-dimensional model of bent biological tissue 60 as in FIG. 14, if the model is cut with a single plane to display the lumen 63, there are cases in which the inside of the biological tissue 60 cannot be displayed correctly. In this embodiment, by continuing to track the centroid of each cross section of the biological tissue 60 as in FIG. 14, the three-dimensional model can be cut so that the inside of the biological tissue 60 is reliably displayed.
In FIG. 14, four cross sections C1, C2, C3, and C4 are shown for convenience as the plurality of transverse cross sections of the lumen 63 of the biological tissue 60, but the number of cross sections for which centroid positions are calculated is not limited to four and is preferably equal to the number of cross-sectional images acquired by IVUS.
As an example different from FIG. 14, assume that the check box 85 of the operation panel 81 is not checked, that is, that it is selected not to use the centroid. In such an example, the image processing device 11 sets, as the cutting planes P1 and P2, two planes that intersect at a single arbitrary line passing through the point M, such as a straight line extending in the Z direction through the point M, and that contain the two straight lines L1 and L2, respectively.
The configuration of the image processing system 10 according to this embodiment will be described with reference to FIG. 1.
The image processing system 10 includes the image processing device 11, a cable 12, a drive unit 13, the keyboard 14, the mouse 15, and the display 16.
The image processing device 11 is, in this embodiment, a dedicated computer specialized for diagnostic imaging, but it may be a general-purpose computer such as a PC. "PC" is an abbreviation for personal computer.
The cable 12 is used to connect the image processing device 11 and the drive unit 13.
The drive unit 13 is a device that is used while connected to the probe 20 shown in FIG. 16 and drives the probe 20. The drive unit 13 is also called an MDU. "MDU" is an abbreviation for motor drive unit. The probe 20 is applied to IVUS. The probe 20 is also called an IVUS catheter or a diagnostic imaging catheter.
The keyboard 14, the mouse 15, and the display 16 are connected to the image processing device 11 via arbitrary cables or wirelessly. The display 16 is, for example, an LCD, an organic EL display, or an HMD. "LCD" is an abbreviation for liquid crystal display. "EL" is an abbreviation for electroluminescence. "HMD" is an abbreviation for head-mounted display.
The image processing system 10 optionally further includes a connection terminal 17 and a cart unit 18.
The connection terminal 17 is used to connect the image processing device 11 and an external device. The connection terminal 17 is, for example, a USB terminal. "USB" is an abbreviation for Universal Serial Bus. The external device is, for example, a recording device such as a magnetic disk drive, a magneto-optical disk drive, or an optical disk drive.
The cart unit 18 is a cart with casters for movement. The image processing device 11, the cable 12, and the drive unit 13 are installed in the cart body of the cart unit 18. The keyboard 14, the mouse 15, and the display 16 are installed on the topmost table of the cart unit 18.
The configurations of the probe 20 and the drive unit 13 according to this embodiment will be described with reference to FIG. 16.
The probe 20 includes a drive shaft 21, a hub 22, a sheath 23, an outer tube 24, an ultrasound transducer 25, and a relay connector 26.
The drive shaft 21 passes through the sheath 23, which is inserted into a body cavity of a living body, and the outer tube 24 connected to the proximal end of the sheath 23, and extends into the hub 22 provided at the proximal end of the probe 20. The drive shaft 21 has, at its distal end, the ultrasound transducer 25 that transmits and receives signals, and is rotatably provided inside the sheath 23 and the outer tube 24. The relay connector 26 connects the sheath 23 and the outer tube 24.
The hub 22, the drive shaft 21, and the ultrasound transducer 25 are connected to one another so as to move forward and backward integrally in the axial direction. Therefore, for example, when the hub 22 is pushed toward the distal side, the drive shaft 21 and the ultrasound transducer 25 move toward the distal side inside the sheath 23. When the hub 22 is pulled toward the proximal side, the drive shaft 21 and the ultrasound transducer 25 move toward the proximal side inside the sheath 23, as indicated by the arrows.
The drive unit 13 includes a scanner unit 31, a slide unit 32, and a bottom cover 33.
The scanner unit 31 is also called a pullback unit. The scanner unit 31 is connected to the image processing device 11 via the cable 12. The scanner unit 31 includes a probe connection portion 34 that connects to the probe 20 and a scanner motor 35 that serves as a drive source for rotating the drive shaft 21.
The probe connection portion 34 is detachably connected to the probe 20 via an insertion port 36 of the hub 22 provided at the proximal end of the probe 20. Inside the hub 22, the proximal end of the drive shaft 21 is rotatably supported, and the rotational force of the scanner motor 35 is transmitted to the drive shaft 21. Signals are also transmitted and received between the drive shaft 21 and the image processing device 11 via the cable 12. In the image processing device 11, a tomographic image of a body lumen is generated and image processing is performed based on the signals transmitted from the drive shaft 21.
The slide unit 32 carries the scanner unit 31 so that it can move forward and backward, and is mechanically and electrically connected to the scanner unit 31. The slide unit 32 includes a probe clamp portion 37, a slide motor 38, and a switch group 39.
The probe clamp portion 37 is disposed coaxially with the probe connection portion 34 on the distal side thereof and supports the probe 20 connected to the probe connection portion 34.
The slide motor 38 is a drive source that generates a driving force in the axial direction. The scanner unit 31 moves forward and backward when driven by the slide motor 38, and the drive shaft 21 accordingly moves forward and backward in the axial direction. The slide motor 38 is, for example, a servomotor.
The switch group 39 includes, for example, a forward switch and a pullback switch that are pressed when the scanner unit 31 is moved forward or backward, and a scan switch that is pressed when image rendering is started and ended. The switch group 39 is not limited to this example, and various switches may be included as necessary.
When the forward switch is pressed, the slide motor 38 rotates forward and the scanner unit 31 moves forward. When the pullback switch is pressed, the slide motor 38 rotates in reverse and the scanner unit 31 moves backward.
When the scan switch is pressed, image rendering starts, the scanner motor 35 is driven, and the slide motor 38 is driven to move the scanner unit 31 backward. A user such as an operator connects the probe 20 to the scanner unit 31 in advance so that, when image rendering starts, the drive shaft 21 rotates while moving toward the proximal side in the axial direction. The scanner motor 35 and the slide motor 38 stop when the scan switch is pressed again, and image rendering ends.
The bottom cover 33 covers the bottom surface of the slide unit 32 and the entire circumference of its bottom-side side surfaces, and can move toward and away from the bottom surface of the slide unit 32.
The configuration of the image processing device 11 will be described with reference to FIG. 15.
The image processing device 11 includes a control unit 41, a storage unit 42, a communication unit 43, an input unit 44, and an output unit 45.
The control unit 41 includes at least one processor, at least one programmable circuit, at least one dedicated circuit, or any combination thereof. The processor is a general-purpose processor such as a CPU or a GPU, or a dedicated processor specialized for specific processing. "CPU" is an abbreviation for central processing unit. "GPU" is an abbreviation for graphics processing unit. The programmable circuit is, for example, an FPGA. "FPGA" is an abbreviation for field-programmable gate array. The dedicated circuit is, for example, an ASIC. "ASIC" is an abbreviation for application specific integrated circuit. The control unit 41 executes processing related to the operation of the image processing device 11 while controlling each part of the image processing system 10 including the image processing device 11.
The storage unit 42 includes at least one semiconductor memory, at least one magnetic memory, at least one optical memory, or any combination thereof. The semiconductor memory is, for example, a RAM or a ROM. "RAM" is an abbreviation for random access memory. "ROM" is an abbreviation for read only memory. The RAM is, for example, an SRAM or a DRAM. "SRAM" is an abbreviation for static random access memory. "DRAM" is an abbreviation for dynamic random access memory. The ROM is, for example, an EEPROM. "EEPROM" is an abbreviation for electrically erasable programmable read only memory. The storage unit 42 functions as, for example, a main storage device, an auxiliary storage device, or a cache memory. The storage unit 42 stores data used for the operation of the image processing device 11, such as the tomographic data 51, and data obtained by the operation of the image processing device 11, such as the three-dimensional data 52 and the three-dimensional image 53.
The communication unit 43 includes at least one communication interface. The communication interface is, for example, a wired LAN interface, a wireless LAN interface, or a diagnostic imaging interface that receives and A/D converts IVUS signals. "LAN" is an abbreviation for local area network. "A/D" is an abbreviation for analog to digital. The communication unit 43 receives data used for the operation of the image processing device 11 and transmits data obtained by the operation of the image processing device 11. In this embodiment, the drive unit 13 is connected to the diagnostic imaging interface included in the communication unit 43.
The input unit 44 includes at least one input interface. The input interface is, for example, a USB interface, an HDMI (registered trademark) interface, or an interface compatible with a short-range wireless communication standard such as Bluetooth (registered trademark). "HDMI (registered trademark)" is an abbreviation for High-Definition Multimedia Interface. The input unit 44 accepts user operations such as operations for inputting data used for the operation of the image processing device 11. In this embodiment, the keyboard 14 and the mouse 15 are connected to the USB interface or the short-range wireless communication interface included in the input unit 44. When a touch screen is provided integrally with the display 16, the display 16 may be connected to the USB interface or the HDMI (registered trademark) interface included in the input unit 44.
The output unit 45 includes at least one output interface. The output interface is, for example, a USB interface, an HDMI (registered trademark) interface, or an interface compatible with a short-range wireless communication standard such as Bluetooth (registered trademark). The output unit 45 outputs data obtained by the operation of the image processing device 11. In this embodiment, the display 16 is connected to the USB interface or the HDMI (registered trademark) interface included in the output unit 45.
The functions of the image processing device 11 are realized by executing the image processing program according to this embodiment on a processor serving as the control unit 41. That is, the functions of the image processing device 11 are realized by software. The image processing program causes a computer to function as the image processing device 11 by causing the computer to execute the operations of the image processing device 11. That is, the computer functions as the image processing device 11 by executing the operations of the image processing device 11 in accordance with the image processing program.
The program can be stored on a non-transitory computer-readable medium. The non-transitory computer-readable medium is, for example, a flash memory, a magnetic recording device, an optical disc, a magneto-optical recording medium, or a ROM. The program is distributed, for example, by selling, transferring, or lending a portable medium such as an SD card, a DVD, or a CD-ROM storing the program. "SD" is an abbreviation for Secure Digital. "DVD" is an abbreviation for digital versatile disc. "CD-ROM" is an abbreviation for compact disc read only memory. The program may also be distributed by storing it in the storage of a server and transferring it from the server to another computer. The program may be provided as a program product.
The computer temporarily stores, for example, a program stored on a portable medium or a program transferred from a server in its main storage device. The computer then reads the program stored in the main storage device with its processor and executes processing according to the read program with the processor. The computer may read the program directly from the portable medium and execute processing according to the program. The computer may also execute processing sequentially according to the received program each time the program is transferred from the server to the computer. Processing may be executed by a so-called ASP type service that realizes functions only through execution instructions and acquisition of results, without transferring the program from the server to the computer. "ASP" is an abbreviation for application service provider. The program includes information that is used for processing by an electronic computer and is equivalent to a program. For example, data that is not a direct command to a computer but has the property of defining processing by the computer corresponds to "information equivalent to a program."
Some or all of the functions of the image processing device 11 may be realized by a programmable circuit or a dedicated circuit serving as the control unit 41. That is, some or all of the functions of the image processing device 11 may be realized by hardware.
The operation of the image processing system 10 according to this embodiment will be described with reference to FIGS. 17 and 18. The operation of the image processing system 10 corresponds to the image display method according to this embodiment.
Before the flow of FIG. 17 starts, the probe 20 is primed by the user. The probe 20 is then fitted into the probe connection portion 34 and the probe clamp portion 37 of the drive unit 13, and is connected and fixed to the drive unit 13. The probe 20 is then inserted to a target site within the biological tissue 60, such as a blood vessel or the heart.
In step S101, the scan switch included in the switch group 39 is pressed, and the pullback switch included in the switch group 39 is further pressed, whereby a so-called pullback operation is performed. Inside the biological tissue 60, the probe 20 transmits ultrasound with the ultrasound transducer 25, which retreats in the axial direction during the pullback operation. The ultrasound transducer 25 transmits ultrasound radially while moving inside the biological tissue 60. The ultrasound transducer 25 receives reflected waves of the transmitted ultrasound. The probe 20 inputs the signals of the reflected waves received by the ultrasound transducer 25 to the image processing device 11. The control unit 41 of the image processing device 11 processes the input signals to sequentially generate cross-sectional images of the biological tissue 60, thereby acquiring the tomographic data 51, which includes a plurality of cross-sectional images.
Specifically, inside the biological tissue 60, the probe 20 rotates the ultrasound transducer 25 in the circumferential direction while moving it in the axial direction, and the ultrasound transducer 25 transmits ultrasound in a plurality of directions outward from the center of rotation. The probe 20 receives, with the ultrasound transducer 25, reflected waves from reflecting objects present in each of the plurality of directions inside the biological tissue 60. The probe 20 transmits the received reflected-wave signals to the image processing device 11 via the drive unit 13 and the cable 12. The communication unit 43 of the image processing device 11 receives the signals transmitted from the probe 20. The communication unit 43 A/D converts the received signals and inputs the converted signals to the control unit 41. The control unit 41 processes the input signals to calculate the intensity value distribution of the reflected waves from the reflecting objects present in each ultrasound transmission direction of the ultrasound transducer 25. The control unit 41 sequentially generates two-dimensional images having a brightness value distribution corresponding to the calculated intensity value distribution as cross-sectional images of the biological tissue 60, thereby acquiring the tomographic data 51, which is a data set of cross-sectional images. The control unit 41 stores the acquired tomographic data 51 in the storage unit 42.
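As a rough illustration of how an intensity value distribution measured along each transmission direction might be turned into a cross-sectional image, the sketch below performs a simple polar-to-Cartesian scan conversion with nearest-neighbor lookup. The array layout (angles by samples) and the direct mapping of intensity to brightness are assumptions; an actual implementation would typically add interpolation and log compression.

```python
import numpy as np

def scan_convert(intensity, image_size):
    """
    Convert reflected-wave intensities sampled as (num_angles, num_samples),
    where each row is one transmission direction from the rotation center,
    into a square cross-sectional image whose brightness follows the intensity.
    """
    num_angles, num_samples = intensity.shape
    half = image_size / 2.0
    yy, xx = np.mgrid[0:image_size, 0:image_size]
    dx, dy = xx - half, yy - half
    radius = np.sqrt(dx ** 2 + dy ** 2)
    angle = np.arctan2(dy, dx) % (2 * np.pi)

    angle_idx = np.minimum((angle / (2 * np.pi) * num_angles).astype(int), num_angles - 1)
    radius_idx = np.minimum((radius / half * num_samples).astype(int), num_samples - 1)

    image = intensity[angle_idx, radius_idx]
    image[radius > half] = 0  # pixels outside the imaging radius stay dark
    return image
```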
In this embodiment, the reflected-wave signals received by the ultrasound transducer 25 correspond to the raw data of the tomographic data 51, and the cross-sectional images generated by the image processing device 11 by processing the reflected-wave signals correspond to the processed data of the tomographic data 51.
As a modification of this embodiment, the control unit 41 of the image processing device 11 may store the signals input from the probe 20 in the storage unit 42 as the tomographic data 51 without modification. Alternatively, the control unit 41 may store, as the tomographic data 51, data indicating the intensity value distribution of the reflected waves calculated by processing the signals input from the probe 20. That is, the tomographic data 51 is not limited to a data set of cross-sectional images of the biological tissue 60 and may be any data that represents, in some form, the cross section of the biological tissue 60 at each movement position of the ultrasound transducer 25.
As a modification of this embodiment, instead of the ultrasound transducer 25 that transmits ultrasound in a plurality of directions while rotating in the circumferential direction, an ultrasound transducer that transmits ultrasound in a plurality of directions without rotating may be used.
As a modification of this embodiment, the tomographic data 51 may be acquired using OFDI or OCT instead of IVUS. "OFDI" is an abbreviation for optical frequency domain imaging. "OCT" is an abbreviation for optical coherence tomography. When OFDI or OCT is used, a sensor that acquires the tomographic data 51 by emitting light in the lumen 63 of the biological tissue 60 is used as the sensor that acquires the tomographic data 51 while moving through the lumen 63, instead of the ultrasound transducer 25 that acquires the tomographic data 51 by transmitting ultrasound in the lumen 63.
As a modification of this embodiment, instead of the image processing device 11 generating the data set of cross-sectional images of the biological tissue 60, another device may generate an equivalent data set, and the image processing device 11 may acquire the data set from that other device. That is, instead of the control unit 41 of the image processing device 11 processing the IVUS signals to generate cross-sectional images of the biological tissue 60, another device may process the IVUS signals to generate the cross-sectional images and input the generated cross-sectional images to the image processing device 11.
In step S102, the control unit 41 of the image processing device 11 generates the three-dimensional data 52 of the biological tissue 60 based on the tomographic data 51 acquired in step S101. That is, the control unit 41 generates the three-dimensional data 52 based on the tomographic data 51 acquired by the sensor. If already generated three-dimensional data 52 exists, it is preferable to update only the data at the locations corresponding to the updated tomographic data 51, rather than regenerating all of the three-dimensional data 52 from scratch. In that case, the amount of data processing required to generate the three-dimensional data 52 can be reduced, and the real-time performance of the three-dimensional image 53 in the subsequent step S103 can be improved.
Specifically, the control unit 41 of the image processing device 11 generates the three-dimensional data 52 of the biological tissue 60 by stacking the cross-sectional images of the biological tissue 60 included in the tomographic data 51 stored in the storage unit 42 and converting them into three dimensions. As the three-dimensionalization method, any of various techniques may be used, such as rendering methods including surface rendering and volume rendering, together with associated processing such as texture mapping, including environment mapping, and bump mapping. The control unit 41 stores the generated three-dimensional data 52 in the storage unit 42.
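A minimal sketch of stacking cross-sectional images into a voxel volume, and of updating only the slices whose tomographic data changed, might look as follows; the slice-index bookkeeping is an assumption made for illustration and is not specified by the embodiment.

```python
import numpy as np

def build_volume(cross_sections):
    """Stack 2D cross-sectional images (all of the same shape) into a (z, y, x) volume."""
    return np.stack(cross_sections, axis=0)

def update_volume(volume, updated_slices):
    """
    Update only the slices whose tomographic data changed, instead of
    rebuilding the whole volume, to keep the 3D image close to real time.
    'updated_slices' maps slice index -> new cross-sectional image.
    """
    for z, image in updated_slices.items():
        volume[z] = image
    return volume
```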
When an elongated medical instrument, such as a catheter other than the IVUS catheter, is inserted into the biological tissue 60, the tomographic data 51 includes data of the elongated medical instrument in the same way that it includes data of the biological tissue 60. Therefore, in step S102, the three-dimensional data 52 generated by the control unit 41 also includes data of the elongated medical instrument, in the same way that it includes data of the biological tissue 60.
The control unit 41 of the image processing device 11 classifies the pixels of the cross-sectional images included in the tomographic data 51 acquired in step S101 into two or more classes. These two or more classes include at least a "tissue" class to which the biological tissue 60 belongs and a "medical instrument" class to which the elongated medical instrument belongs, and may further include a "blood cell" class, an "indwelling object" class such as an indwelling stent, or a "lesion" class such as calcification or plaque. Any classification method may be used, but in this embodiment a method of classifying the pixels of a cross-sectional image with a trained model is used. The trained model is trained in advance by machine learning so that it can detect the regions corresponding to each class from sample IVUS cross-sectional images.
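The per-pixel classification step can be pictured as applying a segmentation model to each cross-sectional image and keeping one label per pixel. The sketch below deliberately treats the trained model as an opaque callable returning per-class scores, since the embodiment does not specify the model architecture; the class list, the callable's interface, and the function names are assumptions.

```python
import numpy as np

CLASSES = ["tissue", "medical_instrument", "blood_cell", "indwelling_object", "lesion"]

def classify_cross_section(model, image):
    """
    Classify every pixel of a cross-sectional image into one of the classes above.
    'model' is assumed to be a callable that takes an image of shape (H, W) and
    returns per-class scores of shape (len(CLASSES), H, W).
    """
    scores = model(image)
    labels = np.argmax(scores, axis=0)  # (H, W) array of class indices
    return labels

def tissue_mask(labels):
    """Boolean mask of the pixels classified as biological tissue."""
    return labels == CLASSES.index("tissue")
```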
In step S103, the control unit 41 of the image processing device 11 causes the display 16 to display the three-dimensional data 52 generated in step S102 as the three-dimensional image 53. At this point, the control unit 41 may set the angle at which the three-dimensional image 53 is displayed to any angle. The control unit 41 causes the display 16 to display the latest cross-sectional image included in the tomographic data 51 acquired in step S101 together with the three-dimensional image 53.
Specifically, the control unit 41 of the image processing device 11 generates the three-dimensional image 53 from the three-dimensional data 52 stored in the storage unit 42. The three-dimensional image 53 includes a group of three-dimensional objects, such as a three-dimensional object representing the biological tissue 60 and a three-dimensional object representing the elongated medical instrument. That is, the control unit 41 generates the three-dimensional object of the biological tissue 60 from the data of the biological tissue 60 stored in the storage unit 42, and generates the three-dimensional object of the elongated medical instrument from the data of the elongated medical instrument stored in the storage unit 42. The control unit 41 causes the display 16, via the output unit 45, to display the latest cross-sectional image of the biological tissue 60 included in the tomographic data 51 stored in the storage unit 42 together with the generated three-dimensional image 53.
In step S104, if there is a user change operation, that is, an operation to set the angle at which the three-dimensional image 53 is displayed, the processing of step S105 is executed. If there is no user change operation, the processing of step S106 is executed.
In step S105, the control unit 41 of the image processing device 11 accepts, via the input unit 44, an operation to set the angle at which the three-dimensional image 53 is displayed. The control unit 41 adjusts the angle at which the three-dimensional image 53 is displayed to the set angle. Then, in step S103, the control unit 41 causes the display 16 to display the three-dimensional image 53 at the angle set in step S105.
Specifically, the control unit 41 of the image processing device 11 accepts, via the input unit 44, an operation in which the user rotates the three-dimensional image 53 displayed on the display 16 using the keyboard 14, the mouse 15, or a touch screen provided integrally with the display 16. The control unit 41 interactively adjusts the angle at which the three-dimensional image 53 is displayed on the display 16 according to the user's operation. Alternatively, the control unit 41 accepts, via the input unit 44, an operation in which the user inputs a numerical value for the display angle of the three-dimensional image 53 using the keyboard 14, the mouse 15, or a touch screen provided integrally with the display 16. The control unit 41 adjusts the angle at which the three-dimensional image 53 is displayed on the display 16 according to the input numerical value.
In step S106, if the tomographic data 51 has been updated, the processing of steps S107 and S108 is executed. If the tomographic data 51 has not been updated, the presence or absence of a user change operation is checked again in step S104.
In step S107, the control unit 41 of the image processing device 11 processes the signals input from the probe 20 to newly generate cross-sectional images of the biological tissue 60, as in step S101, thereby acquiring tomographic data 51 that includes at least one new cross-sectional image.
In step S108, the control unit 41 of the image processing device 11 updates the three-dimensional data 52 of the biological tissue 60 based on the tomographic data 51 acquired in step S107. That is, the control unit 41 updates the three-dimensional data 52 based on the tomographic data 51 acquired by the sensor. Then, in step S103, the control unit 41 causes the display 16 to display the three-dimensional data 52 updated in step S108 as the three-dimensional image 53. The control unit 41 causes the display 16 to display the latest cross-sectional image included in the tomographic data 51 acquired in step S107 together with the three-dimensional image 53. In step S108, it is preferable to update only the data at the locations corresponding to the updated tomographic data 51. In that case, the amount of data processing required to generate the three-dimensional data 52 can be reduced, and the real-time performance of the three-dimensional image 53 in step S108 can be improved.
 In step S111, if the user performs, as a setting operation, an operation for setting the cutting region 62, the process of step S112 is executed.
 In step S112, the control unit 41 of the image processing device 11 receives the operation for setting the cutting region 62 via the input unit 44.
 Specifically, the control unit 41 of the image processing device 11 receives, via the input unit 44, an operation for setting a region 65 corresponding to the cutting region 62 on the cross-sectional image displayed on the display 16 in step S103. In the present embodiment, as the operation for setting the region 65 corresponding to the cutting region 62, the control unit 41 receives an operation for setting two straight lines L1 and L2 extending from a single point M in the cross-sectional image.
 More specifically, the control unit 41 of the image processing device 11 receives, via the input unit 44, an operation in which the user specifies a base angle and an opening angle on the operation panel 81 as shown in FIGS. 2 to 4, 6 to 8, and 10 to 12 by using the keyboard 14, the mouse 15, or the touch screen provided integrally with the display 16. That is, as the operation for setting the two straight lines L1 and L2, the control unit 41 receives an operation that specifies the direction of one of the two straight lines, L1, and the angle formed by the two straight lines L1 and L2. Here, it is assumed that the check box 85 on the operation panel 81 is checked, that is, that use of the center of gravity has been selected.
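 To make the relationship between the base angle, the opening angle, and the two straight lines L1 and L2 concrete, the sketch below derives unit direction vectors for the two lines around the point M. The conventions that the base angle is measured for L1 from the x-axis of the cross-sectional image and that the opening angle is measured from L1 to L2 are assumptions for illustration only.

```python
import math

def lines_from_angles(base_angle_deg: float, opening_angle_deg: float):
    """Return unit direction vectors of the straight lines L1 and L2 in the image plane."""
    a1 = math.radians(base_angle_deg)                      # direction of L1
    a2 = math.radians(base_angle_deg + opening_angle_deg)  # L2 is rotated from L1 by the opening angle
    return (math.cos(a1), math.sin(a1)), (math.cos(a2), math.sin(a2))

# Example: a base angle of 30 degrees and an opening angle of 90 degrees.
d1, d2 = lines_from_angles(30.0, 90.0)
```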
 As a modification of the present embodiment, the control unit 41 of the image processing device 11 may receive, via the input unit 44, an operation in which the user draws the two straight lines L1 and L2 on the cross-sectional image displayed on the display 16 by using the keyboard 14, the mouse 15, or a touch screen provided integrally with the display 16. That is, as the operation for setting the two straight lines L1 and L2, the control unit 41 may receive an operation for drawing the two straight lines L1 and L2 on the cross-sectional image.
 In step S113, the control unit 41 of the image processing device 11 uses the latest three-dimensional data 52 stored in the storage unit 42 to calculate the centroid positions of a plurality of transverse cross sections of the lumen 63 of the biological tissue 60. The latest three-dimensional data 52 refers to the three-dimensional data 52 generated in step S102 if the process of step S108 has not been executed, or to the three-dimensional data 52 updated in step S108 if that process has been executed. Here, if already generated three-dimensional data 52 exists, it is preferable to update only the data of the portion to which the updated tomographic data 51 corresponds, rather than regenerating all of the three-dimensional data 52 from scratch. In that case, the amount of data processing required to generate the three-dimensional data 52 can be reduced, and the real-time performance of the three-dimensional image 53 in the subsequent step S117 can be improved. Specifically, the process of step S113 can be executed by a procedure similar to that disclosed in International Publication No. WO 2021/200294.
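 A minimal sketch of the centroid calculation, assuming that each transverse cross section of the lumen 63 is available as a binary mask (nonzero inside the lumen); this is only one possible realization and does not reproduce the procedure of International Publication No. WO 2021/200294.

```python
import numpy as np

def lumen_centroids(lumen_masks: np.ndarray) -> np.ndarray:
    """Compute the (x, y) centroid of the lumen 63 in every cross section.

    lumen_masks -- boolean array shaped (num_slices, height, width),
                   True where a pixel belongs to the lumen.
    Returns an array shaped (num_slices, 2) of centroid coordinates.
    """
    centroids = np.full((lumen_masks.shape[0], 2), np.nan)
    for i, mask in enumerate(lumen_masks):
        ys, xs = np.nonzero(mask)
        if xs.size:  # skip slices in which no lumen was segmented
            centroids[i] = (xs.mean(), ys.mean())
    return centroids
```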
 In step S114, the control unit 41 of the image processing device 11 performs smoothing on the centroid positions calculated in step S113. Specifically, the process of step S114 can be executed by a procedure similar to that disclosed in International Publication No. WO 2021/200294.
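 The smoothing could, for example, be a moving average of the centroid positions along the catheter axis, as sketched below; the window size and the choice of a moving average rather than, say, a spline fit are assumptions, not details of the cited procedure.

```python
import numpy as np

def smooth_centroids(centroids: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth a sequence of (x, y) centroid positions with a moving average (odd window)."""
    half = window // 2
    padded = np.pad(centroids, ((half, half), (0, 0)), mode="edge")  # repeat the end values
    kernel = np.ones(window) / window
    smoothed = np.empty_like(centroids, dtype=float)
    for axis in range(centroids.shape[1]):
        smoothed[:, axis] = np.convolve(padded[:, axis], kernel, mode="valid")
    return smoothed
```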
 In step S115, as shown in FIG. 14, the control unit 41 of the image processing device 11 sets, as cutting planes P1 and P2, two planes that intersect at a single line Lb passing through the centroid positions calculated in step S113. In the present embodiment, the control unit 41 sets the cutting planes P1 and P2 after performing smoothing on the calculated centroid positions in step S114, but the process of step S114 may be omitted.
 Specifically, the control unit 41 of the image processing device 11 sets, as the line Lb, the curve of centroid positions obtained as a result of the smoothing in step S114. The control unit 41 sets, as the cutting planes P1 and P2, two planes that intersect at the set line Lb and that respectively contain the two straight lines L1 and L2 set in step S112. In the latest three-dimensional data 52 stored in the storage unit 42, the control unit 41 identifies the three-dimensional coordinates at which the biological tissue 60 intersects the cutting planes P1 and P2 as the three-dimensional coordinates of the edge of the opening that exposes the lumen 63 of the biological tissue 60 in the three-dimensional image 53. The control unit 41 stores the identified three-dimensional coordinates in the storage unit 42.
 In step S116, the control unit 41 of the image processing device 11 forms, in the three-dimensional data 52, the cutting region 62 as the region that is sandwiched between the cutting planes P1 and P2 in the three-dimensional image 53 and that exposes the lumen 63 of the biological tissue 60.
 Specifically, in the latest three-dimensional data 52 stored in the storage unit 42, the control unit 41 of the image processing device 11 sets the portion identified by the three-dimensional coordinates stored in the storage unit 42 to be hidden or transparent when the three-dimensional image 53 is displayed on the display 16. That is, the control unit 41 forms the cutting region 62 in accordance with the region 65 set in step S112.
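 As one way to picture how the portion between the cutting planes can be hidden, the sketch below flags the voxels lying in the wedge bounded by two half-planes that share the line Lb. Representing each cutting plane by a point on Lb and a normal oriented toward the removed side is an assumption for illustration, not the exact procedure of the embodiment.

```python
import numpy as np

def cut_region_mask(points: np.ndarray, on_lb: np.ndarray,
                    normal_p1: np.ndarray, normal_p2: np.ndarray) -> np.ndarray:
    """Return True for voxel centres lying in the wedge between cutting planes P1 and P2.

    points    -- voxel centre coordinates shaped (n, 3)
    on_lb     -- any point on the line Lb shared by both planes
    normal_p1 -- normal of P1, oriented toward the region to be removed
    normal_p2 -- normal of P2, oriented toward the region to be removed
    """
    d1 = (points - on_lb) @ normal_p1
    d2 = (points - on_lb) @ normal_p2
    return (d1 >= 0) & (d2 >= 0)  # on the removed side of both planes

# Voxels for which the mask is True would be marked as hidden or transparent
# before the three-dimensional image 53 is rendered.
```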
 In step S117, the control unit 41 of the image processing device 11 causes the display 16 to display, as the three-dimensional image 53, the three-dimensional data 52 in which the cutting region 62 was formed in step S116. The control unit 41 also causes the display 16 to display, together with the three-dimensional image 53, a two-dimensional image 58 representing the cross section 64 indicated by the tomographic data 51 newly acquired by the sensor and represented by the cross-sectional image displayed on the display 16 in step S103, and the region 65 corresponding to the cutting region 62 in the cross section 64.
 Specifically, the control unit 41 of the image processing device 11 processes the latest cross-sectional image of the biological tissue 60 included in the tomographic data 51 stored in the storage unit 42 to generate a two-dimensional image 58 as shown in FIGS. 2 to 4, 6 to 8, and 10 to 12. The control unit 41 generates a three-dimensional image 53, as shown in FIGS. 2 to 4, 6 to 8, and 10 to 12, in which the portion identified by the three-dimensional coordinates stored in the storage unit 42 is hidden or transparent. The control unit 41 causes the display 16 to display the generated two-dimensional image 58 and three-dimensional image 53 via the output unit 45.
 In the present embodiment, as shown in FIGS. 2 to 4, 6 to 8, and 10 to 12, the control unit 41 of the image processing device 11 generates, as the two-dimensional image 58, an image in which the region 65 corresponding to the cutting region 62 is shown in a color different from that of the remaining regions. For example, the white portions of a typical IVUS image could be changed to red within the region 65.
 In step S118, if there is an operation for setting the cutting region 62 as a change operation by the user, the process of step S119 is executed. If there is no change operation by the user, the process of step S120 is executed.
 In step S119, as in the process of step S112, the control unit 41 of the image processing device 11 receives the operation for setting the cutting region 62 via the input unit 44. The processes from step S115 onward are then executed.
 In step S120, if the tomographic data 51 has been updated, the processes of steps S121 and S122 are executed. If the tomographic data 51 has not been updated, the presence or absence of a change operation by the user is checked again in step S118.
 In step S121, as in the process of step S101 or step S107, the control unit 41 of the image processing device 11 processes the signal input from the probe 20 to newly generate a cross-sectional image of the biological tissue 60, thereby acquiring tomographic data 51 including at least one new cross-sectional image.
 In step S122, the control unit 41 of the image processing device 11 updates the three-dimensional data 52 of the biological tissue 60 based on the tomographic data 51 acquired in step S121. The processes from step S113 onward are then executed. In step S122, it is preferable to update only the data of the portion to which the updated tomographic data 51 corresponds. In that case, the amount of data processing required to generate the three-dimensional data 52 can be reduced, and the real-time performance of the data processing from step S113 onward can be improved.
 The operation of the image processing system 10 according to the present embodiment will be further described with reference to FIG. 19.
 The flow of FIG. 19 is performed when the control unit 41 of the image processing device 11 has, in step S103 or step S117, rendered the object 54 based on the positional relationship between the viewpoint V0 set in the virtual three-dimensional space and the object 54 of the biological tissue 60 arranged in the three-dimensional space, and is already displaying the object 54 on the screen 80 as the three-dimensional image 53 of the biological tissue 60.
 In step S201, when a position specification operation is performed, the control unit 41 of the image processing device 11 receives the position specification operation via the input unit 44. The position specification operation is an operation that specifies two positions on the screen 80. Specifically, the position specification operation includes, as a first operation, an operation of pressing a button of the mouse 15, and includes, as a second operation, an operation of releasing the button of the mouse 15 that is performed following the first operation and a drag operation of moving the pointer 86 while keeping the button of the mouse 15 pressed. The first operation may be an operation of pressing the button of the mouse 15 while holding down a first key such as the Ctrl key or the Shift key of the keyboard 14. The second operation may be an operation of releasing the button of the mouse 15 while holding down a second key such as the Ctrl key or the Shift key of the keyboard 14.
 In step S202, in response to the position specification operation performed in step S201, the control unit 41 of the image processing device 11 identifies, on the plane 55 corresponding to the screen 80 in the three-dimensional space, a first corresponding point Q1 and a second corresponding point Q2. As shown in FIG. 5, the first corresponding point Q1 is the point on the plane 55 that corresponds to one of the two positions specified by the position specification operation. The second corresponding point Q2 is the point on the plane 55 that corresponds to the other of the two specified positions. The control unit 41 calculates the distance ||R2−R1|| between the first intersection point R1 and the second intersection point R2 in the three-dimensional space. As shown in FIG. 5, the first intersection point R1 is the intersection of the object 54 with the extension of the straight line connecting the viewpoint V0 and the first corresponding point Q1 in the three-dimensional space. The second intersection point R2 is the intersection of the object 54 with the extension of the straight line connecting the viewpoint V0 and the second corresponding point Q2 in the three-dimensional space. Specifically, the control unit 41 identifies the three-dimensional coordinates (xq1, yq1, dq) corresponding to the position specified by the first operation as the coordinates of the first corresponding point Q1, and identifies the three-dimensional coordinates (xq2, yq2, dq) corresponding to the position specified by the second operation as the coordinates of the second corresponding point Q2. The control unit 41 identifies, as the coordinates of the first intersection point R1, the three-dimensional coordinates (xr1, yr1, dr1) at which the straight line passing through the coordinates (xv, yv, dv) of the viewpoint V0 and the coordinates (xq1, yq1, dq) of the first corresponding point Q1 reaches the object 54, and identifies, as the coordinates of the second intersection point R2, the three-dimensional coordinates (xr2, yr2, dr2) at which the straight line passing through the coordinates (xv, yv, dv) of the viewpoint V0 and the coordinates (xq2, yq2, dq) of the second corresponding point Q2 reaches the object 54. The control unit 41 then calculates the Euclidean distance √((xr2−xr1)² + (yr2−yr1)² + (dr2−dr1)²) between the coordinates (xr1, yr1, dr1) of the first intersection point R1 and the coordinates (xr2, yr2, dr2) of the second intersection point R2.
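 As a concrete illustration of step S202, the following sketch casts a ray from the viewpoint V0 through each corresponding point, finds the nearest intersection with the tissue object, and computes the Euclidean distance between the two intersection points. It assumes the object 54 is available as a triangle mesh and uses the Möller–Trumbore ray–triangle test; the mesh, the coordinate values, and the function names are illustrative and are not taken from the embodiment.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, tri, eps=1e-9):
    """Möller–Trumbore test: distance t along the ray to the triangle, or None if no hit."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                     # ray parallel to the triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def first_hit(viewpoint, corresponding_point, triangles):
    """Cast a ray from the viewpoint V0 through a corresponding point (Q1 or Q2) on the
    plane 55 and return the nearest intersection with the tissue object (R1 or R2)."""
    direction = corresponding_point - viewpoint
    direction = direction / np.linalg.norm(direction)
    hits = [t for tri in triangles
            if (t := ray_triangle_intersect(viewpoint, direction, tri)) is not None]
    return viewpoint + min(hits) * direction if hits else None

# Illustrative values: a single large triangle stands in for the object 54.
v0_ = np.array([0.0, 0.0, -10.0])                 # viewpoint V0
q1 = np.array([-0.8, 0.3, -5.0])                  # first corresponding point Q1
q2 = np.array([0.9, 0.3, -5.0])                   # second corresponding point Q2
mesh = [(np.array([-5.0, -5.0, 0.0]),
         np.array([5.0, -5.0, 0.0]),
         np.array([0.0, 5.0, 0.0]))]
r1, r2 = first_hit(v0_, q1, mesh), first_hit(v0_, q2, mesh)
if r1 is not None and r2 is not None:
    print(float(np.linalg.norm(r2 - r1)))         # Euclidean distance ||R2 - R1||
```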
 In step S203, the control unit 41 of the image processing device 11 outputs the calculation result obtained in step S202. Specifically, as shown in FIG. 4, the control unit 41 outputs a numerical value representing the Euclidean distance calculated in step S202 on the screen 80.
 In step S204, the control unit 41 of the image processing device 11 displays marks 87 and 88 at a first corresponding position and a second corresponding position on the screen 80, respectively. The first corresponding position is the position on the screen 80 that corresponds to the intersection of the plane 55 with the straight line connecting the viewpoint V0 and the first intersection point R1 in the three-dimensional space. The second corresponding position is the position on the screen 80 that corresponds to the intersection of the plane 55 with the straight line connecting the viewpoint V0 and the second intersection point R2 in the three-dimensional space. Specifically, as shown in FIG. 4, the control unit 41 displays the marks 87 and 88 at the positions specified by the first operation and the second operation, respectively. If the position of the viewpoint V0 is subsequently changed, for example as a result of an operation received in step S105, step S112, or step S119, the control unit 41 also changes the positions of the marks 87 and 88.
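 One way to keep the marks 87 and 88 attached to the intersection points when the viewpoint moves is to re-project R1 and R2 onto the screen for every new camera pose, as sketched below; the pinhole projection model, the 4x4 view and projection matrices, and the parameter names are assumptions for illustration.

```python
import numpy as np

def project_to_screen(point: np.ndarray, view: np.ndarray, proj: np.ndarray,
                      width: int, height: int) -> tuple[float, float]:
    """Project a 3-D point such as R1 or R2 to pixel coordinates on the screen 80."""
    p = np.append(point, 1.0)              # homogeneous coordinates
    clip = proj @ (view @ p)               # world -> camera -> clip space
    ndc = clip[:3] / clip[3]               # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width       # normalized device coordinates -> pixels
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height
    return x, y

# Whenever the viewpoint V0 changes, re-evaluating project_to_screen(r1, ...) and
# project_to_screen(r2, ...) with the new view matrix gives the updated positions
# of the marks 87 and 88.
```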
 The flow of FIG. 19 may be repeated any number of times. For example, where N is an integer of 2 or more, 2N marks may be displayed on the screen 80 as a result of the position specification operation having been performed N or more times.
 The operation of the image processing system 10 according to the present embodiment will be further described with reference to FIG. 20.
 The flow of FIG. 20 is performed after the flow of FIG. 19 has been performed at least once.
 In step S211, when a range specification operation is performed, the control unit 41 of the image processing device 11 receives the range specification operation via the input unit 44. The range specification operation is an operation that specifies a range 89 on the screen 80. Specifically, the range specification operation includes, as a third operation, an operation of pressing a button of the mouse 15, and includes, as a fourth operation, an operation of releasing the button of the mouse 15 that is performed following the third operation and a drag operation of moving the pointer 86 while keeping the button of the mouse 15 pressed. The third operation may be an operation of pressing the button of the mouse 15 while holding down a third key such as the Ctrl key or the Shift key of the keyboard 14. The fourth operation may be an operation of releasing the button of the mouse 15 while holding down a fourth key such as the Ctrl key or the Shift key of the keyboard 14.
 In step S212, in response to the range specification operation performed in step S211, the control unit 41 of the image processing device 11 identifies a corresponding range 56 on the plane 55 corresponding to the screen 80 in the three-dimensional space. The corresponding range 56 is the range on the plane 55 that corresponds to the range 89 specified by the range specification operation. Among the marks displayed in step S204, the control unit 41 changes the appearance of any mark displayed on the screen 80 at a position corresponding to an intersection point that exists within a three-dimensional region 57 as shown in FIG. 9. The three-dimensional region 57 is the region that spreads conically from the viewpoint V0 through the outer edge of the corresponding range 56 in the three-dimensional space. Specifically, the control unit 41 identifies, as the corresponding range 56, the two-dimensional range on the plane 55 corresponding to a fixed-shape range, such as a rectangular or circular range, that extends from the position specified by the third operation, taken as a reference, to the position specified by the fourth operation. The control unit 41 identifies, as the three-dimensional region 57, the region that spreads conically from the coordinates (xv, yv, dv) of the viewpoint V0 through the outer edge of the corresponding range 56. Among the intersection points identified in step S202, the control unit 41 changes the color of any mark displayed on the screen 80 at a position corresponding to an intersection point that exists within the three-dimensional region 57. For example, if the first intersection point R1 and the second intersection point R2 exist within the three-dimensional region 57, the control unit 41 changes the colors of the marks 87 and 88 associated with the first intersection point R1 and the second intersection point R2, respectively, as shown in FIG. 8 or FIG. 12. Alternatively, if one or more of the 2N intersection points exist within the three-dimensional region 57, the control unit 41 may change the colors of the one or more marks associated with those one or more intersection points.
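 The test of whether an intersection point lies inside the conical three-dimensional region 57 can be reduced to intersecting the line from the viewpoint through the point with the plane 55 and checking whether the hit falls inside the corresponding range 56. The sketch below does this for a rectangular range and assumes that the plane 55 is parallel to the xy-plane so the in-range check can use the x and y coordinates directly; these simplifications and the parameter names are illustrative assumptions.

```python
import numpy as np

def in_selection_region(point: np.ndarray, viewpoint: np.ndarray,
                        plane_point: np.ndarray, plane_normal: np.ndarray,
                        rect_min: np.ndarray, rect_max: np.ndarray) -> bool:
    """True if `point` (e.g. R1 or R2) lies inside the region spreading from the
    viewpoint V0 through a rectangular corresponding range 56 on the plane 55."""
    direction = point - viewpoint
    denom = direction @ plane_normal
    if abs(denom) < 1e-12:
        return False                       # line of sight parallel to the plane
    t = ((plane_point - viewpoint) @ plane_normal) / denom
    if t <= 0:
        return False                       # plane 55 lies behind the viewpoint
    hit = viewpoint + t * direction        # where the line V0 -> point crosses plane 55
    return bool(np.all(hit[:2] >= rect_min) and np.all(hit[:2] <= rect_max))

# Marks whose intersection points satisfy this test would have their color changed.
```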
 The present disclosure is not limited to the embodiments described above. For example, two or more blocks described in the block diagram may be integrated, or one block may be divided. Instead of executing two or more steps described in the flowcharts in chronological order as written, they may be executed in parallel or in a different order depending on the processing capability of the device that executes each step, or as necessary. Other modifications are possible without departing from the spirit of the present disclosure.
 10 Image processing system
 11 Image processing device
 12 Cable
 13 Drive unit
 14 Keyboard
 15 Mouse
 16 Display
 17 Connection terminal
 18 Cart unit
 20 Probe
 21 Drive shaft
 22 Hub
 23 Sheath
 24 Outer tube
 25 Ultrasonic transducer
 26 Relay connector
 31 Scanner unit
 32 Slide unit
 33 Bottom cover
 34 Probe connection section
 35 Scanner motor
 36 Insertion port
 37 Probe clamp section
 38 Slide motor
 39 Switch group
 41 Control unit
 42 Storage unit
 43 Communication unit
 44 Input unit
 45 Output unit
 51 Tomographic data
 52 Three-dimensional data
 53 Three-dimensional image
 54 Object
 55 Plane
 56 Corresponding range
 57 Three-dimensional region
 58, 58a, 58b Two-dimensional image
 60 Biological tissue
 61 Inner surface
 62 Cutting region
 63 Lumen
 64 Cross section
 65, 65a, 65b Region
 66 Fossa ovalis
 71 Camera
 80 Screen
 81 Operation panel
 82 Check box
 83 Slider
 84 Slider
 85 Check box
 86 Pointer
 87, 88 Mark
 89 Range

Claims (12)

  1.  An image processing device that renders an object of biological tissue based on a positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space, and displays the object on a screen as a three-dimensional image of the biological tissue, the image processing device comprising:
     a control unit that, in response to a position specification operation specifying two positions on the screen, identifies, on a plane corresponding to the screen in the three-dimensional space, a first corresponding point corresponding to one of the two specified positions and a second corresponding point corresponding to the other, calculates a distance between a first intersection point, which is an intersection of the object with an extension of a straight line connecting the viewpoint and the first corresponding point in the three-dimensional space, and a second intersection point, which is an intersection of the object with an extension of a straight line connecting the viewpoint and the second corresponding point in the three-dimensional space, and outputs the obtained calculation result.
  2.  The image processing device according to claim 1, wherein the control unit outputs, as the calculation result, a numerical value representing the distance on the screen.
  3.  The image processing device according to claim 1 or claim 2, wherein the position specification operation includes, as a first operation, an operation of pressing a push button of an input device, and
     the control unit identifies, as the first corresponding point, a point on the plane corresponding to the position of a pointer on the screen when the first operation is performed.
  4.  The image processing device according to claim 3, wherein the first operation is an operation of pressing the push button while holding down a predetermined first key.
  5.  The image processing device according to claim 3 or claim 4, wherein the position specification operation includes, as a second operation, an operation of releasing the push button that is performed following the first operation and a drag operation of moving the pointer while keeping the push button pressed, and
     the control unit identifies, as the second corresponding point, a point on the plane corresponding to the position of the pointer when the second operation is performed.
  6.  The image processing device according to claim 5, wherein the second operation is an operation of releasing the push button while holding down a predetermined second key.
  7.  The image processing device according to any one of claims 1 to 6, wherein the control unit displays a mark at each of a first corresponding position on the screen corresponding to an intersection of the plane with a straight line connecting the viewpoint and the first intersection point in the three-dimensional space, and a second corresponding position on the screen corresponding to an intersection of the plane with a straight line connecting the viewpoint and the second intersection point in the three-dimensional space.
  8.  The image processing device according to claim 7, wherein the control unit, in response to a range specification operation specifying a range on the screen, identifies a corresponding range on the plane corresponding to the specified range, and changes the appearance of the mark displayed on the screen at a position corresponding to whichever of the first intersection point and the second intersection point exists within a three-dimensional region that spreads conically from the viewpoint through an outer edge of the corresponding range in the three-dimensional space.
  9.  The image processing device according to claim 8, wherein the control unit receives an operation for collectively deleting marks whose appearance has been changed.
  10.  An image processing system comprising:
     the image processing device according to any one of claims 1 to 9; and
     a display that displays the screen.
  11.  An image display method for rendering an object of biological tissue based on a positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space, and displaying the object on a screen as a three-dimensional image of the biological tissue, the image display method comprising:
     identifying, in response to a position specification operation specifying two positions on the screen, on a plane corresponding to the screen in the three-dimensional space, a first corresponding point corresponding to one of the two specified positions and a second corresponding point corresponding to the other;
     calculating a distance between a first intersection point, which is an intersection of the object with an extension of a straight line connecting the viewpoint and the first corresponding point in the three-dimensional space, and a second intersection point, which is an intersection of the object with an extension of a straight line connecting the viewpoint and the second corresponding point in the three-dimensional space; and
     outputting the obtained calculation result.
  12.  An image processing program that causes a computer, which renders an object of biological tissue based on a positional relationship between a viewpoint set in a virtual three-dimensional space and the object arranged in the three-dimensional space and displays the object on a screen as a three-dimensional image of the biological tissue, to execute:
     a process of identifying, in response to a position specification operation specifying two positions on the screen, on a plane corresponding to the screen in the three-dimensional space, a first corresponding point corresponding to one of the two specified positions and a second corresponding point corresponding to the other;
     a process of calculating a distance between a first intersection point, which is an intersection of the object with an extension of a straight line connecting the viewpoint and the first corresponding point in the three-dimensional space, and a second intersection point, which is an intersection of the object with an extension of a straight line connecting the viewpoint and the second corresponding point in the three-dimensional space; and
     a process of outputting the obtained calculation result.
PCT/JP2023/009449 2022-03-16 2023-03-10 Image processing device, image processing system, image display method, and image processing program WO2023176741A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022041892A JP2023136332A (en) 2022-03-16 2022-03-16 Image processing device, image processing system, image display method, and image processing program
JP2022-041892 2022-03-16

Publications (1)

Publication Number Publication Date
WO2023176741A1 true WO2023176741A1 (en) 2023-09-21

Family

ID=88023688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/009449 WO2023176741A1 (en) 2022-03-16 2023-03-10 Image processing device, image processing system, image display method, and image processing program

Country Status (2)

Country Link
JP (1) JP2023136332A (en)
WO (1) WO2023176741A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10201755A (en) * 1997-01-24 1998-08-04 Hitachi Medical Corp Method for measuring three-dimensional size in pseudo-three-dimensional image and its system
JP2000105838A (en) * 1998-09-29 2000-04-11 Toshiba Corp Image display method and image processor
JP2002063564A (en) * 2000-08-17 2002-02-28 Aloka Co Ltd Image processor and storage medium


Also Published As

Publication number Publication date
JP2023136332A (en) 2023-09-29

Similar Documents

Publication Publication Date Title
JP2010508047A (en) Apparatus and method for rendering for display forward-view image data
JP7300352B2 (en) Diagnosis support device, diagnosis support system, and diagnosis support method
US20220218309A1 (en) Diagnostic assistance device, diagnostic assistance system, and diagnostic assistance method
WO2023176741A1 (en) Image processing device, image processing system, image display method, and image processing program
JP5498090B2 (en) Image processing apparatus and ultrasonic diagnostic apparatus
WO2023054001A1 (en) Image processing device, image processing system, image display method, and image processing program
WO2023013601A1 (en) Image processing device, image processing system, image processing method, and image processing program
WO2022202200A1 (en) Image processing device, image processing system, image display method, and image processing program
CN114502079B (en) Diagnosis support device, diagnosis support system, and diagnosis support method
WO2022202203A1 (en) Image processing device, image processing system, image display method, and image processing program
WO2022202202A1 (en) Image processing device, image processing system, image display method, and image processing program
JP2020140716A (en) Map of body cavity
WO2021065746A1 (en) Diagnostic support device, diagnostic support system, and diagnostic support method
WO2022202201A1 (en) Image processing device, image processing system, image displaying method, and image processing program
WO2022071251A1 (en) Image processing device, image processing system, image display method, and image processing program
WO2022071250A1 (en) Image processing device, image processing system, image display method, and image processing program
WO2024071054A1 (en) Image processing device, image display system, image display method, and image processing program
JP2023024072A (en) Image processing device, image processing system, image display method, and image processing program
WO2021200294A1 (en) Image processing device, image processing system, image display method, and image processing program
WO2020217860A1 (en) Diagnostic assistance device and diagnostic assistance method
WO2021200296A1 (en) Image processing device, image processing system, image display method, and image processing program
WO2022085373A1 (en) Image processing device, image processing system, image displaying method, and image processing program
WO2021200295A1 (en) Image processing device, image processing system, image display method, and image processing program
JP7421548B2 (en) Diagnostic support device and diagnostic support system
WO2022202401A1 (en) Medical image processing device, endoscope system, medical image processing method, and medical image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23770696

Country of ref document: EP

Kind code of ref document: A1