WO2017034020A1 - Medical image processing device and medical image processing program - Google Patents

Medical image processing device and medical image processing program

Info

Publication number
WO2017034020A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
display
image processing
processing apparatus
operator
Prior art date
Application number
PCT/JP2016/074966
Other languages
French (fr)
Japanese (ja)
Inventor
Kingo Shichinohe (金吾 七戸)
Original Assignee
Nemoto Kyorindo Co., Ltd. (株式会社根本杏林堂)
Priority date
Filing date
Publication date
Application filed by Nemoto Kyorindo Co., Ltd. (株式会社根本杏林堂)
Priority to JP2017536488A priority Critical patent/JPWO2017034020A1/en
Publication of WO2017034020A1 publication Critical patent/WO2017034020A1/en

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Definitions

  • The present invention relates to a medical image processing apparatus and a medical image processing program.
  • The present invention particularly relates to a medical image processing apparatus and the like that can display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while maintaining the cleanliness of the operator.
  • Examples of medical diagnostic imaging devices include CT (Computed Tomography) devices, MRI (Magnetic Resonance Imaging) devices, PET (Positron Emission Tomography) devices, ultrasound diagnostic devices, angiographic imaging devices, and the like.
  • a simulation may be performed before performing an operation on an organ such as a liver in which blood vessels are intertwined in a complicated manner.
  • the simulation is performed, for example, by performing a contrast CT examination, preparing a fluoroscopic image of a site to be treated, and confirming it on a display.
  • Such simulations are useful for studying treatment plans.
  • Patent Document 1 discloses a technique for simulating a surgical operation by displaying an organ such as a liver on a tablet terminal.
  • Patent Document 1 uses a tablet terminal, so it is useful in that it can be carried anywhere and can perform preoperative simulation. In addition, since it can be brought into the operating room, it is also useful in that some confirmation can be performed during the operation.
  • However, since the operator must avoid touching the terminal during surgery in order to remain sterile, the functions provided may not be usable.
  • The present invention has been made paying attention to such problems. Its purpose is to provide a medical image processing apparatus and an image processing program that can display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while keeping the operator clean.
  • A medical image processing apparatus for solving the above-described problems comprises: a display; a control unit (processor) connected to the display; an audio input device; and a motion sensor. The control unit (processor) includes: (a) an image display unit for displaying a three-dimensional medical image on the display; (b) a mode selection unit for recognizing voice input using the voice recognition device and switching a mode related to display of the three-dimensional medical image in accordance with the voice; and (c) a display processing unit for recognizing an operator's motion input via the motion sensor and changing the display of the three-dimensional medical image accordingly.
  • In other words, the control unit is configured to: display a three-dimensional medical image on the display; recognize speech input using the speech recognition device and switch modes relating to the display of the three-dimensional medical image accordingly; and recognize an operator's motion input via the motion sensor and change the display of the three-dimensional medical image accordingly.
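As a rough illustration, this control flow (voice selects the mode, motion drives the display) can be sketched as follows; the class and the command-to-mode mapping are illustrative assumptions based on the modes described later ("zoom/pan", "rotation", "multi"), not the patent's implementation:

```python
class MedicalImageController:
    """Minimal sketch of the control unit: voice input switches the display
    mode; motion input then changes the display according to that mode."""

    # Assumed mapping from recognized words to display modes
    VOICE_COMMANDS = {
        "move": "zoom/pan",
        "rotation": "rotation",
        "multi": "multi",
        "stop": None,  # cancel the current mode
    }

    def __init__(self):
        self.mode = None   # current display mode
        self.log = []      # display operations performed (for illustration)

    def on_voice(self, word):
        """Mode selection unit: switch modes according to recognized speech."""
        if word in self.VOICE_COMMANDS:
            self.mode = self.VOICE_COMMANDS[word]

    def on_motion(self, dx, dy, dz):
        """Display processing unit: change the display according to the
        operator's hand motion and the current mode."""
        if self.mode == "zoom/pan":
            # vertical hand movement zooms, horizontal movement pans
            self.log.append(("zoom", dz) if dz else ("pan", dx, dy))
        elif self.mode == "rotation":
            self.log.append(("rotate", dx, dy))

ctrl = MedicalImageController()
ctrl.on_voice("move")
ctrl.on_motion(0, 0, -5)   # lower the hand -> zoom operation
ctrl.on_voice("stop")
ctrl.on_voice("rotation")
ctrl.on_motion(10, 0, 0)   # horizontal movement -> rotate operation
```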
  • Anatomical structure refers to an object (for example, an organ, bone, blood vessel, etc.) that can be recognized in a subject, and includes fat, lesions such as a tumor, and the like.
  • "Terminal" refers to an information processing apparatus that is connected to a network or used standalone to perform data processing. Arbitrary peripheral devices may be connected to it. Basically, the various functions are preferably provided in one device such as a tablet terminal or a laptop computer; however, part of the functions may also be distributed, functionally or physically, across units depending on, for example, the load.
  • "Connected" includes, in addition to the case where two elements are directly connected, the case where one element is indirectly connected to another via some intermediate element, without departing from the spirit of the present invention. It also includes both wired and wireless connections.
  • the component expressed as “function” + “part” corresponds to a functional block that performs a predetermined function.
  • a functional block does not necessarily indicate a division between hardware circuits.
  • One or more functional blocks can be implemented by a single piece of hardware, or by multiple pieces of hardware.
  • According to the present invention, a medical image processing apparatus and the like can be provided that display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while maintaining the cleanliness of the operator.
  • FIG. 1 is a diagram showing the medical image processing apparatus of one embodiment. FIG. 2 is a diagram showing an example of a block diagram of the image processing apparatus of FIG. 1. FIG. 3 is a diagram showing an example of the components of a hospital system.
  • FIG. 4 is a flowchart of an operation example of the image processing apparatus in FIG. 1. FIG. 5 is a diagram showing an example of a three-dimensional medical image displaying the liver and the surrounding blood vessels on a display.
  • the medical image processing apparatus 301 of the present embodiment is a portable computer device such as a tablet terminal as an example. Alternatively, it may be a laptop PC (notebook PC) having a touch panel display.
  • FIG. 1 shows an example of a tablet terminal, which may be configured by installing an image processing program according to one embodiment of the present invention in a commercially available tablet terminal.
  • the medical image processing apparatus will be described simply as an image processing apparatus.
  • In one embodiment, the screen size is preferably 9 inches or more, or 10 inches or more.
  • the thickness is preferably 20 mm or less or 15 mm or less.
  • the mass is preferably within 2 kg or within 1.5 kg.
  • the image processing apparatus 301 has a thin casing 301a, and a touch panel display 360 is provided on one surface thereof.
  • the touch panel display 360 includes a display 361 (see FIG. 2) and a touch panel 363 (see FIG. 2).
  • Such an image processing apparatus 301 can be connected to a network of a hospital system, for example, as shown in FIG.
  • The hospital system of this example includes the following devices connected to the network 30: an imaging device 1, a chemical injection device 10, a hospital information system HIS (Hospital Information System) 21, and a radiology information system RIS (Radiology Information System).
  • Each of the above elements may be only one or plural.
  • the connection to the network may be a wired connection or a wireless connection.
  • Examples of the imaging device 1 include imaging devices such as a CT device, an MR device, and an angiography device. Other types of imaging devices may be used, or a plurality of imaging devices of the same type or different types may be used.
  • a three-dimensional medical image which will be described later, may be created using a plurality of modality images, such as combining an image captured by a CT apparatus and an image captured by an MR apparatus.
  • the drug solution injection device 10 may be a contrast agent injection device that injects at least a contrast agent.
  • the drug solution injection device 10 includes a drive mechanism that pushes a drug solution from a container (for example, a syringe) filled with the drug solution, and an operation thereof. It may include a control circuit for controlling.
  • a contrast medium injection device including an injection head and a console can be used.
  • the drive mechanism may be a piston drive mechanism, a roller pump, or the like.
  • The image processing apparatus 301 includes a display 361, a touch panel 363, an input device 365, a communication unit 367, an interface 368, a slot 369, a control unit 350, a storage unit 359, and the like. Note that not all of these are essential, and some of them may be omitted.
  • Examples of the display 361 include devices such as liquid crystal panels and organic EL panels.
  • a touch panel display in which the touch panel 363 is integrally provided can also be used.
  • As the touch panel, a system such as resistive film, capacitive, electromagnetic induction, surface acoustic wave, or infrared can be used.
  • Multi-touch, that is, touches at a plurality of positions, may also be detected, for example with a capacitive system.
  • the touch operation can be performed using a user's finger or a touch pen.
  • the touch panel may detect the start of a touch operation on the touch panel, the movement of the touch position, the end of the touch operation, and the like, and output the detected touch type and coordinate information.
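The event detection described above (start of a touch, movement of the touch position, end of the touch, with coordinate output) can be sketched as follows; the event names and the frame-comparison approach are assumptions for illustration only:

```python
def classify_touch(prev_points, cur_points):
    """Sketch of touch-panel event detection.

    ``prev_points`` and ``cur_points`` map a touch-point id to its (x, y)
    coordinates in the previous and current sampling frame.  The function
    reports the detected touch type together with coordinate information
    (hypothetical event names: touch_start / touch_move / touch_end).
    """
    events = []
    for pid, pos in cur_points.items():
        if pid not in prev_points:
            events.append(("touch_start", pid, pos))
        elif prev_points[pid] != pos:
            events.append(("touch_move", pid, pos))
    for pid, pos in prev_points.items():
        if pid not in cur_points:
            events.append(("touch_end", pid, pos))
    return events
```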
  • the image processing apparatus 301 of the present embodiment can perform operations related to image display using voice input or motion input, as will be described later. Therefore, the touch panel 363 may be omitted depending on circumstances.
  • As the input device 365, a general device such as a keyboard or a mouse can be used.
  • The storage unit 359 includes a hard disk drive (HDD), a solid state drive (SSD), and/or memory, and may store an OS (Operating System) program and a medical image processing program according to an embodiment of the present invention (including algorithm data and graphical user interface data).
  • the computer program may be downloaded in whole or in part from an external device when necessary via an arbitrary network.
  • the computer program may be stored in a computer-readable recording medium.
  • The "recording medium" includes any "portable physical media" such as a memory card, USB memory, SD card (registered trademark), flexible disk, magneto-optical disk, ROM, EPROM, EEPROM, CD-ROM, MO, DVD, and Blu-ray (registered trademark) Disc.
  • the medical image processing apparatus according to the present embodiment may be provided with a slot 369 for reading the storage medium as described above.
  • the communication unit 367 is a unit for enabling communication with an external network or device in a wired or wireless manner.
  • the communication unit 367 may include a transmitter for sending data to the outside, a receiver for receiving data from the outside, and the like.
  • the interface 368 is for connecting various external devices and the like, and only one interface is shown in the drawing, but a plurality of interfaces may naturally be provided.
  • the slot 369 is a part for reading data from a computer readable medium.
  • the image processing apparatus 301 of this embodiment includes a microphone 370 as an audio input device.
  • As the microphone, one built into the housing may be used, or an external microphone separate from the housing and connected to the terminal by wire or wirelessly may be used. It is also possible to use a device in which the motion sensor 380 described below and a microphone are integrated.
  • voice recognition software is installed in the image processing apparatus 301, and thereby a voice recognition unit 351 is configured.
  • the motion sensor 380 is a sensor that three-dimensionally detects the movement of at least a part of the operator's body in a non-contact manner.
  • Motion recognition software is installed in the image processing apparatus 301, and thereby a motion recognition unit 353 is configured.
  • As the motion sensor 380, for example, a Leap Motion controller (manufactured by Leap Motion; "Leap Motion" is a registered trademark) can be used.
  • The Leap Motion controller is an input device that can recognize the position, shape, and movement of the operator's fingers and/or the position and movement of the palm in real time without contact.
  • The Leap Motion controller is configured as a sensor unit including an infrared irradiation unit and a CCD camera.
  • the upper area of the sensor unit is a recognition area. This sensor unit is used by connecting to a tablet terminal or a laptop PC by wire or wireless.
  • the motion sensor 380 for example, Kinect (manufactured by Microsoft Corporation, registered trademark) can be used.
  • the motion sensor 380 may include one or more cameras and one or more distance sensors. Or you may provide only one of them.
  • the unit of the motion sensor 380 may have a built-in microphone.
  • the accuracy of the detection target (for example, a hand) of the motion sensor 380 is preferably one that can be detected with an accuracy of 5 mm or less, and more preferably one that can be detected with an accuracy of 1 mm or less.
  • the detection principle of the motion sensor 380 is not limited to a specific one.
  • For example, a system called Light Coding can be used. In this method, a large number of dot patterns are projected from an infrared emitting unit, and the amount of change (distortion) when the dot pattern hits the detection target (a person) is read by a camera.
  • A system called TOF (Time of Flight) can also be used; its recognition accuracy is higher than that of the Light Coding method, and the deterioration of accuracy with distance is small.
  • a system in which reflection of light irradiated on an object by an infrared LED is photographed by two cameras and movement is recognized may be used.
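As a minimal illustration of the TOF principle mentioned above: the emitted light travels to the target and back, so the measured distance is half the round-trip path. This sketch is only the textbook relation, not part of the patent:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Time-of-Flight distance: the light's round trip covers twice the
    sensor-to-target distance, so divide the path length by two."""
    return C * round_trip_seconds / 2.0

# A round-trip time of 2/C seconds corresponds to a target 1 m away.
```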
  • the control unit 350 includes hardware such as a central processing unit (CPU) and a memory, and a computer program is installed to perform various arithmetic processes.
  • the control unit 350 includes an image display unit 355a, an operation determination unit 355b, a display processing unit 355c, and a mode selection unit 355d. Further, as described above, the voice recognition unit 351 and the motion recognition unit 353 are included.
  • The image display unit 355a displays a three-dimensional medical image on the display 361.
  • the image display unit 355a displays each of anatomical structures such as the liver and blood vessels as independent objects.
  • each anatomical structure may be displayed in a different color.
  • the color in which each anatomical structure is displayed may be manually input and set by the operator, but is not necessarily limited thereto. As will be described later, in the case where color assignment or the like is performed in advance on a predetermined data server side (by a table or the like), the display may be performed in accordance therewith.
  • the operation determination unit 355b receives an input operation on the input device 365 and the touch panel 363.
  • The display processing unit 355c performs various image processing, for example:
    - rotation of the 3D image,
    - translation of the 3D image,
    - enlargement/reduction of the 3D image,
    - changing the transparency of images displayed in 3D,
    - switching the display/non-display of a given object,
    - a cut (split) function for a given object,
    - a function for specifying a region of a given object, etc.
    The specific contents of these functions will be described in detail in the series of operations described later.
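The geometric operations in this list (rotation, translation, enlargement/reduction) are commonly expressed as 4×4 homogeneous transforms applied to the image's vertex data; a minimal sketch of that standard technique, not the patent's implementation:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(s):
    """Uniform enlargement/reduction by factor s."""
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

def rotation_z(theta):
    """Rotation by theta radians about the Z axis."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

# Apply scale-then-translate to a homogeneous point:
# (1,0,0) scaled by 2 -> (2,0,0), then translated by (1,1,0) -> (3,1,0)
p = np.array([1.0, 0.0, 0.0, 1.0])
q = translation(1, 1, 0) @ scaling(2) @ p
```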
  • the voice recognition unit 351 performs various kinds of voice recognition. For example, the following words are recognized.
  • For example, the name of an anatomical structure is recognized (for example, an organ name such as "liver", or a blood vessel name such as "portal vein" or "hepatic artery").
  • the motion recognition unit 353 performs various motion recognition processes. For example, assuming that the motion sensor detects a hand, the position or movement of the hand (finger) in the detection space is detected.
  • The processing may also be performed by other computer means, not limited to a tablet terminal or a notebook PC, and such means can also be an object of one embodiment of the present invention.
  • The invention disclosed mainly as a description of "operation" can be understood by those skilled in the art as an invention of a product or of a computer program expressed in a different category. Accordingly, the present specification also discloses such inventions.
  • This three-dimensional medical image includes a liver 371 and a blood vessel 375.
  • three-dimensional medical image data is acquired in step S11.
  • the “three-dimensional medical image data” may be created based on data obtained by tomographic imaging of a patient with an imaging device.
  • volume data by volume rendering may be used.
  • the data format of the three-dimensional image is not particularly limited, and various types can be used.
  • an STL (Standard Triangulated Language) file format can be used.
  • the image data may be stored in a predetermined data storage area such as a predetermined database server, PACS, DICOM server, or workstation.
  • the image processing apparatus 301 reads the data from a predetermined data storage area on the network and stores it in the storage unit 359 in the apparatus.
  • the image processing apparatus 301 displays a three-dimensional medical image on the display 361 (step S12).
  • the creation of a three-dimensional medical image can basically be performed using a known method. An image creation flow according to an embodiment of the present invention will be described later with reference to the drawings.
  • the generated three-dimensional medical image data may be stored in the image processing apparatus 301 and / or stored in an external server (for example, a server on the cloud).
  • Various display modes are prepared for the display of medical images, including at least one of the following:
    - displaying a given anatomical structure in a translucent state,
    - displaying a given anatomical structure in an opaque state,
    - displaying a given anatomical structure in color,
    - displaying a given anatomical structure with a shadow,
    - displaying an image of three-dimensional coordinate axes (or an equivalent, for example a cube) on the screen.
  • the liver is displayed in a translucent state and the blood vessel is displayed in an opaque state.
  • the tumor may be displayed in an opaque state.
  • the ability to perform such confirmation is very useful, for example, in that the positional relationship between the liver, blood vessels, tumors, and the like can be well confirmed, particularly in surgery where a portion of the liver is removed by laparoscopic surgery.
  • the image processing apparatus 301 according to the present embodiment is portable, and therefore it is possible to check a three-dimensional medical image by operating the apparatus in the operating room.
  • the liver and blood vessels may be displayed in different colors. If there is a tumor, the tumor may be displayed in a different color. More specifically, regarding the blood vessels, the liver, portal vein, and hepatic artery may be displayed in different colors. When blood vessels are grouped, they may be displayed together in the same color.
  • step S13 the image processing device 301 detects the position of the hand so that the distance between the motion sensor 380 and the operator's hand is appropriate. Specifically, the operator places the hand above the motion sensor 380.
  • An appropriate distance (height from the sensor to the hand) between the sensor and the operator's hand is set in advance, for example in a range of h1 (mm) to h2 (mm). The appropriate range is set in advance because the movement of the hand may not be recognized well when the operator's hand is too close to or too far from the sensor.
  • a reference circle (first circle) 391 having a predetermined size is displayed in the approximate center of the screen.
  • the first circle 391 is always displayed in a fixed size regardless of the position of the operator's hand.
  • a second circle 393 is also displayed on the screen.
  • The center of the second circle 393 corresponds to the position of the operator's hand. That is, when the operator's hand is directly above the motion sensor 380 (as an example), the center of the second circle 393 coincides with the center of the first circle 391; the two circles 391 and 393 are then displayed as concentric circles.
  • When the operator's hand moves horizontally (to the right, as an example), the second circle 393 is also displaced in the same direction and displayed in real time.
  • Thus, the operator can confirm whether his/her hand is in an appropriate (horizontal) position with respect to the motion sensor 380 while viewing the positional relationship between the two circles 391 and 393 on the screen.
  • the diameter of the second circle 393 corresponds to the height of the operator's hand.
  • When the hand is at an appropriate height, the second circle 393 is displayed so that its diameter is the same as that of the first circle 391.
  • As the hand position rises, the size of the second circle 393 decreases accordingly; conversely, as the hand position lowers, the size of the second circle 393 increases accordingly.
  • the operator can check whether the height of his / her hand is appropriate while viewing the magnitude relationship between the two circles 391 and 393 on the screen.
  • the second circle 393 may be displayed in a special display when the horizontal position, the height position of the hand, or a combination thereof enters a predetermined appropriate position.
  • For example, the second circle 393 may be displayed in different colors depending on whether it is outside the appropriate range as shown in FIG. 6(a) or within the appropriate range as shown in FIG. 6(b). In one embodiment, it is preferable to switch between a blinking display and a steadily lit display.
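The two-circle guidance described above can be sketched as follows; the reference diameter, the concrete height range, and the inverse relation between hand height and circle diameter are illustrative assumptions consistent with the description (smaller when the hand rises, larger when it lowers, equal diameters at the appropriate height):

```python
def feedback_circle(hand_x, hand_y, hand_h, ref_d=100.0, h_ok=(150.0, 250.0)):
    """Sketch of the two-circle guidance display.

    The first (reference) circle is fixed at the screen centre with
    diameter ``ref_d``.  The second circle follows the hand: its centre
    tracks the horizontal hand position, and its diameter shrinks as the
    hand rises and grows as it lowers.  ``h_ok`` is an assumed appropriate
    height range (h1, h2) in mm; the exact values are not given in the text.
    """
    h1, h2 = h_ok
    h_mid = (h1 + h2) / 2.0
    # The diameter equals the reference diameter at the middle of the range.
    diameter = ref_d * h_mid / hand_h
    in_range = h1 <= hand_h <= h2
    return {"center": (hand_x, hand_y), "diameter": diameter,
            "in_range": in_range}

c = feedback_circle(0.0, 0.0, 200.0)   # hand centred, at the ideal height
```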
  • This completes step S13, the step for making the distance between the motion sensor and the operator's hand appropriate.
  • step S14 a voice input for selecting a display mode is received.
  • As the speech input, for example, the following words may be recognized: "move", "stop", "rotation", "multi".
  • The voice recognition function may be turned ON, triggered when the position of the operator's hand enters the predetermined appropriate range in step S13.
  • In other words, the voice recognition function is normally OFF and is ON only while the hand is within the appropriate range.
  • In this case, it is preferable that an indication such as "voice recognition in progress" is displayed on the screen.
  • the step of detecting the hand position in S13 may be omitted.
  • the operator utters “move” with the voice recognition function ON.
  • the image processing apparatus 301 analyzes the voice input from the microphone 370 by the voice recognition unit 351 and recognizes the word “move”. In response to this, a transition is made to the “zoom / pan” mode (step S15).
  • the image processing apparatus 301 then waits for input of motion by the operator's hand.
  • the image processing apparatus 301 uses the motion sensor 380 and the motion recognition unit 353 to recognize the position and movement of the operator's hand in real time.
  • When the operator moves his/her hand upward, the image is gradually reduced in accordance with the movement.
  • Conversely, when the operator moves his/her hand downward (for example, from the initial height h0 to a lower height hL), the medical image is gradually enlarged in accordance with the movement.
  • When the operator moves his/her hand horizontally, the three-dimensional medical image is panned (translated) in accordance with the moving direction and moving amount.
  • the image is reduced or enlarged by moving the hand up and down, and the image is moved in parallel by moving the hand horizontally.
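A minimal sketch of this zoom/pan mapping; the gain values are illustrative assumptions, since the actual sensitivities are not specified in the text:

```python
def zoom_pan_update(scale, offset, hand_delta, zoom_gain=0.005, pan_gain=1.0):
    """'Zoom/pan' mode sketch: raising the hand (positive dz) gradually
    reduces the image, lowering it (negative dz) gradually enlarges it;
    horizontal movement translates the image by the moving amount."""
    dx, dy, dz = hand_delta
    new_scale = scale * (1.0 - zoom_gain * dz)      # hand up -> reduce
    new_offset = (offset[0] + pan_gain * dx, offset[1] + pan_gain * dy)
    return new_scale, new_offset

# Lowering the hand by 20 mm while moving 10 mm to the right:
s, o = zoom_pan_update(1.0, (0.0, 0.0), (10.0, 0.0, -20.0))
```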
  • In this way, zoom/pan (and, in other modes, rotation, etc.) of an image is performed using motion input, which is capable of intuitive analog input. It is therefore intuitive for the operator, excellent in operability, and contributes to practical use.
  • the motion sensor 380 can input without touching the device, the input can be performed while keeping the operator's hand clean.
  • Such a configuration is very advantageous in that, for example, an intraoperative doctor can check an image using an image processing apparatus in an operating room.
  • a plurality of blood vessels are present in the liver in a branch shape. Therefore, when part of the liver parenchyma is excised so as not to damage the blood vessel more than necessary, it is necessary to sufficiently confirm the positional relationship between the blood vessel and the tumor.
  • the positional relationship of blood vessels and the like can be confirmed while viewing a three-dimensional medical image during the operation.
  • a blood vessel may exist on the front side and a tumor may be hidden behind the blood vessel (the tumor is not shown in the figure, but refer to FIG. 5 for reference).
  • the image processing apparatus of the present embodiment can check the tumor by rotating the image in the “rotation” mode.
  • In this case, rotation is not performed in fixed angular steps; the image can be rotated freely (steplessly) by an arbitrary angle through motion input, which makes good observation possible.
  • the “zoom / pan” mode is configured not to perform “rotation” (details below) of the three-dimensional medical image.
  • Other than rotation, it is often the case that only enlargement/reduction or parallel movement is desired. Therefore, with separate modes it is easier for the operator to perform enlargement, reduction, and parallel movement while the image maintains the desired posture.
  • ⁇ Rotation> If you want to rotate the displayed 3D medical image, do the following: First, the operator utters “stop” to cancel the “zoom / pan” mode. The image processing apparatus 301 accepts this by the voice recognition function, cancels the “zoom / pan” mode, and transitions to a state of accepting another mode.
  • the operator says “Rotate”.
  • the image processing apparatus 301 accepts this through the voice recognition function and transitions to the “rotation” mode. Next, the image processing apparatus 301 waits for input of motion by the operator's hand.
  • the image processing apparatus 301 recognizes the position and movement of the operator's hand in real time. Then, the image processing apparatus 301 rotates the three-dimensional medical image around a predetermined rotation axis (X axis, Y axis, Z axis) in accordance with the movement of the operator's hand. Specifically, the movement of the operator's hand in the horizontal direction or the movement of moving the hand along the surface of the virtual sphere is recognized. Then, the three-dimensional medical image is rotated by a predetermined angle corresponding to the moving direction, moving speed, and moving amount.
  • rotation mode it is preferable in one embodiment that only rotation is allowed and panning (parallel movement) and zooming (enlargement / reduction) are prohibited. Accordingly, for example, it is possible to rotate the image in a desired direction while maintaining a predetermined image size, and perform subsequent predetermined image processing and observation.
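The stepless rotation in "rotation" mode can be sketched as an angle proportional to the hand's horizontal displacement; the gain value and the choice of the vertical (Y) axis here are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def rotate_about_y(points, hand_dx, gain=0.01):
    """'Rotation' mode sketch: horizontal hand movement rotates the model
    steplessly about the vertical (Y) axis by an angle proportional to
    the movement.  ``gain`` (radians per mm of hand travel) is an
    illustrative value."""
    theta = gain * hand_dx
    c, s = np.cos(theta), np.sin(theta)
    r = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return points @ r.T

pts = np.array([[1.0, 0.0, 0.0]])
rotated = rotate_about_y(pts, hand_dx=np.pi / 2 / 0.01)  # 90-degree turn
```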
  • When the operator utters "multi", the image processing apparatus 301 recognizes this and shifts to the "multi" mode.
  • the three-dimensional medical image is rotated, moved, enlarged / reduced according to the input of the motion of the operator's hand.
  • a function capable of rotating a three-dimensional medical image only by voice input instead of motion input may be implemented.
  • For example, when the operator utters "rotation", "left", and "15°", the image processing apparatus 301 recognizes this and rotates the image by 15° around a predetermined rotation axis (for example, the Z axis extending in the vertical direction of the screen). If it is desired to rotate the image 15° upward around an axis extending in the horizontal direction, "rotation", "up", and "15°" may be input by voice.
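Such three-word voice commands could be parsed as follows; the word-to-axis mapping and the sign convention are assumptions based only on the examples given above:

```python
import re

def parse_rotation_command(words):
    """Sketch of voice-only rotation: commands like
    ("rotation", "left", "15°") are mapped to a rotation axis and a
    signed angle in degrees.  The direction-to-axis table below is a
    hypothetical interpretation of the examples in the text."""
    if not words or words[0] != "rotation":
        return None
    direction, angle_word = words[1], words[2]
    angle = float(re.sub(r"[^\d.]", "", angle_word))   # strip the degree sign
    axis = {"left": "z", "right": "z", "up": "x", "down": "x"}[direction]
    sign = -1.0 if direction in ("left", "down") else 1.0
    return axis, sign * angle

cmd = parse_rotation_command(("rotation", "left", "15°"))
```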
  • the image processing apparatus 301 can select an anatomical structure portion or change the transmittance of the selected one by voice input. This will be described again after explaining the operation through the touch panel.
  • the image processing apparatus 301 displays anatomical structures in the three-dimensional medical image as independent objects. Thereby, each can be selected individually or display can be switched on and off. For example, hepatic arteries, portal veins, and hepatic veins may be grouped and selected at once, or may be selected individually.
  • the rotation of the three-dimensional medical image can also be performed by an operation on the touch panel.
  • the image processing apparatus 301 rotates the three-dimensional medical image accordingly.
  • the image processing apparatus 301 also responds when the operator touches two points on the screen and performs an operation (a pinch-out operation or a pinch-in operation) to increase or decrease the distance between the two points.
  • In response, the image may be enlarged or reduced.
  • The image processing apparatus 301 may change the display density when the operator touches any anatomical structure (for example, the liver). Specifically, it may switch between two states: a normal opaque display state and a semi-transparent state. That is, as an example, the structure may become semi-transparent when touched once and return to the normal display state when touched again.
  • Alternatively, the transparency may be set in a plurality of stages such as 0%, 30%, 70%, and 100% (non-display), and the display density may be switched cyclically to the next stage each time the structure is touched. In this case, the 100% transparency (that is, the non-display state) may be excluded from the cycle. Naturally, the specific transparency values can be changed as appropriate; in short, it is only necessary that the transparency has at least a plurality of stages that are switched between.
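The cyclic transparency switching can be sketched as follows; the step values come from the example above, while the function and parameter names are hypothetical:

```python
TRANSPARENCY_STEPS = [0, 30, 70, 100]   # 100% = non-display (hidden)

def next_transparency(current, include_hidden=True):
    """Advance the transparency to the next step on each touch; when
    ``include_hidden`` is False, the fully transparent (non-display)
    state is excluded from the cycle, as described above."""
    steps = TRANSPARENCY_STEPS if include_hidden else TRANSPARENCY_STEPS[:-1]
    i = steps.index(current) if current in steps else -1
    return steps[(i + 1) % len(steps)]

# Touching repeatedly cycles the display: 0 -> 30 -> 70 -> 100 -> 0 -> ...
```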
  • In this way, the display transparency can be switched simply by touching an arbitrary anatomical structure. The operation is therefore simpler and more intuitive than methods that require separately selecting some icon or command in order to switch the transparency.
  • Gestures for changing the transparency are not limited to those described above.
• the image processing apparatus accepts the input, and may be configured so that the transparency is changed accordingly.
• the transparency may be set in several steps, for example, 0%, 30%, 70%, and 100% (non-display), or instead, the apparatus may be configured so that the transparency changes steplessly (continuously).
  • the image processing apparatus 301 sets the anatomical structure as “selected”.
  • the anatomical structure may be displayed in a color different from the initial state or may be blinked.
  • FIG. 8 is a flowchart of a series of operations.
• the image processing apparatus 301 first displays a three-dimensional medical image as shown in FIG. 7A as step S1. Then, when the operator touches two points on an arbitrary anatomical structure (here, the liver 71) as shown in FIG. 7B, the state in which the two points are touched is determined (step S2). As for timing, the two points may be touched simultaneously or substantially simultaneously.
  • the image processing apparatus 301 determines whether or not the state where the two points are touched has continued for a certain period of time (step S3).
• when it is determined in step S3 that the state has continued for longer than the certain time, the image processing apparatus 301 displays on the screen, in a predetermined display mode, that the two touched points P1 and P2 are designated.
• Any "predetermined display mode" may be used. For example, (i) both the points P1 and P2 and the line L1 connecting them are displayed, (ii) only the points P1 and P2 are displayed, or (iii) only the line L1 is displayed.
• the points P1 and P2 may be displayed not merely as small dots but as slightly larger graphical images as shown in FIG. 7B (for example, any shape such as a circle, rectangle, polygon, or star may be used) so that the designated positions can be clearly understood.
• the image processing apparatus 301 may keep displaying the designated points P1 and P2 as they are, as shown in FIG. 7C, even after the operator releases the hand from the screen. Moreover, it may be configured to accept fine adjustment of the positions of the points P1 and P2. In this case, for example, the circular graphical images of P1 and P2 and/or the line L1 may be displayed blinking so that it can be understood that the mode is one for accepting fine adjustment.
  • FIG. 7C illustrates a state in which the point P2 is slightly moved and finely adjusted to the point P2 ′ as an example.
  • This fine adjustment may be performed by the operator moving the graphical images of the points P1 and P2 with a finger, for example (operation on the touch panel).
  • motion input may be used to finely adjust the positions of the points P1 and P2 without contacting the device. Note that displaying the cutting reference line L1 by voice input will be described later again.
  • the cut function and the like of the present embodiment will be described first on the premise of touch panel operation.
• in step S4, the operator touches a predetermined icon (for example, an "OK" input icon) on the screen. Then, the liver is cut along the line L1 connecting the points P1 and P2, as shown in FIG. 7D, by the cutting function (step S5).
  • the first part 71-1 and the second part 71-2 divided into two with the line L1 in between can be operated as independent anatomical structures.
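The cut along line L1 can be sketched as a partition of the structure's projected points by which side of the line they fall on. The use of 2D screen coordinates and the function names are illustrative assumptions; the actual apparatus presumably operates on volume or surface data.

```python
# Sketch of the line cut: points of the selected structure are divided
# into two independent parts by the line L1 through touch points P1, P2.
def side_of_line(p1, p2, q):
    """Signed area test: > 0 left of P1->P2, < 0 right, 0 on the line."""
    return (p2[0] - p1[0]) * (q[1] - p1[1]) - (p2[1] - p1[1]) * (q[0] - p1[0])

def line_cut(points, p1, p2):
    """Split projected points into two parts along the line P1-P2."""
    part1 = [q for q in points if side_of_line(p1, p2, q) >= 0]
    part2 = [q for q in points if side_of_line(p1, p2, q) < 0]
    return part1, part2
```

The two returned groups correspond to the first part 71-1 and the second part 71-2, which can then be operated on as independent objects.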
• the above functions may also be executed by a method other than the above operation, for example, (i) touching a predetermined area on the screen instead of an icon, (ii) touching multiple times (a double tap in one example), or by voice input.
• when the first part 71-1 is touched (step S6), only that part is selected, as shown in the figure. Then, the display density is switched; specifically, only the first part 71-1 is displayed semi-transparently. Touching it again returns it to the original display.
• when the first part 71-1 is long-pressed and swiped or dragged to the periphery of the screen, that part is hidden, and only the second part 71-2 and the blood vessels 73 and 75 remain.
  • the non-displayed image may be displayed as a thumbnail image 66 as illustrated in FIG. 7F.
• FIG. 9A shows a state in which two points P1 and P2 are touched, as in the operation described with reference to FIG. 7B (note that the operator's fingers remain touching the two points on the screen; their illustration is omitted).
• when the operator moves the two touching fingers, the image processing apparatus 301 follows the movement. The positions of the two points after the movement, P1′ and P2′, are specified, and a substantially rectangular area is designated based on them. Specifically, the quadrangle surrounded by four points (the two points P1 and P2 before movement and the two points P1′ and P2′ after movement) is designated as the area.
  • the image processing apparatus 301 is configured so that fine adjustment can be performed.
  • the operator may touch a predetermined icon (for example, an “OK” input icon) on the screen.
• in the mode in which the positions of the points P1, P2, P1′, and P2′ can be finely adjusted, the circular graphical images (an example) of the points and the lines connecting them may be displayed blinking.
  • the designated area Sa1 (see FIG. 9B) is divided from other parts and can be operated as an independent object. Therefore, it is possible to change the display density of only the region or switch the display on and off. According to such a function, for example, by not displaying only the region Sa1, it is possible to observe the inner blood vessels 73 and 75 and to confirm the relationship between the blood vessels 73 and 75 and the liver 71.
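Separating the designated quadrangular area Sa1 as an independent object amounts to testing which projected points fall inside the quadrilateral spanned by the four touch points. The ray-casting test below is one standard way to do this; the vertex ordering and function names are illustrative assumptions.

```python
# Sketch of the rectangular-area designation: the two points before the
# drag (p1, p2) and the two after it (p1b, p2b) bound a quadrilateral,
# and points inside it are separated as an independent object.
def point_in_quad(q, p1, p2, p2b, p1b):
    """Ray-casting test for q inside the quad p1-p2-p2b-p1b (in order)."""
    poly = [p1, p2, p2b, p1b]
    inside = False
    for i in range(4):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % 4]
        # Does the edge cross the horizontal ray to the right of q?
        if (y1 > q[1]) != (y2 > q[1]):
            x_cross = x1 + (q[1] - y1) * (x2 - x1) / (y2 - y1)
            if q[0] < x_cross:
                inside = not inside
    return inside
```

Voxels whose projections test inside can then have their display density changed, or be hidden, independently of the rest of the structure, as described for region Sa1.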
• the area designation is not necessarily performed with a quadrangle; a triangular area, or a polygonal area of a pentagon or more, may also be designated.
  • medical images of the liver and its surroundings are taken as an example, but of course, the anatomical structure is not limited to a specific one in the present invention.
• a medical image of the examinee's head may be displayed and various image processing may be performed on the medical image.
• selection is performed when a target is uttered by voice and recognized, instead of by touching. For example, if the operator says "liver", the image processing apparatus 301 recognizes it and sets the liver to the selected state. In order to indicate the "selected state", the anatomical structure (for example, the liver) may be displayed in a color different from the initial state, or may be blinked.
  • the operator utters “Transparent”.
  • the image processing apparatus 301 recognizes the voice and switches the display to a semi-transparent display.
• thereby, other anatomical structures (in one example, blood vessels and tumors) can be observed through the semi-transparent display.
• the transparency may be changed according to the distance from the motion sensor to the operator's hand. That is, the transparency gradually increases (or decreases) when the hand is brought closer to the motion sensor, and conversely, the transparency gradually decreases (or increases) when the hand is moved away from the motion sensor.
  • the mode is set to accept the motion input as described above. The device then detects the distance from the motion sensor to the operator's hand and changes the transparency accordingly.
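The mapping from hand distance to transparency can be sketched as a simple linear interpolation. The sensor range, the direction of the mapping (the text notes it may be reversed), and the function name are illustrative assumptions.

```python
# Sketch of distance-driven transparency: the hand nearer the motion
# sensor gives higher transparency here (the direction may be reversed).
def distance_to_transparency(distance_mm, near=100.0, far=500.0):
    """Linearly map hand distance (mm) to a 0-100% transparency value."""
    d = min(max(distance_mm, near), far)          # clamp to sensor range
    return round(100.0 * (far - d) / (far - near))
```

Moving the hand smoothly toward or away from the sensor thus produces the gradual, contact-free transparency change described above.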
• Line cut / box cut: When performing line cut, for example, the operator utters "line cut".
  • the image processing apparatus 301 recognizes it and displays a cutting reference line on the screen. This reference line may be like the line L1 in FIG. 7B.
  • the image processing apparatus 301 waits for motion input.
  • the operator can change the position, length, orientation, etc. of the cutting reference line by motion input. Thereby, the reference line can be set at a predetermined position without contact.
  • FIG. 5 (b) shows an example of this, and the portion 371-2 on the right side of the reference line remains opaque and the portion 371-1 on the left side is semi-transparent.
• When performing box cut, for example, the operator utters "box cut". Although detailed illustration is omitted, the image processing apparatus 301 recognizes the voice and displays a rectangle (one example) serving as a reference for excision on the screen.
  • the size of the rectangle may be only one predetermined size, or a plurality of sizes such as large, medium, and small may be prepared.
  • This box cut is to cut the target anatomical structure at a predetermined depth.
  • the image processing apparatus 301 waits for motion input while displaying a rectangle serving as a reference for excision on the screen.
  • the size and shape of the quadrangle may be fixed, but may be changeable. For example, the size and shape of the quadrangle can be changed by moving the corners of the default quadrangle displayed first. Motion input can be used to move the corner position.
• a substantially rectangular parallelepiped hole, having a predetermined depth corresponding to the moving distance of the hand, is formed in the liver with the rectangle as its outline. As a result, it is possible to obtain a medical observation image in which a part of the liver is excised while the internal blood vessels are not excised.
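On voxel data, the box cut above reduces to zeroing a rectangular parallelepiped whose footprint is the on-screen rectangle and whose depth tracks the hand's travel. The array axis convention, units, and function names are illustrative assumptions.

```python
import numpy as np

# Sketch of the box cut: a rectangular outline plus a depth taken from
# the hand's travel carves a box-shaped hole into the volume (z, y, x).
def box_cut(volume, x0, x1, y0, y1, depth, hollow_value=0):
    """Return a copy of `volume` with a rectangular parallelepiped hole.

    The hole spans the rectangle [x0:x1) x [y0:y1) on the surface and
    extends `depth` slices inward from z = 0.
    """
    out = volume.copy()
    out[:depth, y0:y1, x0:x1] = hollow_value
    return out
```

In practice only liver-labeled voxels would be hollowed so that the blood vessels inside the hole remain displayed, as the text describes.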
• when a corresponding voice input is made, the image processing apparatus 301 recognizes it and rotates the medical image in which the hole is formed by 15°.
• thereby, the internal configuration of the hole (for example, a part of the liver is excised but the blood vessels remain displayed) can be observed.
  • the removal is performed using a rectangular outline, but it is needless to say that the outline may be defined by a triangular, polygonal, circular, elliptical or other arbitrary geometric shape.
  • the area designated as a box is cut out, but conversely, only the area designated as a box may be left and the other areas may be hidden.
• image data are obtained by performing fluoroscopic imaging a predetermined time after injecting the contrast agent. Depending on differences in the arrival time of the contrast agent and the like, CT values (signal values) differ between parts, so threshold data are set and filtered for each blood vessel or organ to create volume data for each part.
• the following operations may be relatively time-consuming. That is, in the viewer function of a three-dimensional medical image, for example, it is possible to switch between a display in which a blood vessel is emphasized and a display without emphasis. By being able to do this, for example, the peripheral portion of a blood vessel (where the CT value (signal value) is low) can be displayed or hidden as necessary for observation (see FIG. 10).
• with conventional methods, however, the display mode of all blood vessels changes uniformly unless the display threshold is reset or re-filtered for each blood vessel. Therefore, there is a problem in that the display cannot easily be changed.
  • the image processing apparatus has the following functions.
  • the image processing apparatus reads volume data based on information obtained by imaging a patient (see also step S1 in FIG. 3).
  • the signal value, CT value, and standard deviation (SD) in the volume are analyzed. For example, if the average CT value is 300 HU or more, it is automatically determined as an artery, and if the average CT value is 100 HU or less, it is automatically determined as an organ. In addition to the CT value, the histogram shape of the CT value is also recognized. Generally, arteries tend to have a high peak and a narrow width (distribution), and portal veins and veins have a low peak and a wide width (distribution). Therefore, automatic recognition of the blood vessel type can be realized based on such elements.
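The automatic discrimination above can be sketched with the mean CT value plus the spread of the distribution. The 300 HU and 100 HU cut-offs come from the text; the spread threshold, labels, and function name are illustrative assumptions.

```python
import statistics

# Sketch of the automatic tissue discrimination: mean CT value decides
# artery vs. organ, and the histogram spread separates arteries (high,
# narrow peak) from portal veins / veins (low, wide distribution).
def classify_region(ct_values, narrow_sd=40.0):
    mean = statistics.fmean(ct_values)
    sd = statistics.pstdev(ct_values)
    if mean >= 300:
        # A high, narrow distribution is typical of arteries.
        return "artery" if sd <= narrow_sd else "vein_or_portal"
    if mean <= 100:
        return "organ"
    return "undetermined"
```

A fuller implementation would examine the actual histogram shape (peak height and width) rather than only the standard deviation, as the text suggests.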
• a histogram of blood vessels originally exists for each artery, vein, portal vein, and so on, but the histograms may be integrated and normalized so that all blood vessels (in another embodiment, two or more arbitrary blood vessels) can be represented together. Specifically, as an example, the average value and the center of gravity of each histogram may be calculated, and one histogram may be created by shifting the whole so that the lower one matches the higher one.
• the image processing apparatus accepts a predetermined input from the operator and does not display a portion at or below (or, alternatively, below) a certain reference value.
  • peripheral parts such as arteries, veins, and portal veins can be collectively hidden (see, for example, FIG. 10B).
• the lower limit value of the CT value to be displayed may be set lower, as shown in FIG. 10A (the threshold value is 130 HU here).
  • predetermined input from the operator may be performed by operating an image button such as an icon, a cursor, or a slider on the screen, for example.
  • a predetermined gesture of an operator's finger on the touch panel may be recognized and performed based on it.
  • the blood vessel display may be changed by the operator touching the touch panel with several fingers and simultaneously moving the fingers in a predetermined direction. More specifically, when several fingers are simultaneously moved upward (in the first direction), the peripheral part of the blood vessel is displayed, and conversely, the fingers are moved downward (in the second direction). When moved, the peripheral portion of the blood vessel (more precisely, the vicinity of the outer edge of the thick portion of the blood vessel) may be removed.
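The gesture-driven display change above can be sketched as adjusting a shared display lower limit: an upward multi-finger swipe lowers the limit (revealing the vessel periphery) and a downward swipe raises it. The step size, bounds, and function names are illustrative assumptions; the 130 HU starting value follows FIG. 10A.

```python
# Sketch of the swipe-controlled display threshold for blood vessels.
def adjust_lower_limit(current_hu, direction, step=20, lo=0, hi=500):
    """direction: 'up' reveals the vessel periphery, 'down' removes it."""
    if direction == "up":
        current_hu -= step
    elif direction == "down":
        current_hu += step
    return min(max(current_hu, lo), hi)

def visible_voxels(ct_values, lower_limit_hu):
    """Keep only voxels at or above the current display lower limit."""
    return [v for v in ct_values if v >= lower_limit_hu]
```

Because one shared limit is adjusted, the peripheral parts of arteries, veins, and portal veins are shown or hidden collectively, without re-filtering each vessel.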
• the above-described operation can also be performed by a motion input via the motion sensor 380 instead of an operation on the touch panel. That is, in this configuration, the display of blood vessels (or, for example, other anatomical structures) can be switched by voice input and motion input alone, so it is not necessary to touch the touch panel, and the three-dimensional medical image can be observed while cleanliness is maintained.
• when a pointer appears as an image on the screen and is positioned at a predetermined part, that part can be watched (pointed out).
• the pointer needs to be arranged at an arbitrary part in the three-dimensional space, not merely in a plane. Such an operation of moving the pointer to an arbitrary part in the three-dimensional space is relatively difficult to perform with an input interface such as a mouse or a touch panel.
  • the pointers may be arranged three-dimensionally using motion input.
  • the medical image processing apparatus 301 first receives an input of a “pointer” (an example) using a voice recognition function. Then, as an example, a three-dimensionally displayed pointer is displayed on the screen.
• in the vertical and horizontal directions on the screen, the pointer may be moved in accordance with the movement of the operator's hand in the horizontal plane. With respect to the depth direction, the pointer may move in the depth direction of the three-dimensional medical image when the operator brings the hand closer to the motion sensor 380, and may move in the near direction when the hand is moved away.
  • a display mode in which the display size gradually decreases as it moves in the back direction and gradually increases as it moves in the front direction may be adopted.
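The depth-linked pointer can be sketched as a mapping from hand distance to a normalized depth, with the drawn size shrinking as the pointer recedes. The sensor range, size range, and function name are illustrative assumptions.

```python
# Sketch of the depth-linked pointer: hand distance from the motion
# sensor drives the pointer's depth (closer hand = deeper, per the text),
# and the display size shrinks with depth to suggest perspective.
def pointer_depth_and_size(hand_dist_mm, near=100.0, far=500.0,
                           max_size=40.0, min_size=10.0):
    """Map hand distance to (depth in 0..1, display size in pixels)."""
    d = min(max(hand_dist_mm, near), far)
    depth = (far - d) / (far - near)                # 1.0 = deepest
    size = max_size - depth * (max_size - min_size)
    return depth, size
```

This gives the operator contact-free control over all three pointer axes: hand position in the horizontal plane for the screen directions, and hand distance for depth.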
  • an image processing apparatus may include a schema image writing function.
  • the image processing apparatus when there is a predetermined input by the operator, the image processing apparatus creates a schema image corresponding to the data using data of a three-dimensional medical image (see, for example, FIG. 5).
  • the schema image may be any two-dimensional image such as a line drawing, a monochrome image, or a color image.
  • Examples of the predetermined input by the operator include various inputs such as an input by touching an icon on the screen, a voice input, or a predetermined gesture input via a motion sensor.
• for example, when a medical image is currently displayed in an orientation (an example) as shown in FIG. 5, the image may be converted into two dimensions as it is and written out as data. At this time, a line drawing may be created by performing contour extraction processing.
  • the data format may be any format, but for example, a PDF (Portable Document Format) format or any other image format such as GIF, PNG, or JPEG can be used.
  • a doctor can write a sketch, a finding, or the like on the schema image created in this way, for example, with a touch pen or a finger.
  • the schema image created by the image processing apparatus can be sent out from the apparatus and stored in a predetermined storage area connected on the network (see FIG. 3). For example, it may be imported as a part of the electronic medical record.
  • the image processing apparatus is a portable type such as a tablet terminal, and can be taken out of the hospital and used in some cases. Such a configuration may be useful, for example, when performing a simulation of a procedure while confirming a three-dimensional medical image of a specific patient outside the hospital. However, it is necessary from the viewpoint of security that the internal information is kept secret when it is taken out of the hospital.
• the image processing apparatus preferably has the following functions: (a) a function for recognizing the current position of the apparatus, and (b) a function for concealing predetermined internal information when the apparatus is determined to be outside the hospital. Information including at least information that can identify a patient corresponds to the information to be concealed.
  • the target information may be encrypted, or access to the information may be prohibited.
  • determining whether the hospital is inside or outside may be based on whether it is within the range of the wireless network system in the hospital.
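The inside/outside determination via the hospital wireless network, followed by concealment of patient-identifying fields, can be sketched as below. The SSID names, field list, masking string, and function name are all illustrative assumptions.

```python
# Sketch of location-based concealment: if no registered hospital SSID
# is visible, the apparatus is treated as outside and patient-
# identifying fields are masked (assumed names for illustration).
HOSPITAL_SSIDS = {"hospital-staff", "hospital-imaging"}
IDENTIFYING_FIELDS = {"name", "patient_id", "date_of_birth", "address"}

def conceal_if_outside(record, visible_ssids):
    """Return a copy of `record`, masked when outside the hospital."""
    inside = bool(HOSPITAL_SSIDS & set(visible_ssids))
    if inside:
        return dict(record)
    return {k: ("***" if k in IDENTIFYING_FIELDS else v)
            for k, v in record.items()}
```

Non-identifying information (for example, the modality) stays visible, matching FIG. 12 in which only information that can identify the patient is concealed.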
  • FIG. 12 is a diagram schematically illustrating a state in which concealment is performed. In this screen, all information that can identify the patient is concealed. An icon 441 is displayed on the screen.
  • a doctor who is viewing a three-dimensional medical image may want to temporarily confirm patient information, for example, when it is desired to confirm which patient the image belongs to.
  • the medical image processing apparatus of this example first determines that the icon 441 has been pressed (FIGS. 12 and 13), and then displays patient information.
  • the patient information may be data stored inside the apparatus, or may be data obtained by accessing an external server (a server in a hospital system, for example).
  • the conditions under which such patient information can be displayed are limited to certain conditions.
• as an example, this is a case where fingerprint authentication of the operator is performed on the apparatus, or where identity authentication is performed by another authentication method.
• communication with the external server may, as an example, be configured to be possible only under secure communication conditions such as a VPN (Virtual Private Network).
• as the information to be displayed, for example, one, two, or three or more of the patient's initials, date of birth, sex, age, address, operation date, doctor in charge, and the like may be used. A patient ID, an examination ID, and the like may also be displayed.
  • the displayed patient information may be automatically hidden again after a certain period of time.
  • the display may be continued during the operation (in one example, the display is continued until the operation is completed (logout)).
• since the minimum patient information and/or operation information can be confirmed as necessary, the possibility of problems such as mistaking the displayed 3D medical image for that of a patient other than the one actually being operated on can be reduced.
  • the medical image processing apparatus may have a function of displaying a stereo image as described below.
  • an image including the first image 431L and the second image 431R may be displayed so that the operator can view stereoscopically.
• the first image 431L and the second image 431R display the same subject with a predetermined parallax. They are sometimes referred to as a left-eye image and a right-eye image.
  • an operation pad area 433 may be displayed as shown in FIG.
  • the operation pad area 433 is an area for changing the display angle of the subject.
  • the medical image processing apparatus changes the display angle of the subject (two images) at the same time accordingly. That is, the direction of the subject can be changed in conjunction with the movement of the finger.
  • the subject image is not particularly limited, but may be a blood vessel contrast image.
  • the doctor can more accurately grasp the three-dimensional structure of the subject through stereoscopic vision.
  • the apparatus has a function of displaying a first image and a second image having different parallaxes. More specifically, it further has a function of displaying an operation pad area for operating those display angles. For example, one or a combination of gesture input, voice input, motion input, and the like can be used to change the display of these stereoscopic images.
• this specification also discloses inventions of a method and a program corresponding to the above content.
  • a fluoroscopic image of a patient is stored in a predetermined data server (for example, a DICOM server: one that stores data received from a modality in a predetermined format).
  • management may be performed by dividing into sections such as arteries, veins, bones, organs, and the like (further subdivided sections).
• information for selection by the voice recognition function (for example, voice input of "aorta" for the aorta) may be set for each object.
• automatic discrimination keys (symbols, alphabets, numbers, and combinations thereof) may also be set for each object.
  • an offset value may be set for each individual object.
• the offset value (see "CombineOfs" in the table) is set to, for example, "+50" for the aorta, "+100" for the abdominal artery, "+400" for the vein, and the like.
• the offset value is used as follows, as an example. Suppose that the central CT value of an artery is 350 HU and the central CT value of a vein is 200 HU. In this way, the degree of contrast enhancement differs between arteries and veins (and portal veins). Therefore, in the above example, the offset value is used to align them: the vein offset value is set to +150 HU, virtually increasing the contrast effect so that arteries and veins can be handled with the same threshold setting.
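The per-object offset can be sketched as an additive correction applied before one shared threshold. The +150 HU vein offset and the 350 HU / 200 HU central values follow the example in the text; the shared threshold of 300 HU and the function name are illustrative assumptions.

```python
# Sketch of the "CombineOfs" offset: adding a per-vessel offset virtually
# boosts the weaker contrast so one shared threshold selects both types.
OFFSETS_HU = {"artery": 0, "vein": 150}  # illustrative table entries

def passes_shared_threshold(ct_value, vessel_type, threshold=300):
    """Apply the vessel's offset, then test against the common threshold."""
    return ct_value + OFFSETS_HU.get(vessel_type, 0) >= threshold
```

With this correction, a vein voxel at its central 200 HU clears the same threshold as an artery voxel at 350 HU, which is the alignment the offset is intended to achieve.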
  • the offset value may be set not only in the artery and vein but also in other blood vessels such as the portal vein.
• the following usage may also be performed with respect to the offset values of the organ system ("real organs") instead of the vascular system.
• real organs such as the liver have comparatively low CT values, so when a threshold setting suited to blood vessels is applied, the display of the real organs changes accordingly and their shape collapses. Therefore, when it is desired to keep a real organ displayed, the offset value of the real organ (for example, the liver) may be set to a large value such as +700 HU.
• the presence of a setting table as described above is preferable in that the labor of preparation can be saved as much as possible.
• several tables are preferably prepared. This is because in one surgical procedure it may be preferable that blood vessel A (or organ A) can be seen clearly, while in another surgical procedure it may be preferable that blood vessel B (or organ B) can be seen more clearly than blood vessel A (or organ A). That is, in one embodiment it is preferable that several tables, each having offset values corresponding to a technique, are registered.
• the system or apparatus may be configured as follows: (i) a plurality of techniques are displayed, (ii) the operator selects one of them, and (iii) the corresponding table is called (and displayed as necessary).
• the display of surgical techniques may be performed in accordance with a mode for selecting which part of the body the treatment is performed on (for example, in a user interface for setting injection conditions).
  • it may be configured to display a technique corresponding to a selected predetermined part (head, chest, abdomen, etc.) (see (i) above).
  • the embodiment described above uses voice input or motion input (input corresponding to human movement).
• a predetermined physical switch (for example, a foot switch) may be used in combination with these inputs.
• the "foot switch" is, as an example, used by being placed on the floor, and includes a switch housing in which a sensor and a substrate are housed, and a pressing unit that can be stepped on with a foot or the like.
  • the pressing portion is not limited, but may be a portion configured to be movable so as to be pushed down when stepped on.
  • the detection signal from the foot switch may be supplied to the outside via a cable (wired), or may be configured to be supplied to the outside wirelessly.
  • the foot switch may be electrically connected to the medical image processing apparatus of the present invention (for example, the control unit 350, see FIG. 2), but is not limited thereto.
  • a configuration in which the foot switch is connected to another device may be used.
  • FIG. 15 shows an example of the arrangement of foot switches.
• the chemical injection device includes an injection head 475 disposed near the imaging device 470, a first control unit (power supply unit) 478 connected thereto, and a console (second control unit) 476 connected thereto.
  • the foot switch 477d is connected to the power supply unit as an example.
  • a plurality of reception conditions may be set for voice operation.
  • a foot switch may be used as one of the reception conditions.
• the following settings may be made (one or more):
1) Accept voice input only when there is motion input
2) Accept voice input only when the foot switch is ON
3) Accept voice input only when there is motion input and the foot switch is ON
4) Always accept voice input
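The four acceptance settings above can be sketched as a single gating function over the motion-input and foot-switch states; the function name and setting numbering are illustrative.

```python
# Sketch of the voice-input acceptance conditions 1) - 4) listed above.
def voice_accepted(setting, motion_active, foot_switch_on):
    if setting == 1:     # only while there is motion input
        return motion_active
    if setting == 2:     # only while the foot switch is ON
        return foot_switch_on
    if setting == 3:     # both conditions required
        return motion_active and foot_switch_on
    if setting == 4:     # always accept
        return True
    raise ValueError("unknown setting")
```

Registering one of these settings in the voice operation command table, as the text describes, then determines when a recognized utterance is actually treated as input.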
  • the input conditions as described above may be registered in the voice operation command table of the device.
  • the input 1) means that no voice input is accepted in the absence of motion input.
• the following input method is used for voice input. That is, for example, a command input is accepted only when a combination of direction and angle is entered, such as "45° left, 30° up". This is because, if the input is not combination information, the probability of misrecognition may increase. This will be described in detail below.
• if, while voice input is ON, single words such as "up" and "front" could be recognized on their own, it is conceivable that a word in an ongoing conversation would be recognized and an unintended input would occur. Therefore, a configuration may be adopted in which a command is accepted only when a combination of a plurality of words is recognized.
  • Such a voice input method for preventing erroneous recognition is not necessarily limited to the combination of direction and angle as described above.
• for example, the command may be accepted only when a predetermined combination of words is recognized.
• for example, an input such as "direction" + "front" may be accepted, while the opposite input "front" + "direction" may not be accepted (that is, the order of the combination is fixed).
• similarly, the command may be accepted only when a combination such as "3D" + "enlarge" is recognized, for example, instead of simply recognizing the word "enlarge".
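The combination-only recognition can be sketched as a lookup of ordered word pairs: a single word such as "enlarge" matches nothing, and a reversed pair is likewise rejected. The command table contents and function name are illustrative assumptions.

```python
# Sketch of combination-only command recognition: only registered
# ordered word pairs are accepted; single words and reversed pairs
# are rejected (illustrative table entries).
COMMAND_PAIRS = {
    ("3D", "enlarge"): "zoom_in",
    ("direction", "front"): "view_front",
}

def parse_command(words):
    """Accept only a registered multi-word combination, in fixed order."""
    return COMMAND_PAIRS.get(tuple(words))  # None means rejected
```

Because stray conversational words never form a registered ordered pair, this reduces the chance that ordinary speech near the apparatus triggers an unintended command.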
• a structure may be adopted in which the voice input is accepted only for a certain period of time (that is, a configuration in which the voice input is not accepted outside that period).
  • the foot switch may be turned ON only while the pressing part is stepped on.
• the subsequent process may be performed only when the foot switch is stepped on for a certain period of time (a so-called long press). According to this, it is possible to prevent unnecessary processing from being performed by unintentionally pressing the foot switch.
• the input as described above may be required for processes in which such unintended processing is a concern.
  • the above operation may be performed when other switches (physical switches) are ON instead of the foot switch.
• by default, voice input is accepted only in predetermined modes suitable for voice input, and in other modes input is performed by other input methods.
• when a predetermined input (for example, "speech all ON", as an example) is entered, a function may be provided that enables voice input even for operations (or at least a part of them) that do not accept voice input by default.
• such expansion of voice input is just an example, and the apparatus may return to the original state after a certain time has elapsed (a timeout function).
• for the timeout function, a time-out period of about 1 minute, 3 minutes, or 5 minutes may be set in advance.
  • the input for designating the angle may be configured to always react only to voice.
• a table for each word of voice input may be prepared, and the circumstances under which input is permitted may be set for each word (for example, the word input "aaa" is accepted only while the foot switch is ON, whereas the word input "bbb" is always accepted).
  • the image processing apparatus of the present invention may basically be configured such that a medical image viewed from a certain direction is always displayed by default regardless of the part.
  • the configuration is such that an image viewed from the front of the body is always displayed by default regardless of the site and / or technique.
  • the configuration is such that an image viewed from a preset angle is displayed by default, which is preferable depending on the type of the region and / or technique.
• in thoracoscopic surgery, the lateral position is fundamental. Therefore, in the case of thoracoscopic surgery, the orientation of the lateral position may be set as the home position and used for the default display.
• likewise, for other techniques, an orientation suited to the technique may be used for the default display.
  • the selection of the home position may be set manually or automatically according to at least one of the surgical method, the site, and the position of the tumor.
  • whether the lesion is in the left lung or the right lung determines whether to be in the left lateral position or the right lateral position.
  • a configuration is also useful in which the position of a lesion (tumor) is automatically recognized and one of them is automatically determined accordingly.
  • the screen displayed when the voice input is ON may be as shown in FIG. 16, for example.
• this screen is the display example shown in FIG. 6 with several images added, but the images in FIG. 6 (see reference numerals 391 and 393) may be omitted, as will be readily understood by those skilled in the art.
• there may be a voice input (voice recognition) ON display.
• similarly, there may be a motion input ON display 397a.
• there may also be a display unit 398 that displays the recognized voice as characters. This makes it possible to visually confirm what kind of voice input has been made.
• there may also be a display unit 399 indicating whether or not the recognized voice was accepted as a command. If it was not accepted, a display such as "REJECTED" may be shown, which allows the operator to visually confirm that it was not accepted.
  • A medical image processing apparatus (301) comprising: a display (361); a control unit (350) connected to the display; an audio input device (370); and a motion sensor (380), wherein the control unit (350) has: a: an image display unit for displaying a three-dimensional medical image on the display; b: a mode selection unit for recognizing a voice input using the voice recognition device (170) and switching a mode related to the display of the three-dimensional medical image accordingly; and c: a display processing unit for recognizing an operator's motion input via the motion sensor and changing the display of the three-dimensional medical image accordingly.
  • an apparatus or system having at least a control unit having the above-described characteristics.
  • Zoom mode for enlarging and reducing the image; Pan mode for translating the image
  • Rotation mode to rotate the image
  • the three-dimensional medical image rotates in response to the movement of the operator's hand.
  • the medical image processing apparatus as described above.
  • the three-dimensional medical image includes at least liver and blood vessel image data.
  • a: processing for displaying a three-dimensional medical image on a display
  • b: processing for recognizing a voice input using the voice recognition device (170) and switching a mode relating to the display of the three-dimensional medical image accordingly
  • c: processing for recognizing an operator's motion input via a motion sensor and changing the display of the three-dimensional medical image accordingly
  • a medical image processing program causing a computer to execute processes a to c above.
  • Zoom mode for enlarging and reducing the image; Pan mode for translating the image
  • Rotation mode to rotate the image
  • a computer displaying a three-dimensional medical image on a display;
  • a computer recognizing an input voice using a voice recognition device (170) and switching a mode relating to display of the three-dimensional medical image accordingly;
  • a computer recognizing an operator's motion input via a motion sensor and changing the display of the three-dimensional medical image accordingly;
  • a method for operating a medical image processing apparatus, comprising the above steps.
  • Zoom mode for enlarging and reducing the image; Pan mode for translating the image
  • Rotation mode to rotate the image
  • in the zoom mode, when the operator's hand is moved in a first direction, the three-dimensional medical image is enlarged, and when the hand is moved in the opposite, second direction, the three-dimensional medical image is reduced.
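The voice-selected display modes and motion-driven display changes enumerated in the items above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the class name, the mode words, and the numeric sensitivity factors are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of a voice-selected mode / motion-driven display loop.
# MODES maps recognized command words to display modes; hand displacement
# from the motion sensor then changes the display according to the mode.

MODES = {"rotate": "rotation", "move": "pan", "zoom": "zoom", "stop": None}

class DisplayState:
    def __init__(self):
        self.mode = None          # active display mode, set by voice command
        self.scale = 1.0          # zoom factor
        self.offset = [0.0, 0.0]  # pan offset (screen x, y)
        self.angles = [0.0, 0.0]  # rotation about screen x/y axes (degrees)

    def on_voice_command(self, word):
        """Switch the display mode when a recognized command word arrives."""
        if word in MODES:
            self.mode = MODES[word]

    def on_hand_motion(self, dx, dy, dz):
        """Change the 3D image display according to hand movement
        (dx, dy, dz: hand displacement reported by the motion sensor)."""
        if self.mode == "zoom":
            # moving the hand in one direction enlarges, the opposite reduces
            self.scale = max(0.1, self.scale * (1.0 + 0.01 * dz))
        elif self.mode == "pan":
            self.offset[0] += dx
            self.offset[1] += dy
        elif self.mode == "rotation":
            self.angles[0] += 0.5 * dy
            self.angles[1] += 0.5 * dx
```

With "stop" the mode returns to None, so subsequent hand movement leaves the display unchanged, matching the stop command described later in the specification.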


Abstract

This image processing device (301) is equipped with: a display (361); a control unit (350) which is connected to the display; a speech input device (370); and a motion sensor (380). The control unit (350) has: a: an image display part for displaying a three-dimensional medical image on the display; b: a mode selection part for switching, upon recognizing speech input through a speech recognition device (170), the mode of display related to the three-dimensional medical image in response to the speech; and c: a display processing part for changing, upon recognizing via the motion sensor a motion input by an operator, the display of the three-dimensional medical image in response to the motion.

Description

Medical image processing apparatus and medical image processing program
The present invention relates to a medical image processing apparatus and a medical image processing program. It particularly relates to a medical image processing apparatus and the like that can display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while maintaining the cleanliness of the operator.
Currently, CT (Computed Tomography) apparatuses, MRI (Magnetic Resonance Imaging) apparatuses, PET (Positron Emission Tomography) apparatuses, ultrasound diagnostic apparatuses, angiography imaging apparatuses, and the like are known as medical diagnostic imaging devices.
In recent years, a simulation may be performed, for example, before operating on an organ such as the liver, in which blood vessels are intricately intertwined. The simulation is performed, for example, by carrying out a contrast-enhanced CT examination, preparing a fluoroscopic image of the site to be treated, and reviewing it on a display. Such simulations are useful for studying treatment plans. For example, Patent Document 1 discloses a technique for simulating a surgical operation by displaying an organ such as the liver on a tablet terminal.
JP 2014-54358 A
The technique of Patent Document 1 uses a tablet terminal, so it is useful in that the terminal can be carried anywhere for preoperative simulation. In addition, since it can be brought into the operating room, it is also useful in that some confirmation can be performed during the operation.
However, if the terminal is not easy for the operator to use, its functions may go unused. It is also desirable that operation be possible while keeping the operator clean, because a medical image processing apparatus may be operated by a physician or other staff during surgery.
The present invention has been made in view of these problems. Its object is to provide a medical image processing apparatus, an image processing program, and the like that can display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while keeping the operator clean.
A medical image processing apparatus according to one embodiment of the present invention for solving the above problems is as follows:

a display;
a control unit (processor) connected to the display;
an audio input device; and
a motion sensor;
a medical image processing apparatus comprising the above, wherein the control unit (processor) has:
a: an image display unit for displaying a three-dimensional medical image on the display;
b: a mode selection unit for recognizing voice input via the voice recognition device and switching a mode relating to the display of the three-dimensional medical image accordingly; and
c: a display processing unit for recognizing an operator's motion input via the motion sensor and changing the display of the three-dimensional medical image accordingly.
In other words, in the apparatus of one embodiment of the present invention, the control unit (processor) is configured to:
- display a three-dimensional medical image on the display;
- recognize voice input via the voice recognition device and switch a mode relating to the display of the three-dimensional medical image accordingly; and
- recognize an operator's motion input via the motion sensor and change the display of the three-dimensional medical image accordingly.
(Explanation of terms)
- "Anatomical structure" refers to an object that can be recognized within a subject (for example, an organ, bone, or blood vessel), and also includes fat, lesions such as tumors, and the like.
- "Terminal" refers to an information processing apparatus that performs data processing and is connected to a network or used standalone. Arbitrary peripheral devices may be connected to it. A device with various functions integrated into a single unit, such as a tablet terminal or laptop computer, is basically preferable; in some cases, however, some of its functions may be distributed functionally or physically in arbitrary units, for example according to load.
- "Connected" covers not only the case where two elements are directly connected but also, without departing from the spirit of the present invention, the case where one element is indirectly connected to another via some intermediate element. It also covers both wired and wireless connections.
In this specification, a component expressed as a "function" + "unit" corresponds to a functional block that performs a predetermined function. A functional block does not necessarily correspond to a division between hardware circuits; one or more functional blocks may therefore be implemented by a single piece of hardware, or by multiple pieces of hardware.
According to the present invention, it is possible to provide a medical image processing apparatus and the like that can display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while maintaining the cleanliness of the operator.
Brief description of the drawings (each item corresponds to one figure, in order):
- a diagram showing the medical image processing apparatus of one embodiment;
- a diagram showing an example block diagram of the image processing apparatus of FIG. 1;
- a diagram showing an example of the components of a hospital system;
- a flowchart of an operation example of the image processing apparatus of FIG. 1;
- a diagram showing an example of a three-dimensional medical image displaying the liver and surrounding blood vessels;
- an example of a graphical image used when recognizing the position of the operator's hand;
- a diagram showing an example of a three-dimensional medical image;
- a diagram for explaining how two points are designated on an arbitrary target in a three-dimensional medical image;
- a diagram for explaining how the position of a designated point is finely adjusted;
- a diagram showing an example of dividing the liver;
- a diagram for explaining how a part of the divided liver is touched;
- a diagram showing a state in which only a part of the divided liver is hidden;
- a flowchart showing the flow of an example operation procedure;
- a diagram for explaining a region designation procedure;
- a diagram for explaining an example region designation procedure;
- a three-dimensional medical image showing an example in which blood vessels are displayed with different thresholds;
- a diagram showing an example of a screen displaying a stereo image;
- a diagram showing an example of a screen relating to concealment of patient information;
- an operation flow for displaying patient information in a predetermined case;
- a table showing that individual parts of a patient's fluoroscopic image are separately and independently managed in advance on a data server;
- a diagram schematically showing an example arrangement of a foot switch;
- an example of a screen displayed while voice input is ON.
Embodiments of the present invention will be described below with reference to the drawings.

[Section A: A medical image processing apparatus capable of performing various display processes with good workability while keeping the operator clean]

1. Configuration

The medical image processing apparatus 301 of the present embodiment is, as an example, a portable computer device such as a tablet terminal. Alternatively, it may be a laptop PC (notebook PC) having a touch panel display. FIG. 1 shows an example of a tablet terminal; the apparatus may be configured by installing an image processing program according to one embodiment of the present invention on a commercially available tablet terminal. Hereinafter, the medical image processing apparatus is referred to simply as the image processing apparatus. The tablet terminal or notebook PC is not particularly limited, but in one embodiment a screen size of 9 inches or more, or 10 inches or more, is preferable; a thickness of 20 mm or less, or 15 mm or less, is preferable; and a mass of 2 kg or less, or 1.5 kg or less, is preferable.
As shown in FIG. 1, in this example the image processing apparatus 301 has a thin casing 301a, and a touch panel display 360 is provided on one of its surfaces. The touch panel display 360 is composed of a display 361 (see FIG. 2) and a touch panel 363 (see FIG. 2).
Such an image processing apparatus 301 can be connected to a hospital system network, for example, as shown in FIG. 3. The hospital system in this example includes the following devices connected to the network 30: an imaging apparatus 1, a chemical liquid injector 10, an HIS (Hospital Information System) 21, an RIS (Radiology Information System) 22, a PACS (Picture Archiving and Communication Systems) 23 serving as an image storage and communication system, a workstation 24, a printer 25, and so on. Not all of these are essential components, and some may be omitted. Each element may be provided singly or in plurality. The connection to the network may of course be wired or wireless.
Examples of the imaging apparatus 1 include a CT apparatus, an MR apparatus, and an angiography apparatus. Other types of imaging apparatus may be used, and a plurality of imaging apparatuses of the same or different types may also be used. A three-dimensional medical image, described later, may be created using images from a plurality of modalities, for example by combining an image captured by a CT apparatus with an image captured by an MR apparatus.
The chemical liquid injector 10 may be a contrast agent injector that injects at least a contrast agent; specifically, it may include a drive mechanism that pushes the liquid out of a container filled with the liquid (a syringe, in one example) and a control circuit that controls its operation. As an example, a contrast agent injector including an injection head and a console can be used. The drive mechanism may be a piston drive mechanism, a roller pump, or the like.
Referring to the block diagram of FIG. 2, the image processing apparatus 301 includes a display 361, a touch panel 363, an input device 365, a communication unit 367, an interface 368, a slot 369, a control unit 350, a storage unit 359, and so on. Not all of these are essential, and some may be omitted.
Examples of the display 361 include devices such as a liquid crystal panel and an organic EL panel. A touch panel display in which the touch panel 363 is integrally provided can also be used. As the touch panel, a resistive, capacitive, electromagnetic-induction, surface-acoustic-wave, infrared, or similar type can be used. As a specific example, one capable of detecting multi-touch (touches at a plurality of positions), such as a capacitive type, may be used. Touch operations can be performed with the user's finger, a touch pen, or the like. The touch panel may detect the start of a touch operation, movement of the touch position, the end of a touch operation, and so on, and output the detected touch type and coordinate information.
As described later, the image processing apparatus 301 of the present embodiment can perform operations related to image display using voice input and motion input; the touch panel 363 may therefore be omitted in some cases.
Examples of the input device 365 include general devices such as a keyboard and a mouse.
The storage unit 359 may be composed of a hard disk drive (HDD), a solid state drive (SSD), and/or memory, and may store an OS (Operating System) program and a medical image processing program according to one embodiment of the present invention (including algorithm data, graphical user interface data, and the like).
Other programs used for various processes, as well as tables, databases, and the like, are stored as necessary. The computer program may be executed by being loaded into the memory of the control unit and may cooperate with hardware such as a CPU, thereby constituting a control unit having the functions of the present embodiment.
The computer program may be downloaded in whole or in part from an external device as needed via an arbitrary network. The computer program may be stored in a computer-readable recording medium; "recording medium" includes any portable physical medium such as a memory card, USB memory, SD card (registered trademark), flexible disk, magneto-optical disk, ROM, EPROM, EEPROM, CD-ROM, MO, DVD, and Blu-ray (registered trademark) Disc. The medical image processing apparatus of the present embodiment may be provided with a slot 369 for reading such a storage medium.
The communication unit 367 is a unit for communicating with an external network or device in a wired or wireless manner. The communication unit 367 may include a transmitter for sending data to the outside, a receiver for receiving data from the outside, and the like. The interface 368 is for connecting various external devices; although only one is shown in the figure, a plurality may of course be provided.
The slot 369 is a part for reading data from a computer-readable medium, and the interface 368 is a part for connecting external devices and the like.
The image processing apparatus 301 of this embodiment includes a microphone 370 as an audio input device. It may be a microphone built into the casing, or an external microphone that is separate from the casing and connected to the terminal by wire or wirelessly. A device in which the motion sensor 380 described below and a microphone are integrated can also be used.
To perform voice recognition, voice recognition software is installed in the image processing apparatus 301, thereby constituting a voice recognition unit 351.
(Motion sensor)
The motion sensor 380 is a sensor that detects the movement of at least a part of the operator's body three-dimensionally and without contact. Motion recognition software is installed in the image processing apparatus 301, thereby constituting a motion recognition unit 353.
As the motion sensor 380, for example, a Leap Motion controller (manufactured by Leap Motion; "Leap Motion" is a registered trademark) can be used. This controller is an input device that can recognize, in real time and without contact, the position, shape, and movement of the operator's fingers and/or the position and movement of the palm. The Leap Motion controller is configured as a sensor unit incorporating an infrared emitter, CCD cameras, and the like, with the area above the unit serving as the recognition area. The sensor unit is used by connecting it to a tablet terminal or laptop PC by wire or wirelessly.
Alternatively, for example, Kinect (manufactured by Microsoft; registered trademark) can be used as the motion sensor 380. The motion sensor 380 may include one or more cameras and one or more distance sensors, or only one of these. A unit in which a microphone is built into the motion sensor 380 may also be used. As for the accuracy with which the motion sensor 380 detects its target (for example, a hand), in one example detection with an accuracy of 5 mm or less is preferable, and 1 mm or less is more preferable.
The detection principle of the motion sensor 380 is also not limited to any particular one. One approach uses a method called Light Coding, in which a large number of dot patterns are projected from an infrared emitter and a camera reads the amount of change (distortion) when the dot pattern strikes the detection target (a person). Another approach uses a method called TOF (Time of Flight), which is suited to sensing fine finger movements at a relatively close recognition range; the TOF method measures distance by analyzing the time until the emitted infrared light strikes the object and returns. In general, its recognition accuracy is higher than that of the Light Coding method, with less degradation of accuracy with distance. As yet another approach, a method may be used in which two cameras capture the reflection of light projected onto the object by infrared LEDs and the movement is recognized.
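The TOF principle mentioned above reduces to a simple relation: the distance to the object is half the round-trip distance traveled by the light. The following sketch only illustrates that relation; the example numbers are not specifications of any particular sensor.

```python
# Illustrative TOF (Time of Flight) distance calculation:
# distance = (speed of light x round-trip time) / 2.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s):
    """Distance to the reflecting object from the measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a hand about 30 cm from the sensor corresponds to a round trip
# of roughly 2 nanoseconds.
```

This also shows why TOF sensing demands very fine time resolution: the 5 mm accuracy mentioned above corresponds to timing differences on the order of tens of picoseconds.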
(Control unit)
Referring again to FIG. 2, the control unit 350 has hardware such as a central processing unit (CPU) and memory, and a computer program is installed on it to perform various arithmetic processes. As a conceptual configuration, the control unit 350 includes an image display unit 355a, an operation determination unit 355b, a display processing unit 355c, and a mode selection unit 355d, and, as described above, the voice recognition unit 351 and the motion recognition unit 353.
The image display unit 355a displays a medical three-dimensional image on the display 361. As one example, the image display unit 355a displays each anatomical structure, such as the liver and blood vessels, as an independent object. Each anatomical structure may also be displayed in a different color. The color in which each anatomical structure is displayed may be manually input and set by the operator, but is not necessarily limited to this; as described later, when colors have been assigned in advance on a predetermined data server side (for example, by a table), the display may follow those assignments.
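The per-structure color assignment described above can be sketched as a simple lookup table, with a server-side table taking precedence when one exists. This is a minimal sketch under stated assumptions: the table contents, colors, and function names are hypothetical examples, not values from the specification.

```python
# Hypothetical per-structure display attributes (RGB color, visible flag).
# A server-side table, when provided, overrides the local defaults.

DEFAULT_STRUCTURE_TABLE = {
    "liver":          ((205, 92, 92), True),
    "portal vein":    ((65, 105, 225), True),
    "hepatic artery": ((220, 20, 60), True),
    "tumor":          ((255, 215, 0), True),
}

def structure_display_attrs(name, server_table=None):
    """Return (color, visible) for a structure, preferring the server-side
    assignment when one exists; unknown structures fall back to gray."""
    table = server_table if server_table is not None else DEFAULT_STRUCTURE_TABLE
    return table.get(name, ((128, 128, 128), True))
```

Keeping the attributes in a table like this is what allows each structure to be rendered as an independent object and toggled or recolored without touching the underlying image data.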
The operation determination unit 355b receives input operations on the input device 365, the touch panel 363, and so on.
The display processing unit 355c performs various image processing operations, for example:
- rotation of the three-dimensionally displayed image;
- translation of the three-dimensionally displayed image;
- enlargement/reduction of the three-dimensionally displayed image;
- changing the transparency of the three-dimensionally displayed image;
- switching display/non-display of a given object;
- a cut (division) function for a given object;
- a region designation function for a given object; and so on.
The specific content of these functions is described in detail in the series of operations below.
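The rotation, translation, and scaling operations in the list above are conventionally composed into a single transform applied to the 3D image's coordinates. The sketch below shows one standard way to do this with a 4x4 homogeneous matrix; it is purely illustrative and not the patent's implementation (rotation is shown about a single axis only, for brevity).

```python
# Compose scale -> rotate (about the screen z axis) -> translate into one
# 4x4 homogeneous transform, then apply it to a 3D point.

import math

def transform_matrix(angle_z_deg=0.0, tx=0.0, ty=0.0, scale=1.0):
    """Build the combined transform matrix."""
    a = math.radians(angle_z_deg)
    c, s = math.cos(a), math.sin(a)
    return [
        [scale * c, -scale * s, 0.0,   tx],
        [scale * s,  scale * c, 0.0,   ty],
        [0.0,        0.0,       scale, 0.0],
        [0.0,        0.0,       0.0,   1.0],
    ]

def apply(m, p):
    """Apply a 4x4 matrix to a 3D point (homogeneous w = 1)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(m[r][i] * v[i] for i in range(4)) for r in range(3))
```

Because the operations compose into one matrix, the renderer can re-draw the whole scene with a single updated transform each time a voice command or hand motion changes the display state.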
The voice recognition unit 351 performs various kinds of voice recognition, for example recognizing the following words:
- "Rotate", a command for changing the display mode;
- "Move", a command for changing the display mode;
- "Multi", a command for changing the display mode;
- "Stop", a command for changing the display mode;
- "Cut", a command for processing;
- "Box", a command for processing;
- names of anatomical structures (for example, an organ name such as "liver", or a blood vessel name such as "portal vein" or "hepatic artery").
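The command vocabulary above naturally splits into three categories, which a dispatcher can use to decide how to act on a recognized word. The following is a hedged sketch: the actual recognition result would come from the voice recognition software, and the word sets here simply mirror the examples listed in the text.

```python
# Classify a recognized word into display-mode commands, processing
# commands, or anatomical-structure names; anything else is rejected
# (cf. the "REJECTED" indication described earlier in this document).

DISPLAY_MODE_COMMANDS = {"rotate", "move", "multi", "stop"}
PROCESSING_COMMANDS = {"cut", "box"}
ANATOMY_NAMES = {"liver", "portal vein", "hepatic artery"}

def classify_command(word):
    """Return the category of a recognized word, or None if it is not
    accepted as a command."""
    w = word.lower()
    if w in DISPLAY_MODE_COMMANDS:
        return "display-mode"
    if w in PROCESSING_COMMANDS:
        return "processing"
    if w in ANATOMY_NAMES:
        return "anatomy"
    return None
```

Returning None for unrecognized words is what would drive the rejected-command display: the apparatus can show the recognized text and an acceptance/rejection indication side by side.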
The motion recognition unit 353 performs various motion recognition processes. For example, if the motion sensor is one that detects a hand, it detects the position and movement of the hand (fingers) within the detection space.
For inventions whose main features lie in image processing or other data processing, the hardware configuration is not limited to the specific one disclosed in the above embodiment, and various forms can be used. Note, therefore, that processing performed by other computer means, not only a tablet terminal or notebook PC, can also be the subject of one embodiment of the present invention. It will also be understood by those skilled in the art that the inventions disclosed below, mainly as descriptions of "operation", can equally be understood as inventions of an apparatus or of a computer program, with the category of expression changed; this specification therefore also discloses such inventions.
2. Operation
Next, an operation example of image display in the image processing apparatus 301 of the present embodiment will be described. As an example, a display operation on a three-dimensional medical image such as that illustrated in FIG. 5 will be described below. This three-dimensional medical image includes a liver 371 and blood vessels 375.
First, as shown in the flowchart of FIG. 4, three-dimensional medical image data is acquired in step S11. The "three-dimensional medical image data" may be data created based on data obtained by tomographic imaging of a patient with an imaging apparatus. In particular, it may be volume data obtained by volume rendering. Note that the data format of the three-dimensional image is not particularly limited and various formats can be used; as one example, the STL (Standard Triangulated Language) file format can be used.
The image data may be stored in a predetermined data storage area such as a database server, PACS, DICOM server, or workstation. As one example, the image processing apparatus 301 reads the data from a predetermined data storage area on a network and stores it in the storage unit 359 within the apparatus.
Next, the image processing apparatus 301 displays the three-dimensional medical image on the display 361 (step S12). The creation of a three-dimensional medical image can basically be performed using a known method. An image creation flow according to one embodiment of the present invention will be described later with reference to the drawings. The created three-dimensional medical image data may be stored in the image processing apparatus 301 and/or on an external server (for example, a server on the cloud).
Here, various display modes are available for displaying the medical image. For example, at least one of the following:
-displaying a given anatomical structure in a translucent state;
-displaying a given anatomical structure in an opaque state;
-displaying given anatomical structures in different colors;
-displaying a given anatomical structure with a shadow;
-displaying an image of three-dimensional coordinate axes (or an equivalent, for example a cube) on the screen.
Regarding transparency, for example, the liver is displayed in a translucent state while the blood vessels are displayed in an opaque state. When there is a tumor, the tumor may also be displayed in an opaque state. With such a display mode, making the liver translucent makes it possible to confirm the position and course of internal blood vessels that would otherwise be hidden inside the liver and could not be visually recognized. Being able to perform such confirmation is very useful, for example, in that the positional relationship between the liver, blood vessels, tumors, and the like can be confirmed well, particularly in surgery in which part of the liver is resected laparoscopically. The image processing apparatus 301 of the present embodiment is portable, and therefore a three-dimensional medical image can be checked by operating the apparatus in the operating room.
Regarding color coding, for example, the liver and the blood vessels may be displayed in different colors. When there is a tumor, the tumor may be displayed in yet another color. More specifically regarding the blood vessels, the liver, the portal vein, and the hepatic artery may each be displayed in different colors. When blood vessels are grouped, they may be displayed together in the same color.
Next, in step S13, the image processing apparatus 301 detects the position of the operator's hand so that the distance between the motion sensor 380 and the hand becomes appropriate. Specifically, the operator places a hand above the motion sensor 380. The appropriate distance between the sensor and the operator's hand (the height from the sensor to the hand) is set in advance, for example, to a range of h1 (mm) to h2 (mm). The reason for setting such an appropriate range in advance is that if the operator's hand is too close to or too far from the sensor, the hand movement may not be recognized well.
Regarding this hand position detection, the image displayed on the screen is not particularly limited, but may, as one example, be as shown in FIG. 6. In this example, a reference circle (first circle) 391 of a predetermined size is displayed at approximately the center of the screen. The first circle 391 is always displayed at a fixed size regardless of the position of the operator's hand. Meanwhile, a second circle 393 is also displayed on the screen.
The center of the second circle 393 corresponds to the position of the operator's hand. That is, when the operator's hand is directly above the motion sensor 380 (as one example), the center of the second circle 393 coincides with the center of the first circle 391; in other words, the two circles 391 and 393 are displayed concentrically.
When the operator's hand is shifted in a given direction (for example, to the right) from the position directly above the motion sensor 380, the second circle 393 is correspondingly shifted in the same direction (to the right in this example) and displayed in real time.
With such a display mode, the operator can confirm, while viewing the positional relationship between the two circles 391 and 393 on the screen, whether his or her hand is at an appropriate (horizontal) position with respect to the motion sensor 380.
The position in the height direction can be confirmed as follows. Namely, the diameter of the second circle 393 corresponds to the height of the operator's hand. For example, when the hand is at the reference height ((h1 + h2)/2, as one example), the second circle 393 is displayed so that its diameter equals that of the first circle 391. As the hand moves higher, the size of the second circle 393 decreases accordingly; conversely, as the hand moves lower, the size of the second circle 393 increases accordingly. With such a display mode, the operator can confirm whether the height of his or her hand is appropriate while viewing the size relationship between the two circles 391 and 393 on the screen.
In order to make it easier to confirm that the position is appropriate, the following display may be used. Namely, when the horizontal position of the hand, its height, or a combination thereof falls within a predetermined appropriate range, the second circle 393 may be displayed in a special manner. For example, in one embodiment it is preferable to display the second circle 393 in different colors depending on whether the hand is outside the appropriate range, as in FIG. 6(a), or within it, as in FIG. 6(b), or to switch between a blinking display and a steady display.
As described above, the step for making the distance between the motion sensor and the operator's hand appropriate is completed (step S13).
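The circle feedback of step S13 can be sketched as follows. This is an illustrative sketch only: the inverse-proportional law mapping hand height to circle diameter is an assumption (the embodiment specifies only that a higher hand yields a smaller circle), and the function and parameter names are hypothetical.

```python
# Hypothetical sketch of the second-circle feedback of step S13.
# The circle's center follows the hand's horizontal offset, its diameter
# shrinks as the hand rises (inverse-proportional law assumed here), and
# a flag marks whether the hand height lies within the range h1..h2.

def second_circle(hand_x, hand_y, hand_h, h1, h2, ref_diameter):
    ref_h = (h1 + h2) / 2.0                    # reference height: same diameter as circle 391
    diameter = ref_diameter * ref_h / hand_h   # higher hand -> smaller circle
    in_range = h1 <= hand_h <= h2
    return {"center": (hand_x, hand_y), "diameter": diameter, "in_range": in_range}
```

The `in_range` flag would drive the special display (color change or blinking) described above.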
Next, in step S14, a voice input for selecting a display mode is received. As one example, the following words may be recognized as voice input:
-"Move"
-"Stop"
-"Rotate"
-"Multi"
Note that the voice recognition function may be configured to be turned ON using, as a trigger, the operator's hand entering the predetermined appropriate range in step S13. In this configuration, the voice recognition function is OFF when the position of the operator's hand is not within the predetermined appropriate range, and is ON only while it is within that range. With a configuration in which the voice recognition function is turned ON only under a predetermined condition in this way, voice input caused by erroneous recognition not intended by the operator can be prevented.
As schematically shown in FIG. 1, it is also preferable that, in order to notify the operator that the voice recognition function is ON, an indication such as "voice recognition in progress" is displayed on the screen.
Note that, in one aspect of the present invention, the hand position detection step S13 may be omitted.
<Zoom (enlargement and reduction) / Pan>
To change the size or position of the displayed three-dimensional medical image, the following procedure is used.
First, the operator utters "Move" with the voice recognition function ON. The image processing apparatus 301 analyzes the voice input from the microphone 370 with the voice recognition unit 351 and recognizes the word "Move". In response, it transitions to the "zoom/pan" mode (step S15).
In the "zoom/pan" mode, the image processing apparatus 301 then waits for motion input by the operator's hand. The image processing apparatus 301 uses the motion sensor 380 and the motion recognition unit 353 to recognize the position and movement of the operator's hand in real time. When the operator moves the hand upward (that is, when the hand moves from the initial height h0 to a higher height hH), the image is gradually reduced in accordance with that movement. Conversely, when the operator moves the hand downward (from the initial height h0 to a lower height hL), the medical image is gradually enlarged in accordance with that movement.
When the hand is moved horizontally, the three-dimensional medical image is panned (translated) in accordance with the direction and amount of that movement.
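The zoom/pan mapping just described can be sketched as follows. This is an illustrative sketch only: the exponential scale law and the gain constants are assumptions, not taken from the embodiment, and the names are hypothetical.

```python
# Hypothetical sketch of the "zoom/pan" mode: vertical hand movement scales
# the image (up = reduce, down = enlarge), horizontal movement translates it.
import math

def zoom_pan_update(state, dx, dy, dh, zoom_gain=0.005, pan_gain=1.0):
    """state: dict with 'scale' and 'offset' (x, y); dh > 0 means the hand moved up."""
    new_scale = state["scale"] * math.exp(-zoom_gain * dh)  # hand up -> image smaller
    ox, oy = state["offset"]
    return {"scale": new_scale,
            "offset": (ox + pan_gain * dx, oy + pan_gain * dy)}
```

Because rotation is deliberately absent from this mode, the image's orientation is preserved no matter how the hand moves, matching the behavior described below.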
As described above, in this mode the image is reduced or enlarged by moving the hand up and down, and translated by moving the hand horizontally. According to the configuration of the present embodiment, zooming/panning of the image (and also the rotation described below) can be performed using motion input, which allows analog-like input to be performed intuitively compared with input methods such as voice recognition or numerical input. It is therefore intuitive for the operator, excellent in operability, and of practical benefit.
Further, since the motion sensor 380 allows input without touching the device, input can be performed while keeping the operator's hands clean. Such a configuration is very advantageous in that, for example, a doctor can check images during surgery by using the image processing apparatus in the operating room.
Taking the liver as a particular example, a plurality of blood vessels branch through the liver. Therefore, in a case where part of the liver parenchyma is to be resected, the positional relationship between the blood vessels and the tumor must be confirmed sufficiently so as not to damage the blood vessels more than necessary. In this regard, according to the image processing apparatus of the present embodiment, the positional relationship of blood vessels and the like can be confirmed while viewing the three-dimensional medical image during the operation. Furthermore, in a three-dimensional medical image there are cases where, for example, a blood vessel lies on the near side and a tumor is hidden behind it (the tumor is not shown, but see FIG. 5 for reference). Even in such a case, the image processing apparatus of the present embodiment can rotate the image in the "rotation" mode so that the tumor can be confirmed. In particular, in the configuration of the present embodiment, the image is not rotated in predetermined angular increments; it can be rotated freely (steplessly) by an arbitrary angle through motion input, which enables good observation.
Note that, in one embodiment, it is preferable that "rotation" of the three-dimensional medical image (described in detail below) is not performed during this "zoom/pan" mode. When using this mode, the operator often wants to perform only enlargement/reduction or translation. It is therefore easier for the operator if rotation is prohibited and enlargement, reduction, and translation are performed while the desired orientation is maintained.
In the above description, a mode in which both zooming and panning can be performed has been described, but the present invention is not limited to this. A mode in which only one of them can be performed is also possible.
<Rotation>
To rotate the displayed three-dimensional medical image, the following procedure is used:
First, the operator utters "Stop" to cancel the "zoom/pan" mode described above. The image processing apparatus 301 accepts this through the voice recognition function, cancels the "zoom/pan" mode, and transitions to a state in which it accepts another mode.
In this state, the operator utters "Rotate". The image processing apparatus 301 accepts this through the voice recognition function and transitions to the "rotation" mode. The image processing apparatus 301 then waits for motion input by the operator's hand.
In the "rotation" mode, the image processing apparatus 301 recognizes the position and movement of the operator's hand in real time, and rotates the three-dimensional medical image about a predetermined rotation axis (the X, Y, or Z axis) in accordance with that movement. Specifically, it recognizes horizontal movement of the operator's hand, or movement of the hand along the surface of a virtual sphere, and rotates the three-dimensional medical image by an angle corresponding to the direction, speed, and amount of that movement.
In this rotation mode as well, it is preferable in one embodiment that only rotation is allowed while panning (translation) and zooming (enlargement/reduction) are prohibited. This makes it possible, for example, to rotate the image to a desired orientation while maintaining a given image size, and then perform subsequent image processing or observation.
To cancel the "rotation" mode, the operator likewise utters "Stop", as described above.
In the description so far, when one of "move" and "rotate" is performed, the other is not. However, a "Multi" mode may be provided so that both inputs can be performed simultaneously. In this mode, "zoom", "pan", and "rotation" are all performed in accordance with the movement of the operator's hand.
Specifically, when the operator utters "Multi", the image processing apparatus 301 recognizes this and transitions to the "multi" mode, and then rotates, translates, and enlarges/reduces the three-dimensional medical image in accordance with the motion input of the operator's hand.
Note that a function for rotating the three-dimensional medical image by voice input alone, without motion input, may also be implemented. For example, by uttering "Rotate", "Left", "15°", the image processing apparatus 301 recognizes this and rotates the image by 15° about a predetermined rotation axis (for example, the Z axis extending in the vertical direction of the screen). To rotate 15° upward about an axis extending in the horizontal direction, for example, "Rotate", "Up", "15°" may be input by voice.
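The voice-only rotation command can be sketched as follows. This is an illustrative sketch only: the axis assignment (Z vertical on screen, X horizontal) follows the text, but the sign conventions and the accumulated-angle representation are assumptions.

```python
# Hypothetical sketch of the voice-only rotation command
# ("Rotate", "Left", "15°"): the direction word selects a screen axis
# and a sign, and the angle is accumulated per axis.

DIRECTION_TO_AXIS = {
    "left":  ("Z", +1), "right": ("Z", -1),   # about the vertical screen axis
    "up":    ("X", +1), "down":  ("X", -1),   # about the horizontal screen axis
}

def apply_voice_rotation(angles, direction, degrees):
    """angles: accumulated rotations per axis, e.g. {'X': 0, 'Z': 0}."""
    axis, sign = DIRECTION_TO_AXIS[direction.lower()]
    angles = dict(angles)                     # do not mutate the caller's state
    angles[axis] += sign * degrees
    return angles
```

Such discrete voice commands complement the stepless motion-based rotation described above.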
<Other voice inputs>
The image processing apparatus 301 of the present embodiment can also select an anatomical structure, or change the transparency of the selected structure, by voice input. This will be described again after the operations through the touch panel have been explained.
(Various functions through touch panel input)
The image processing apparatus 301 displays the anatomical structures in the three-dimensional medical image as independent objects. This makes it possible to select each of them individually and to switch their display on and off. Regarding the blood vessels, for example, the hepatic artery, portal vein, and hepatic vein may be grouped so that they can be selected at once, or they may be selectable individually.
(Rotation function)
The three-dimensional medical image can also be rotated by an operation on the touch panel. When the operator touches the touch panel and moves a finger, the image processing apparatus 301 rotates the three-dimensional medical image accordingly.
(Enlargement/reduction function)
The image processing apparatus 301 may also enlarge or reduce the image when the operator touches two points on the screen and performs an operation that increases or decreases the distance between the two points (a pinch-out or pinch-in operation).
(Display transparency switching function)
The image processing apparatus 301 may change the display density of an anatomical structure (for example, the liver) when the operator touches it. Specifically, the display may be switched between two states: a normal opaque display state and a translucent state. That is, as one example, touching once may put the structure into the translucent state, and touching again may return it to the normal display state.
As another aspect, the transparency may be set in a plurality of stages, for example 0%, 30%, 70%, and 100% (hidden), and the display may switch through these stages cyclically each time the structure is touched. In this case, 100% transparency (that is, the hidden state) may be excluded from the cycle. Naturally, the specific transparency values can be changed as appropriate. In short, it suffices that the transparency is set in at least a plurality of stages and is switched among them.
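The cyclic switching just described can be sketched as follows. This is an illustrative sketch only: the stage values match the example in the text, while the function name and the `skip_hidden` option (excluding the 100% stage from the cycle) are hypothetical.

```python
# Hypothetical sketch of cyclic transparency switching: each touch advances
# to the next stage in a fixed list, wrapping around; optionally the fully
# hidden stage (100%) is excluded from the cycle.

def next_transparency(current, stages=(0, 30, 70, 100), skip_hidden=False):
    usable = [s for s in stages if not (skip_hidden and s == 100)]
    i = usable.index(current)
    return usable[(i + 1) % len(usable)]
```

Because the cycle always returns to the opaque state, no separate "reset" icon is needed, which is the advantage noted below.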
With such a display transparency switching function, the transparency can be switched simply by touching an arbitrary anatomical structure. The operation can therefore be performed simply and intuitively compared with methods that require separately selecting some icon or command in order to switch the transparency.
Also, when the display switches cyclically as described above, there is the advantage that no separate icon or the like needs to be selected to return to the original display state. Furthermore, such cyclic display switching can be realized merely by changing the display color of the selected object, which is also preferable in that the image processing is simplified and the computation can be performed with little memory.
The gesture for changing the transparency is not limited to the one described above. For example, when the operator swipes a finger (as one example) in the vertical or horizontal direction, the image processing apparatus may accept that input and change the transparency accordingly. In this case, the transparency may be set in several stages, for example 0%, 30%, 70%, and 100% (hidden), or instead may change steplessly (continuously).
(Show/hide switching function)
When the operator has touched a given anatomical structure for a certain time or longer (as one example), the image processing apparatus 301 puts that anatomical structure into a "selected state". To indicate the "selected state", the anatomical structure (for example, the liver) may be displayed in a color different from its initial state, or may be displayed blinking.
When the operator moves a fingertip (as one example) toward the edge of the screen while touching the selected anatomical structure (for example, the liver) — a swipe operation, drag operation, or the like — the image processing apparatus 301 hides that anatomical structure. In this example, the liver is hidden and only the three-dimensional image of the blood vessels and the like remains.
Such a function is useful when the operator wants to see only a desired anatomical structure. In addition, with a scheme in which, as in the present embodiment, an anatomical structure can be hidden simply by directly selecting and moving it, the operation can be performed simply and intuitively compared with schemes in which a mode for switching the display on and off is entered only after, for example, selecting some icon.
(Cut function)
The cut function is performed as follows. An example in which the liver is cut and part of it is then hidden will be described below. FIG. 8 is a flowchart of the series of operations.
The image processing apparatus 301 first displays a three-dimensional medical image such as that in FIG. 7A (step S1). Then, when the operator touches two points on an arbitrary anatomical structure (here, the liver 71) as shown in FIG. 7B, the image processing apparatus 301 determines that state, namely that two points are being touched (step S2). As for the timing, the two points may be touched simultaneously or substantially simultaneously.
Next, in order to activate the function only in the case of a so-called long press, the image processing apparatus 301 determines whether the state in which the two points are touched has continued for a certain time or longer (step S3).
When the image processing apparatus 301 determines in step S3 that the state has continued for a certain time or longer, it indicates on the screen, in a predetermined display manner, that the two touched points P1 and P2 have been designated. The "predetermined display manner" may be anything; for example, (i) both the points P1 and P2 and the line L1 connecting them may be displayed, or (ii) only the points P1 and P2, or only the line L1, may be displayed. The points P1 and P2 may be displayed not as mere small dots but as somewhat larger graphical images as in FIG. 7B (any shape, such as a circle, rectangle, polygon, or star, may be used; here a circle is illustrated) so that the designated positions can be seen clearly.
The image processing apparatus 301 may keep displaying the designated points P1 and P2 as shown in FIG. 7C even after the operator releases the screen. It may also be configured to accept fine adjustment of the positions of the points P1 and P2. So that it can be seen that the apparatus is in a mode accepting fine adjustment, for example, the circular graphical images of P1 and P2 and/or the line L1 may be displayed blinking. FIG. 7C illustrates, as one example, a state in which the point P2 has been moved slightly and finely adjusted to a point P2′.
This fine adjustment may be performed by the operator moving the graphical images of the points P1 and P2, for example with a finger (operation on the touch panel). As another aspect, motion input may be used to finely adjust the positions of the points P1 and P2 without touching the device. Displaying the cutting reference line L1 by voice input will be described again later. Here, the cut function and the like of the present embodiment will first be described on the premise of touch panel operation.
After the designation of the points P1 and P2 is completed in this way, in step S4 the operator touches, for example, a predetermined icon on the screen (for example, an "OK" icon). The cut function then cuts the liver along the line L1 connecting the points P1 and P2, as shown in FIG. 7D (step S5).
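The cut of step S5 can be sketched in its simplest form as a side-of-line test in screen coordinates: points of the structure falling on one side of the line through P1 and P2 belong to one part, and points on the other side to the other part. This is an illustrative sketch only; splitting an actual volume-rendered structure is considerably more involved, and the function names are hypothetical.

```python
# Hypothetical sketch of splitting a structure's screen-space points into
# two parts (71-1 and 71-2) by the line L1 through P1 and P2, using the
# sign of the 2D cross product.

def side_of_line(p1, p2, q):
    """> 0 if q is left of the directed line p1->p2, < 0 if right, 0 if on it."""
    return (p2[0] - p1[0]) * (q[1] - p1[1]) - (p2[1] - p1[1]) * (q[0] - p1[0])

def split_points(p1, p2, points):
    part1 = [q for q in points if side_of_line(p1, p2, q) >= 0]  # left or on L1
    part2 = [q for q in points if side_of_line(p1, p2, q) < 0]   # right of L1
    return part1, part2
```

The two resulting groups correspond to the parts that become independently operable objects after the cut.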
The first part 71-1 and the second part 71-2, divided in two across the line L1, each become operable as independent anatomical structures. As methods other than the above operation, the function may be executed, for example, (i) by touching a predetermined area on the screen instead of an icon, or (ii) by touching multiple times (a double tap, as one example). It may also be executed by voice input.
 Since the parts can be operated as independent anatomical structures, when, for example, the first part 71-1 is touched (step S6), only that part is selected by the function described above, as shown in FIG. 7E. Its display density is then switched; specifically, only the first part 71-1 is displayed translucently. Touching it again restores the original display.
 Further, when, for example, the first part 71-1 is long-pressed and then swiped or dragged toward the periphery of the screen, that part is hidden, leaving only the second part 71-2 and the blood vessels 73 and 75. The hidden part may be displayed as a thumbnail image 66, as illustrated in FIG. 7F.
(Region designation)
 The image processing apparatus 301 designates a region of an anatomical structure in response to the following operation by the operator. FIG. 9A shows a state in which two points P1 and P2 are touched, as in the operation described with reference to FIG. 7B (the operator's fingers remain touching the two points on the screen, but are omitted from the illustration).
 From this state, when the operator then moves the two fingers, for example as shown in FIG. 9B (simultaneous movement of the two fingers, in one non-limiting example), the image processing apparatus 301 identifies the positions of the two points P1' and P2' after the movement and designates a substantially quadrangular region based on them. Specifically, the quadrangle enclosed by four points, namely the two points P1 and P2 before the movement and the two points P1' and P2' after the movement, is designated as the region.
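The four-point region designation above can be sketched as follows (the helper names and the point-in-polygon method are illustrative assumptions; the patent does not specify an algorithm). The quadrilateral is formed from the two touch points before the drag and the two after it, and a ray-casting test decides whether a screen coordinate falls inside the designated region Sa1.

```python
def region_quad(p1, p2, p1d, p2d):
    """Order the four touch points into a quadrilateral outline.

    Connecting p1 -> p2 -> p2' -> p1' yields a non-self-intersecting
    outline when the two fingers are dragged together, as in FIG. 9B.
    """
    return [p1, p2, p2d, p1d]

def point_in_region(pt, quad):
    """Ray-casting point-in-polygon test for a screen coordinate."""
    x, y = pt
    inside = False
    n = len(quad)
    for i in range(n):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray
            xc = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xc:
                inside = not inside
    return inside

# P1, P2 before the drag; P1', P2' after the drag
quad = region_quad((0, 0), (10, 0), (0, 8), (10, 8))
```

The same test could then be applied to each projected voxel to split the structure into the designated region and the remainder.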
 Here as well, as above, it is preferable that the image processing apparatus 301 be configured so that the substantially quadrangular outline of the designated region remains on the screen even after the operator releases the hand, and so that each of the points P1, P2, P1', and P2' can be moved individually for fine position adjustment. As a method of confirming the designated region, for example, the operator may touch a predetermined icon on the screen (for example, an icon for an "OK" input).
 When the positions of the four points P1, P2, P1', and P2' have been designated as shown in FIG. 9B, the circular (one example) graphical images of the points P1, P2, P1', and P2' and the lines connecting them may blink so that each position can be finely adjusted.
 The designated region Sa1 (see FIG. 9B) is separated from the other parts and can be operated as an independent object. It is therefore possible to change the display density of the region alone, or to switch its display on and off. With such a function, for example, hiding only the region Sa1 makes it possible to observe the inner blood vessels 73 and 75 and to confirm the relationship between the blood vessels 73 and 75 and the liver 71.
 The region designation need not necessarily be performed with a quadrangle; a triangular shape, or a polygonal shape with five or more sides, may be designated as the region.
 In the above description, medical images of the liver and its surroundings were taken as an example, but the anatomical structure is of course not limited to any particular one in the present invention. For example, a medical image of the subject's head may be displayed and various image processing may be performed on it.
[Other functions using voice input]
(Display/hide switching)
 It is also preferable that the display/hide switching function and the cut function described so far can be executed by voice input or the like alone, without any input on the touch panel.
 First, regarding the selection of a predetermined anatomical structure, the selection is performed not by touch but when the operator speaks the name of the target and it is recognized. For example, when the operator says "liver", the image processing apparatus 301 recognizes the speech and places the liver in a selected state. To indicate the "selected state", the anatomical structure (for example, the liver) may be displayed in a color different from its initial state, or may blink.
 Then, when the operator wishes to change the transparency of the liver, the operator utters, for example, "transparent". The image processing apparatus 301 recognizes the speech and switches the liver to a translucent display. In this state, the other anatomical structures (in one example, blood vessels and a tumor) continue to be displayed opaquely. With this configuration, blood vessels and the like that would otherwise be hidden behind the liver and invisible can also be confirmed.
 In one embodiment, the transparency may be changed in accordance with the distance from a motion sensor to the operator's hand. That is, the transparency may gradually increase (or decrease) as the hand approaches the motion sensor, and conversely gradually decrease (or increase) as the hand moves away from it. Specifically, when the operator utters "transparent" and the image processing apparatus 301 recognizes the speech, the apparatus enters a mode that accepts the motion input described above. The apparatus then detects the distance from the motion sensor to the operator's hand and changes the transparency accordingly.
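The distance-to-transparency mapping above might be realized as in the following sketch (the working range of the sensor and the direction of the mapping are assumptions; the text only says the transparency changes gradually as the hand approaches or recedes).

```python
def opacity_from_distance(distance_mm, near_mm=100.0, far_mm=400.0):
    """Map sensor-to-hand distance to a display opacity in [0, 1].

    Here a closer hand gives a more transparent (lower-opacity) display;
    the opposite convention, also contemplated in the text, would simply
    invert the interpolation.
    """
    # Clamp to the assumed working range of the motion sensor
    d = max(near_mm, min(far_mm, distance_mm))
    # Linear interpolation: near -> 0.0 (fully transparent), far -> 1.0
    return (d - near_mm) / (far_mm - near_mm)
```

Each rendered frame would then draw the selected structure (for example, the liver) with the opacity returned for the currently detected hand distance.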
(Line cut/box cut)
 To perform a line cut, the operator utters, for example, "line cut". The image processing apparatus 301 recognizes the speech and displays a cutting reference line on the screen. This reference line may be like the line L1 in FIG. 7B.
 The image processing apparatus 301 then waits for motion input. By motion input, the operator can change the position, length, orientation, and the like of the cutting reference line. The reference line can thus be set at a desired position without contact.
 Next, by saying, for example, "right cut", the region to the right of the reference line is removed. Saying "left cut" instead removes the region to the left of the reference line. Instead of removal, a translucent display may be used. FIG. 5(b) shows an example: the portion 371-2 on the right side of the reference line remains opaque, while the portion 371-1 on the left side is displayed translucently.
 To perform a box cut, the operator utters, for example, "box cut". Although detailed illustration is omitted, the image processing apparatus 301 recognizes the speech and displays a rectangle (one example) serving as the reference for excision on the screen. The rectangle may have only one predetermined size, or a plurality of sizes, for example large, medium, and small, may be prepared.
 This box cut excises the target anatomical structure to a predetermined depth. While the rectangle serving as the reference for excision is displayed on the screen, the image processing apparatus 301 waits for motion input. The size and shape of the rectangle may be fixed, but may also be changeable. For example, the size and shape may be made changeable by allowing the corner positions of the initially displayed default rectangle to be moved. Motion input can be used to move the corner positions.
 In this state, when the operator, for example, moves a hand closer to the motion sensor 380, a substantially rectangular-parallelepiped hole, with the rectangle as its outline and a depth corresponding to the travel of the hand, is correspondingly formed in the liver. This makes it possible to obtain a medical observation image in which part of the liver is excised while the internal blood vessels are not.
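A minimal voxel-level sketch of this box cut follows: a rectangular hole whose depth follows the hand's travel is carved out of a liver mask, while vessel voxels inside the box are left untouched so they remain visible in the hole. The array shapes, depth scaling, and the convention that the hole starts at z = 0 are all illustrative assumptions.

```python
import numpy as np

def box_cut(liver, vessels, x0, x1, y0, y1, hand_travel_mm, mm_per_voxel=1.0):
    """Return a copy of the boolean liver mask with a box of the given
    footprint excised from the top surface (z = 0) down to the depth
    implied by the hand's travel; vessel voxels are never removed."""
    depth = min(int(hand_travel_mm / mm_per_voxel), liver.shape[2])
    cut = liver.copy()
    region = np.zeros_like(liver, dtype=bool)
    region[x0:x1, y0:y1, :depth] = True
    # Remove liver voxels in the box, except where a vessel runs through it
    cut[region & ~vessels] = False
    return cut

liver = np.ones((8, 8, 8), dtype=bool)
vessels = np.zeros((8, 8, 8), dtype=bool)
vessels[4, 4, :] = True  # a vessel running straight through the volume
cut = box_cut(liver, vessels, 2, 6, 2, 6, hand_travel_mm=3.0)
```

Deepening the hole as the hand keeps approaching the sensor would simply mean re-running the carve with a larger `hand_travel_mm`.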
 Note that, with an excised portion of a predetermined depth formed in this way, the entire three-dimensional medical image can also be rotated by a predetermined angle. For example, when the operator utters "up", "15°", or the like in a rotation mode, the image processing apparatus 301 recognizes the speech and rotates the medical image, hole included, by 15°. This configuration is useful because the internal structure of the hole (for example, part of the liver is excised but the blood vessels are displayed) can be observed from different angles.
 Although removal with a rectangular outline has been described above, the outline may of course be defined by a triangle, a polygon, a circle, an ellipse, or any other geometric shape.
 In the above, the region designated as the box is excised; conversely, a configuration may be adopted in which only the region designated as the box is left and the other regions are hidden.
[Section B: Collective handling of a plurality of anatomical structures]
1. Problem addressed by the invention of this section
 As a three-dimensional medical image such as that illustrated in FIG. 5, volume data or the like is used, as already described. In such a three-dimensional medical image (that is, one including several anatomical structures of different types), the CT values (signal values) of the data differ between the liver and the blood vessels. Even among blood vessels, the CT values (signal values) differ among, for example, arteries, veins, and the portal vein.
 This is because, in fluoroscopic imaging using a contrast agent, image data is acquired by performing fluoroscopic imaging a predetermined time after the contrast agent is injected, and differences in the arrival time of the contrast agent, among other factors, produce differences in the CT values (signal values) of parts such as the arteries, veins, portal vein, and liver parenchyma. In the conventional technique, threshold setting and filtering are performed for each blood vessel and organ to create volume data for each part.
 However, even for blood vessels alone, when arteries, veins, and the portal vein are registered as separate data because of their different CT values (signal values), relatively laborious work can be required in the following case. Some viewer functions for three-dimensional medical images can switch, for example, between a display in which the blood vessels are emphasized and one in which they are not. This makes it possible, for example, to show or hide the peripheral portions of the blood vessels (which have low CT values (signal values)) as needed, enabling observation as required (see FIG. 10). In the case of a three-dimensional medical image including a plurality of blood vessels of different types, however, the display mode of all the blood vessels cannot be changed uniformly unless the operator resets the display threshold or performs filtering for each individual blood vessel, and such a display change is therefore difficult to perform easily.
 In actual observation, on the other hand, it may be preferable to be able to handle the structures in groups, for example the vascular system as one group and the parenchymal system as another. The invention of this section therefore provides the image processing apparatus with the following functions.
2. Functions and operation
 The image processing apparatus of the present embodiment reads volume data based on information obtained by imaging a patient (see also step S1 in FIG. 3).
 It then analyzes the signal values, CT values, and standard deviation (SD) within the volume. For example, if the average CT value is 300 HU or more, the structure is automatically determined to be an artery, and if the average CT value is 100 HU or less, it is automatically determined to be an organ. In addition to the CT values, the shape of the CT-value histogram is also recognized. In general, arteries tend to have a high peak and a narrow width (distribution), while the portal vein and veins tend to have a low peak and a wide width (distribution). Automatic recognition of the blood vessel type can therefore be realized based on such factors.
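The classification rule above can be sketched as follows. The 300 HU and 100 HU cut-offs come from the text; the histogram-shape heuristic (tall narrow peak versus low broad one) is paraphrased here as a peak-height-to-spread ratio with an assumed threshold.

```python
def classify_structure(mean_hu, peak_height, hu_spread):
    """Guess the structure type from simple CT-value statistics.

    mean_hu:     average CT value of the candidate structure (HU)
    peak_height: height of its CT-value histogram peak
    hu_spread:   width (e.g. standard deviation) of the histogram
    """
    if mean_hu >= 300:
        return "artery"   # high average CT value (per the text)
    if mean_hu <= 100:
        return "organ"    # low average CT value, e.g. liver parenchyma
    # In between, fall back on histogram shape: arteries tend to show a
    # tall narrow peak, the portal vein and veins a low broad one.
    if peak_height / hu_spread > 10.0:  # assumed shape threshold
        return "artery"
    return "vein_or_portal"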
 As described above, arteries, veins, the portal vein, and the like, which inherently have different CT values (signal values), can be automatically distinguished and registered as data by the image processing apparatus. Different colors may also be automatically assigned to the arteries, veins, portal vein, and so on, so that each is displayed in its own color.
 Further, although a blood-vessel histogram inherently exists for each of the arteries, veins, portal vein, and so on, these may be integrated and normalized so that all the blood vessels (or, in another embodiment, any two or more of them) are represented by a single histogram. Specifically, as one example, the average value, centroid, or the like of each histogram may be calculated, and a single histogram may be created by shifting each lower histogram as a whole so that it is aligned with the higher one.
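One way to realize the "shift the lower histogram to match the higher one" idea, aligning by mean as the text exemplifies, is sketched below; the sample values are of course invented.

```python
import numpy as np

def merge_vessel_values(*vessel_hu_arrays):
    """Shift each vessel's CT values so that all means match the highest
    mean, then concatenate them so that a single histogram can represent
    every vessel at once."""
    means = [float(np.mean(a)) for a in vessel_hu_arrays]
    target = max(means)
    shifted = [np.asarray(a, dtype=float) + (target - m)
               for a, m in zip(vessel_hu_arrays, means)]
    return np.concatenate(shifted)

artery = np.array([300.0, 310.0, 320.0])  # mean 310 HU (highest)
portal = np.array([150.0, 160.0, 170.0])  # mean 160 HU, shifted up by 150
merged = merge_vessel_values(artery, portal)
```

A histogram of `merged` (for example with `np.histogram`) then stands in for the individual per-vessel histograms.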
 When a plurality of blood vessels (or other anatomical structures) are integrated and represented by a single histogram in this way, the display can be changed collectively by operating only that one histogram, rather than operating on the histogram of each individual blood vessel. That is, for example, when the peripheral portions of the blood vessels need not be displayed, the image processing apparatus accepts a predetermined input from the operator and performs processing such as not displaying portions below a certain reference value, whereby the peripheral portions of the arteries, veins, portal vein, and so on can be hidden all at once (see, for example, FIG. 10(b)). Conversely, when it is desired to display the blood vessels down to their peripheral portions with emphasis, the lower limit of the CT values to be displayed may be set lower, as shown in FIG. 10(a) (where the threshold is 130 HU).
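With all vessels gathered into one value set, a single lower threshold hides or reveals every vessel's peripheral portions at once, as in FIG. 10. A sketch with invented sample values follows (130 HU is the threshold quoted for FIG. 10(a); the 200 HU alternative is an assumption):

```python
import numpy as np

def visible_mask(hu_values, lower_hu):
    """Boolean mask of voxels kept on screen: True where the (merged)
    CT value reaches the display threshold."""
    return np.asarray(hu_values) >= lower_hu

values = np.array([90.0, 140.0, 250.0, 400.0])
shown_detailed = visible_mask(values, 130.0)  # peripheral branches kept
shown_coarse = visible_mask(values, 200.0)    # peripheral branches hidden
```

Because the mask is computed from the merged value set, one slider or gesture adjusting `lower_hu` changes the display of arteries, veins, and the portal vein together.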
 With this configuration, there is no need to change the display mode of each of the arteries, veins, portal vein, and so on individually, so the operability is very good and the configuration is of practical benefit.
 The above-described "predetermined input from the operator" may be performed, for example, by operating an image button such as an icon, a cursor, or a slider on the screen. Alternatively, a predetermined gesture of the operator's fingers on the touch panel may be recognized and the input performed based on it.
 For example, in a mode for changing the blood-vessel display, the display of the blood vessels may be changed when the operator touches the touch panel with several fingers and moves them simultaneously in a predetermined direction. More specifically, moving several fingers simultaneously toward the top of the screen (a first direction) may cause the blood vessels to be displayed down to their peripheral portions, while moving the fingers downward (a second direction) may conversely cause the peripheral portions of the blood vessels (more precisely, also the vicinity of the outer edges of the thick portions of the blood vessels) to disappear.
 It is also preferable that the above operation can be performed not by touch-panel operation but by motion input via the motion sensor 380. That is, with this configuration, the display of the blood vessels (one example; other anatomical structures may be used) can be switched by voice input and motion input alone, so there is no need to touch the touch panel and the three-dimensional medical image can be observed while cleanliness is maintained.
[Section C: Other functions]
(1) 3D pointer
 One use of a three-dimensional medical image such as that illustrated in FIG. 5 is, for example, for a plurality of medical personnel to confirm the course of blood vessels (one example) before an operation and to simulate the actual operation.
 In the case of a two-dimensional medical image, for example, a pointer can be made to appear on the screen as an image and positioned at a predetermined site so that that site can be attended to. In a three-dimensional medical image, however, the pointer must be placed at an arbitrary site not within a plane but within a three-dimensional space. An operation that moves a pointer to an arbitrary site within a three-dimensional space in this way is relatively difficult to perform with an input interface such as a mouse or a touch panel.
 In the present embodiment, therefore, the pointer may be placed three-dimensionally using motion input. Specifically, the medical image processing apparatus 301 first accepts an input of "pointer" (one example) through the voice recognition function. Then, as one example, a stereoscopically rendered pointer is displayed on the screen.
 For the vertical and horizontal directions on the screen, the pointer may be moved in accordance with the movement of the operator's hand within a horizontal plane. For the depth direction, the pointer may move toward the back of the three-dimensional medical image when the operator brings the hand closer to the motion sensor 380, and toward the front when the operator moves the hand away.
 Regarding the display of the pointer, a display mode may be adopted in which the display size gradually decreases as the pointer moves toward the back and gradually increases as it moves toward the front.
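The motion-driven 3D pointer described above might be computed as in the following sketch: horizontal hand position drives the on-screen x/y, sensor distance drives depth, and the drawn size shrinks as the pointer recedes. All ranges and the size law are illustrative assumptions.

```python
def pointer_state(hand_x, hand_y, hand_dist_mm,
                  near_mm=100.0, far_mm=400.0, base_size=20.0):
    """Return (x, y, depth, draw_size) for the 3D pointer.

    depth is normalized to [0, 1]: a closer hand pushes the pointer
    deeper into the volume, a receding hand pulls it toward the front.
    """
    d = max(near_mm, min(far_mm, hand_dist_mm))
    depth = 1.0 - (d - near_mm) / (far_mm - near_mm)
    # Deeper pointer is drawn smaller, as described above
    draw_size = base_size * (1.0 - 0.5 * depth)
    return hand_x, hand_y, depth, draw_size
```

The renderer would place the pointer at the returned coordinates each frame and scale its stereoscopic glyph by `draw_size`.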
(2) Exporting a schema image
 In diagnosis and examination, a two-dimensional drawing called a schema image, representing a body part of a patient, is sometimes used. An image processing apparatus according to one aspect of the present invention may therefore have a function of exporting a schema image.
 For example, when a predetermined input is made by the operator, the image processing apparatus creates a corresponding schema image using the data of the three-dimensional medical image (see, for example, FIG. 5). The schema image may be any two-dimensional image, such as a line drawing, a monochrome image, or a color image. Examples of the predetermined input by the operator include various inputs such as touching an icon on the screen, voice input, and a predetermined gesture input via the motion sensor.
 As an example of creating a corresponding two-dimensional schema image from the data of the three-dimensional medical image, for a medical image displayed in an orientation such as that of FIG. 5 (one example), the currently displayed image may be flattened into two dimensions as it is and the data written out. At this time, contour extraction processing may be performed to create a line drawing. The data may be in any format; for example, the PDF (Portable Document Format) format, or any other image format such as GIF, PNG, or JPEG, can be used. A doctor can write sketches, findings, and the like on the schema image created in this way, for example with a touch pen or a finger.
 The schema image created by the image processing apparatus can be sent out from the apparatus and stored in a predetermined storage area connected on the network (see FIG. 3). For example, it may be imported as part of the materials of an electronic medical record.
(3-1) Concealment
 The image processing apparatus of the present embodiment is a portable apparatus such as a tablet terminal and, in some cases, can be taken outside the hospital and used there. Such a configuration may be useful, for example, when simulating a procedure outside the hospital while confirming the three-dimensional medical image of a specific patient. When the apparatus is taken outside the hospital, however, it is necessary from a security standpoint that the internal information be concealed.
 An image processing apparatus according to one aspect of the present invention therefore preferably has the following functions: (a) a function of recognizing the current position of the apparatus; (b) a function of determining, based on that position, whether the apparatus is outside the hospital (or whether it is not inside the hospital); and (c) a function of automatically hiding predetermined information held in the apparatus when it is determined to be outside the hospital (or determined not to be inside the hospital).
 The information to be hidden includes, for example, information (identification information) that at least includes information capable of identifying the patient. Instead of merely hiding the information, the target information may be encrypted, or access to it may be prohibited.
 As a method of determining whether the apparatus is inside or outside the hospital, for example, whether the apparatus is within range of the hospital's wireless network system may be used as the criterion.
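A sketch of this position-based concealment decision follows: the apparatus treats "connected to one of the hospital's wireless networks" as "inside the hospital" and masks identifying fields otherwise. The network names and record fields are entirely hypothetical.

```python
HOSPITAL_NETWORKS = {"hosp-ward-3f", "hosp-or-wing"}    # assumed SSIDs
IDENTIFYING_FIELDS = {"name", "patient_id", "address"}  # assumed fields

def redact_record(record, current_ssid):
    """Return the record unchanged inside the hospital; otherwise return
    a copy with the patient-identifying fields masked."""
    if current_ssid in HOSPITAL_NETWORKS:
        return dict(record)
    return {k: ("***" if k in IDENTIFYING_FIELDS else v)
            for k, v in record.items()}

record = {"name": "T. Sato", "patient_id": "P-1234", "modality": "CT"}
outside = redact_record(record, current_ssid="cafe-wifi")
inside = redact_record(record, current_ssid="hosp-ward-3f")
```

The same decision point could instead trigger encryption or an access block, as the text notes, rather than masking the displayed values.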
 A configuration in which predetermined information is automatically concealed when the apparatus is taken outside the hospital in this way contributes to preventing leakage of patient information and the like, and is therefore more preferable from a security standpoint.
 Note that the effect of the concealment described above is not necessarily limited to being inside or outside a "hospital". Such concealment may be performed with respect to any target specific area (whatever facility or place it may be).
(3-2)
 The following functions relating to concealment may further be provided. FIG. 12 schematically shows a state in which concealment is being performed. On this screen, all information capable of identifying the patient is concealed. An icon 441 is also displayed on the screen.
 It is conceivable that a doctor or the like viewing the three-dimensional medical image may want to confirm the patient information temporarily, for example to confirm which patient the image belongs to.
 The medical image processing apparatus of this example therefore first determines that the icon 441 has been pressed (FIGS. 12 and 13) and then displays the patient information. The patient information may be data stored inside the apparatus, or may be data obtained by accessing an external server (in one example, a server within the hospital system).
 More specifically, it is also preferable that the conditions under which such patient information can be displayed be limited to certain conditions, for example, that fingerprint authentication of the operator has been performed on the apparatus. Naturally, the identity may instead be verified by another authentication method.
 It is also preferable that communication with the external server be configured so as to be possible only under secure communication conditions, such as, in one example, a VPN (Virtual Private Network).
 The information to be displayed may be, for example, one, two, or three or more of the patient's initials, date of birth, sex, age, address, operation date, doctor in charge, and the like. A patient ID, an examination ID, and the like may also be displayed.
 When the patient information is confirmed under circumstances such as those described above, it is, in one form, preferable that only the minimum necessary information about the patient and the like (for example, content that can identify the patient and also identify the operation date or the doctor) be displayed.
 The displayed patient information may be automatically concealed again after a certain period of time has elapsed. Alternatively, once displayed, it may continue to be displayed during operation (in one example, the display continues until the work is finished (logout)).
 As described above, by making it possible to confirm the minimum patient information and/or operation information as needed, the possibility of problems such as a mix-up between the displayed three-dimensional medical image and the patient actually undergoing the operation can be reduced.
 Regarding the above features, this specification discloses not only an apparatus invention but also method and program inventions corresponding to the above content.
(4) Stereo Images
 The medical image processing apparatus according to one aspect of the present invention may have a function of displaying stereo images, as described below.
 As shown in FIG. 11, an image including a first image 431L and a second image 431R may be displayed so that the operator can view it stereoscopically. The first image 431L and the second image 431R show the same subject with a predetermined parallax; they are sometimes called the left-eye image and the right-eye image.
 As shown in FIG. 11, an operation pad area 433 may be displayed within the screen containing the images 431L and 431R. The operation pad area 433 is a region for changing the display angle of the subject. When the operator touches this area 433 and moves a finger, the medical image processing apparatus changes the display angle of the subject (both images) simultaneously in response. In other words, the orientation of the subject and the like can be changed in step with the movement of the finger.
 The above operation assumes a touch panel, but the input method is not limited to this, and various other methods can be used. For example, one or more of the non-contact input methods disclosed in this specification can be used.
 When a single operation pad area 433 is displayed together with the first image 431L and the second image 431R as in FIG. 11, the operator can intuitively understand that operating it changes the orientation of the subject image and so on.
 The subject image is not particularly limited, but may be, for example, a contrast-enhanced image of blood vessels.
 As described above, with the configuration that displays the first image 431L and the second image 431R, the physician can grasp the three-dimensional structure of the subject more accurately through stereoscopic vision.
 The apparatus according to one aspect of the present invention described above has a function of displaying a first image and a second image with mutually different parallax. More specifically, it further has a function of displaying an operation pad area for manipulating their display angle. For changing the display of these stereoscopic images, one or a combination of, for example, gesture input, voice input, and motion input may be used. This specification also discloses method and program inventions corresponding to the above content.
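As a rough sketch of how the paired display could stay consistent, the rotation delta from the operation pad can be applied to one shared camera state from which both parallax views are derived. All names, the fixed parallax offset, and the rendering placeholder are assumptions for illustration, not the disclosed implementation.

```python
PARALLAX_DEG = 6.0  # assumed angular offset between the two viewpoints

def render_view(yaw_deg, pitch_deg):
    # Placeholder for the real renderer; returns the camera angles used.
    return {"yaw": yaw_deg, "pitch": pitch_deg}

class StereoViewer:
    def __init__(self):
        self.yaw = 0.0
        self.pitch = 0.0

    def on_pad_drag(self, dx_deg, dy_deg):
        """Finger movement on the operation pad (area 433) rotates the subject."""
        self.yaw += dx_deg
        self.pitch += dy_deg

    def render(self):
        # Left and right images: same subject, fixed parallax offset in yaw,
        # so one drag updates both views simultaneously.
        left = render_view(self.yaw - PARALLAX_DEG / 2, self.pitch)
        right = render_view(self.yaw + PARALLAX_DEG / 2, self.pitch)
        return left, right
```

Because both views derive from the single shared `yaw`/`pitch` state, they can never drift apart regardless of which input method drives `on_pad_drag`.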
(5) Flow of Image Creation
 As one method of creating a medical image, for example, the following may be employed. In this method, a fluoroscopic image of the patient is first stored on a predetermined data server (in one example a DICOM server, i.e. one that stores data received from a modality in a predetermined format).
 Then, on that predetermined data server (or another computer), the individual parts of the subject are automatically recognized and managed as separate, independent objects. As shown in the table illustrated in FIG. 14, they may be managed in categories such as arteries, veins, bones, and organs (or even finer subdivisions of these). For each object, the information used when it is selected via the voice recognition function (for example, the spoken input "aorta" for the aorta) may be set.
 Automatic identification keys (symbols, letters, numbers, or combinations thereof) may also be set so that only the necessary objects can conveniently be loaded selectively.
 An offset value may also be set for each individual object. In this example, the offset value (see "CombineOfs" in the table) is set to, for example, "+50" for the aorta, "+100" for the abdominal artery, "+400" for a vein, and so on.
 The offset value is used, in one example, as follows. Suppose the central CT value of an artery is 350 HU and that of a vein is 200 HU. The degree of contrast enhancement thus differs among blood vessels, between arteries and veins (and also the portal vein). Offset values are used to align them: in the above example, setting the vein offset to +150 HU virtually raises its contrast enhancement, so that arteries and veins can be handled with a single threshold setting. Naturally, offset values may also be set for other blood vessels such as the portal vein, not only for arteries and veins.
 The offset value may also be used as follows for parenchymal structures rather than the vascular system. Suppose, as an example, that the offset value of a parenchymal organ (for example, the liver) is set to a large value such as +700 HU. Giving a parenchymal organ such a large value has the advantage that the shape of the organ is easily preserved, as described below.
 That is, when display of the peripheral branches of the arteries is unnecessary, one might raise the threshold so that the periphery is no longer displayed; in that case, however, the display of parenchymal organs such as the liver would change as well, and their shape would collapse. To prevent this (that is, to keep the parenchymal organ in its original form while changing the display mode of the blood vessels), the parenchymal organ may be displayed with a large offset value such as +700 HU.
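The offset mechanism above can be illustrated with a minimal sketch in which each object's CT value is shifted by its "CombineOfs" entry before a single shared display threshold is applied. The table values follow the examples in the text; the function and dictionary are illustrative assumptions, not the disclosed implementation.

```python
# Per-object offset values in HU, following the examples in the text.
OFFSETS_HU = {
    "aorta": +50,
    "abdominal_artery": +100,
    "vein": +400,
    "liver": +700,   # large value keeps the parenchymal organ's shape
}

def visible_after_offset(raw_hu, obj, threshold_hu, offsets=OFFSETS_HU):
    """Return True if a voxel of `obj` passes the single shared threshold.

    The object's offset is added to its raw CT value first, so objects
    with different contrast enhancement can share one threshold setting.
    """
    return raw_hu + offsets.get(obj, 0) >= threshold_hu
```

With a large liver offset such as +700 HU, raising the shared threshold to suppress peripheral artery branches leaves the liver voxels comfortably above threshold, so the organ's shape is preserved.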
(Data Conversion)
 When blood vessels, bones, organs, and the like are managed as separate objects on a predetermined data server (for example, a DICOM server), in one example via a settings table, there is a further advantage: when composing a medical image on a tablet terminal or the like, there is no need to enter the information for each object by hand (text for voice recognition, offset values for image composition, and so on), so a medical image can be created easily.
 In this embodiment, it is basically necessary to prepare in advance a medical image dedicated to the medical image processing apparatus. Having a settings table such as the above is preferable in that it saves even a little of the labor of preparation. However, depending on the surgical procedure (what kind of surgery is performed, and so on), it may be preferable to prepare several tables. The reason is that in one procedure it may be preferable for blood vessel A (or organ A) to be clearly visible, while in another procedure it may be preferable for blood vessel B (or organ B) to be more clearly visible than blood vessel A (or organ A). In other words, in one form it is preferable that several tables be registered, each with offset values suited to a particular procedure.
 In this case, the system or apparatus may also be configured as follows: (i) a plurality of surgical procedures is displayed, (ii) the operator's selection of one of them is accepted, and (iii) the corresponding table is called up (and displayed as necessary).
 The surgical procedures may be displayed, for example, in accordance with the mode for selecting which part of the body is to be treated (part selection) in a user interface for setting injection conditions. For example, in response to selection of a predetermined part (head, chest, abdomen, etc.), the procedures corresponding to that part may be displayed (see (i) above).
(6) Combined Use with Physical Switches
 The embodiments described above use voice input and motion input (input corresponding to a person's movement). In one aspect of the present invention, a predetermined physical switch may additionally be used in combination with these inputs.
 Specifically, a switch that detects physical contact (for example, a foot switch) can be used. A "foot switch" is, in one example, a device placed on the floor for use; it has a switch housing in which a sensor and a circuit board are arranged, and a pressing part to be stepped on with the foot or the like. The pressing part may be, although not limited to this, a part configured to be movable so that it is pushed down when stepped on. The detection signal from the foot switch may be supplied externally via a cable (wired) or wirelessly.
 The foot switch may be electrically connected to the medical image processing apparatus of the present invention (in one example the control unit 350, see FIG. 2), but this is not limiting. The foot switch may instead be connected to other equipment (in one example, provided as part of the chemical injection device).
 FIG. 15 shows an example of the placement of a foot switch. In this example, the chemical injection device includes an injection head 475 placed near the imaging device 470, a first control unit (power supply unit) 478 connected to it, and a console (second control unit) 476 connected in turn. The foot switch 477d is, as one example, connected to the power supply unit.
(Example of Physical Switch Use)
 In an apparatus according to one aspect of the present invention, a plurality of reception conditions may be set for voice operation. A foot switch may serve as one of these reception conditions.
 First, the following settings (one or more of them) may be made as the reception conditions for voice operation:
1) Voice input is accepted only while motion input is present.
2) Voice input is accepted only while the foot switch is ON.
3) Voice input is accepted only while motion input is present and the foot switch is ON.
4) Voice input is always accepted.
 The input conditions above may be registered in the voice operation command table of the apparatus. Input condition 1), in other words, means that voice input is not accepted while there is no motion input.
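The four reception conditions listed above amount to a simple predicate over the current motion-input and foot-switch state. A minimal sketch, with assumed constant and function names:

```python
# Reception conditions 1)-4) for voice operation, as assumed constants.
MOTION_ONLY, FOOTSWITCH_ONLY, MOTION_AND_FOOTSWITCH, ALWAYS = 1, 2, 3, 4

def voice_input_accepted(condition, motion_active, footswitch_on):
    """Decide whether voice input is currently accepted."""
    if condition == MOTION_ONLY:
        return motion_active
    if condition == FOOTSWITCH_ONLY:
        return footswitch_on
    if condition == MOTION_AND_FOOTSWITCH:
        return motion_active and footswitch_on
    if condition == ALWAYS:
        return True
    raise ValueError(f"unknown reception condition: {condition}")
```

The active condition would be looked up from the command table registration described above before each recognition attempt.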
 For voice input, the following input scheme is also preferable: a command is accepted only when combined direction-and-angle information, such as "45° left, 30° up," is received. This is because, without combined information, the probability of misrecognition may increase. This is described in detail below.
 If voice input is ON and single words such as "up" or "forward" can be recognized on their own, a word in conversation during the procedure might in some cases be recognized and an unintended input made. A configuration may therefore be adopted in which a command is accepted only when a combination of multiple words is recognized.
 Such a voice input scheme for preventing misrecognition is not necessarily limited to direction-and-angle combinations as above. As a concrete example, consider moving the image forward. In this case, the command is accepted when two words are recognized: a word identifying the operation mode as movement, such as "direction" (one example), and a word giving the desired direction, such as "forward" (one example). From the standpoint of preventing misrecognition, an input such as "direction" + "forward" may be accepted while the reverse, "forward" + "direction," is not (that is, the order of the combination is fixed). Likewise, for a command that enlarges the image, rather than simply recognizing the word "enlarge," the command may be accepted only when a combination such as "3D" + "enlarge" is recognized. Such configurations prevent unintended voice input and improve usability.
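The order-sensitive, two-word acceptance described above could be sketched as a lookup over ordered word pairs. The table contents and command names below are illustrative assumptions (English words stand in for the spoken terms in the example):

```python
# Ordered word pairs mapped to commands; order matters, so the reversed
# pair is simply absent from the table and is rejected.
COMMAND_TABLE = {
    ("direction", "forward"): "move_forward",
    ("3d", "enlarge"): "zoom_in",
}

def accept_command(words, table=COMMAND_TABLE):
    """Return the command for an exact ordered word combination, else None.

    Single words and wrong-order pairs miss the table, which implements
    the rejection behavior described in the text.
    """
    key = tuple(w.lower() for w in words)
    return table.get(key)
```

Because lookup is by the exact ordered tuple, a stray single word from operating-room conversation can never match an entry.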
 As one aspect of voice input, in addition to or together with the above, voice input may be accepted only during a certain period of time (and not accepted outside that period).
 Regarding foot switch input, the foot switch may be one that is ON only while the pressing part is being stepped on. As processing for foot switch input, the subsequent processing may be performed only when the switch has been held down for at least a certain time (a so-called long press). This prevents unnecessary processing from being triggered by unintentionally stepping on the foot switch. Such input may be applied to processes where this is a concern. The above operation may also be performed when another (physical) switch, rather than the foot switch, is ON.
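The long-press filtering could be sketched as follows. Timestamps are passed in explicitly to keep the logic testable, and the one-second threshold is an assumed value; the disclosure does not fix a specific hold time.

```python
class FootSwitch:
    """Triggers its action only when held for at least `long_press_s`."""

    def __init__(self, long_press_s=1.0):
        self.long_press_s = long_press_s  # assumed hold-time threshold
        self._pressed_at = None

    def press(self, t):
        # Record when the pressing part was stepped on.
        self._pressed_at = t

    def release(self, t):
        """Return True (perform the subsequent processing) only for a long press."""
        if self._pressed_at is None:
            return False
        held = t - self._pressed_at
        self._pressed_at = None
        # An accidental short tap is ignored.
        return held >= self.long_press_s
```

An accidental tap releases before the threshold and returns False, so the subsequent processing never runs for it.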
 Voice input may also have the following function. A typical image processing apparatus according to one aspect of the present invention accepts voice input only in predetermined modes suited to it, with other input performed by other input methods. In such a case, when a predetermined input is made (in one example, the spoken command "voice all ON"), a function may be provided that enables voice input even for operations (or at least some of them) for which voice input is not the default. Such expansion of voice input may, as one example, end automatically after a certain time, returning to the original state (a timeout function). The timeout may be, without limitation, a preset time of about one, three, or five minutes.
 Although not limiting, input for specifying an angle may be configured to always respond only to voice. Also, a table of voice input words may be prepared, with settings per word specifying under what circumstances its input is permitted (for example, the word input "aaa" is accepted only while the foot switch is ON, while the word input "bbb" is always accepted).
(Home Position Setting Function)
 The image processing apparatus of the present invention may basically be configured to display, by default, the medical image as seen from one fixed direction regardless of the body part. In one example, whatever the part and/or procedure, the image seen from the front of the body is always displayed by default.
 However, depending on the type of body part and/or procedure, a configuration that displays by default the image seen from a preset angle suited to it is also preferable in one form. For example, in thoracoscopic surgery, the lateral decubitus position is standard. Therefore, for thoracoscopic surgery, the apparatus may be configured to take the lateral decubitus orientation as the home position and display it by default.
 That is, in one form of the present invention, not just the frontal view but separate display angles suited to each body part and/or procedure type may be preset and displayed by default.
 The choice of home position may be set manually or automatically according to at least one of the procedure, the body part, and the position of the tumor. As a concrete example, whether the lesion is in the left or right lung determines whether the left or right lateral decubitus position should be used. A configuration that automatically recognizes the position of the lesion (tumor) and automatically makes the decision accordingly is also useful.
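The automatic choice just described might be sketched as a small mapping. The side-to-position mapping and the fallback view are illustrative assumptions (the text specifies only that the lesion side determines the position, not the mapping itself):

```python
def default_view(procedure, lesion_side=None):
    """Pick a home-position view from procedure type and lesion side.

    Assumed mapping: a left-lung lesion implies the right lateral
    decubitus position (and vice versa); other procedures fall back
    to a frontal view. Not a clinical rule set.
    """
    if procedure == "thoracoscopic":
        if lesion_side == "left":
            return "right_lateral_decubitus"
        if lesion_side == "right":
            return "left_lateral_decubitus"
    return "frontal"
```

In a real system, `lesion_side` would come from the automatic lesion-recognition step mentioned above rather than being passed in by hand.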
(Example of Screen Display)
 The screen displayed while voice input is ON may be, for example, as shown in FIG. 16. This screen adds several elements to the display example shown in FIG. 6; those skilled in the art will readily appreciate that the images of FIG. 6 (see reference numerals 391 and 393) and the like may be omitted.
 As shown in FIG. 16, a "voice recognition (voice input) ON" indicator 397b may appear on the screen so that the operator can tell that voice recognition (voice input) is available. Similarly, for motion input there may be a "motion input ON" indicator 397a. It is also preferable to have a display section 398 that shows the recognized speech as text, making it possible to visually confirm what voice input was made.
 There may also be a display section 399 indicating whether the recognized speech was accepted as a command. When it was not accepted, an indication such as "REJECTED" may appear, allowing the operator to visually confirm the rejection.
 Embodiments of the present invention have been described above with reference to the drawings; however, the present invention can be modified in various ways without departing from its spirit and is not limited to the above examples.
 A number of technical features have been described above. The technical features disclosed above can be used in appropriate combinations, except where they are mutually contradictory, and this specification also discloses such combined content. Also, where several technical features have been described as a single embodiment for convenience of explanation, one or more of those features may be omitted. It will be understood by those skilled in the art that the technical content described here can be grasped as any of a product invention, a method invention, or a computer program invention, even where no such distinction was made.
(Appendix)
 This specification discloses the following inventions (the reference numerals in parentheses do not limit the present invention in any way):
1. A medical image processing apparatus (301) comprising:
 a display (361);
 a control unit (350) connected to the display;
 a voice input device (370); and
 a motion sensor (380),
 wherein the control unit (350) includes:
 a: an image display unit that displays a three-dimensional medical image on the display;
 b: a mode selection unit that recognizes voice input via the voice input device (370) and, in response, switches a mode related to display of the three-dimensional medical image; and
 c: a display processing unit that recognizes the operator's motion input via the motion sensor and, in response, changes the display of the three-dimensional medical image.
 Alternatively, an apparatus or system having at least a control unit with the above features.
2. The medical image processing apparatus described above, having, as the modes related to display of the three-dimensional medical image, at least one of:
 a zoom mode for enlarging and reducing the image;
 a pan mode for translating the image; and
 a rotation mode for rotating the image.
3. The medical image processing apparatus described above, wherein in the zoom mode, moving the operator's hand in a first direction enlarges the three-dimensional medical image, and moving it in the opposite, second direction reduces the three-dimensional medical image.
4. The medical image processing apparatus described above, wherein in the zoom mode, enlargement or reduction is performed without rotating the three-dimensional medical image.
5. The medical image processing apparatus described above, wherein in the rotation mode, the three-dimensional medical image rotates in correspondence with the movement of the operator's hand.
6. The medical image processing apparatus described above, wherein in the rotation mode, rotation is performed without enlarging or reducing the three-dimensional medical image.
7. The portable medical image processing apparatus described above, comprising a housing (301a), wherein at least the display and the control unit are integrally incorporated in the housing.
8. The medical image processing apparatus described above, wherein the motion sensor includes one or more cameras.
9. The medical image processing apparatus described above, wherein the three-dimensional medical image includes at least image data of a liver and blood vessels.
10. The medical image processing apparatus described above, wherein the voice input device is a microphone.
11. A medical image processing program that causes a computer to perform:
 a: processing for displaying a three-dimensional medical image on a display;
 b: processing for recognizing voice input via a voice input device (170) and, in response, switching a mode related to display of the three-dimensional medical image; and
 c: processing for recognizing an operator's motion input via a motion sensor and, in response, changing the display of the three-dimensional medical image.
12. The program described above, having, as the modes related to display of the three-dimensional medical image, at least one of:
 a zoom mode for enlarging and reducing the image;
 a pan mode for translating the image; and
 a rotation mode for rotating the image.
13. The program described above, which causes the computer to perform processing such that, in the zoom mode, moving the operator's hand in a first direction enlarges the three-dimensional medical image and moving it in the opposite, second direction reduces the three-dimensional medical image.
14. The program described above, which causes the computer to perform processing such that, in the zoom mode, enlargement or reduction is performed without rotating the three-dimensional medical image.
15. The program described above, which causes the computer to perform processing such that, in the rotation mode, the three-dimensional medical image rotates in correspondence with the movement of the operator's hand.
16. The program described above, which causes the computer to perform processing such that, in the rotation mode, rotation is performed without enlarging or reducing the three-dimensional medical image.
17. A method of operating a medical image processing apparatus, comprising:
 a step in which a computer displays a three-dimensional medical image on a display;
 a step in which the computer recognizes voice input via a voice input device (170) and, in response, switches a mode related to display of the three-dimensional medical image; and
 a step in which the computer recognizes an operator's motion input via a motion sensor and, in response, changes the display of the three-dimensional medical image.
18. The method described above, having, as the modes related to display of the three-dimensional medical image, at least one of:
 a zoom mode for enlarging and reducing the image;
 a pan mode for translating the image; and
 a rotation mode for rotating the image.
19. The operating method described above, wherein in the zoom mode, moving the operator's hand in a first direction enlarges the three-dimensional medical image, and moving it in the opposite, second direction reduces the three-dimensional medical image.
20. The operating method described above, wherein in the zoom mode, enlargement or reduction is performed without rotating the three-dimensional medical image.
21. The operating method described above, wherein in the rotation mode, the three-dimensional medical image rotates in correspondence with the movement of the operator's hand.
1 撮像装置
10 薬液注入装置
71、371 肝臓
73、75、375 血管
301 医用画像処理装置
301a 筐体
350 制御部
351 音声認識部
353 ジェスチャ認識部
355a 画像表示部
355b 操作判定部
355c 表示処理部
359 記憶部
360 タッチパネル式ディスプレイ
361 ディスプレイ
363 タッチパネル
365 入力デバイス
367 通信部
368 インターフェース
369 スロット
370 マイク
380 モーションセンサ
DESCRIPTION OF REFERENCE NUMERALS: 1 imaging apparatus; 10 drug solution injection apparatus; 71, 371 liver; 73, 75, 375 blood vessels; 301 medical image processing apparatus; 301a housing; 350 control unit; 351 voice recognition unit; 353 gesture recognition unit; 355a image display unit; 355b operation determination unit; 355c display processing unit; 359 storage unit; 360 touch-panel display; 361 display; 363 touch panel; 365 input device; 367 communication unit; 368 interface; 369 slot; 370 microphone; 380 motion sensor

Claims (12)

  1.  ディスプレイと、
     前記ディスプレイに接続された制御部と、
     音声入力デバイスと、
     モーションセンサと、
     を備える医用画像処理装置であって、
     前記制御部は、
    a:前記ディスプレイに3次元医用画像を表示させる画像表示部と、
    b:前記音声入力デバイスを用いて入力された音声を認識し、それに応じて、前記3次元医用画像の表示に関するモードを切り替えるモード選択部と、
    c:前記モーションセンサを介して操作者のモーション入力を認識し、それに応じて、前記3次元医用画像の表示を変更する表示処理部と、
     を有する、
     医用画像処理装置。
    A medical image processing apparatus comprising:
    a display;
    a control unit connected to the display;
    a voice input device; and
    a motion sensor,
    wherein the control unit has:
    a: an image display unit that displays a three-dimensional medical image on the display;
    b: a mode selection unit that recognizes voice input through the voice input device and, in response, switches a mode relating to display of the three-dimensional medical image; and
    c: a display processing unit that recognizes an operator's motion input via the motion sensor and, in response, changes the display of the three-dimensional medical image.
  2.  3次元医用画像の表示に関する前記モードとして、
      画像を拡大および縮小させるズームモード、
     画像を平行移動させるパンモード、
     画像を回転させる回転モード、
     のうち少なくとも1つを有する、請求項1に記載の医用画像処理装置。
    The medical image processing apparatus according to claim 1, having, as the modes relating to display of the three-dimensional medical image, at least one of:
    a zoom mode for enlarging and reducing the image,
    a pan mode for translating the image, and
    a rotation mode for rotating the image.
  3.  前記ズームモードでは、操作者の手を第1の方向に移動させると3次元医用画像が拡大し、それとは反対の第2の方向に移動させると3次元医用画像が縮小する、
     請求項2に記載の医用画像処理装置。
    The medical image processing apparatus according to claim 2, wherein, in the zoom mode, moving the operator's hand in a first direction enlarges the three-dimensional medical image and moving it in an opposite second direction reduces the image.
  4.  前記ズームモードでは、3次元医用画像を回転させずに、拡大または縮小が行われる、請求項2または3に記載の医用画像処理装置。 The medical image processing apparatus according to claim 2 or 3, wherein, in the zoom mode, enlargement or reduction is performed without rotating the three-dimensional medical image.
  5.  前記回転モードでは、操作者の手の動きに対応して、3次元医用画像が回転する、
     請求項2に記載の医用画像処理装置。
    The medical image processing apparatus according to claim 2, wherein, in the rotation mode, the three-dimensional medical image rotates in response to the movement of the operator's hand.
  6.  前記回転モードでは、3次元医用画像を拡大または縮小させずに、回転が行われる、請求項5に記載の医用画像処理装置。 The medical image processing apparatus according to claim 5, wherein, in the rotation mode, rotation is performed without enlarging or reducing the three-dimensional medical image.
  7.  筐体を備え、
     少なくとも前記ディスプレイと前記制御部とが一体的に前記筐体に組み込まれた、可搬型の請求項1~6のいずれか一項に記載の医用画像処理装置。
    The portable medical image processing apparatus according to any one of claims 1 to 6, comprising a housing, wherein at least the display and the control unit are integrally incorporated in the housing.
  8.  前記モーションセンサは、1つもしくは複数のカメラを有する、請求項1~7のいずれか一項に記載の医用画像処理装置。 The medical image processing apparatus according to any one of claims 1 to 7, wherein the motion sensor includes one or more cameras.
  9.  前記3次元医用画像が、少なくとも肝臓および血管の画像データを含むものである、請求項1~8のいずれか一項に記載の医用画像処理装置。 The medical image processing apparatus according to any one of claims 1 to 8, wherein the three-dimensional medical image includes at least liver and blood vessel image data.
  10.  前記音声入力デバイスがマイクである、請求項1~9のいずれか一項に記載の医用画像処理装置。 The medical image processing apparatus according to any one of claims 1 to 9, wherein the voice input device is a microphone.
  11.  コンピュータに、
    a:ディスプレイに3次元医用画像を表示させる処理と、
    b:音声認識デバイスを用いて入力された音声を認識し、それに応じて、前記3次元医用画像の表示に関するモードを切り替える処理と、
    c:モーションセンサを介して操作者のモーション入力を認識し、それに応じて、前記3次元医用画像の表示を変更する処理と、
     を行わせる、医用画像処理プログラム。
    A medical image processing program that causes a computer to perform:
    a: processing for displaying a three-dimensional medical image on a display;
    b: processing for recognizing voice input through a voice recognition device and, in response, switching a mode relating to display of the three-dimensional medical image; and
    c: processing for recognizing an operator's motion input via a motion sensor and, in response, changing the display of the three-dimensional medical image.
  12.  コンピュータが、ディスプレイに3次元医用画像を表示させるステップと、
     コンピュータが、音声認識デバイスを用いて入力された音声を認識し、それに応じて、前記3次元医用画像の表示に関するモードを切り替えるステップと、
     コンピュータが、モーションセンサを介して操作者のモーション入力を認識し、それに応じて、前記3次元医用画像の表示を変更するステップと、
     を含む、医用画像処理装置の作動方法。
    A method for operating a medical image processing apparatus, comprising the steps of:
    a computer displaying a three-dimensional medical image on a display;
    the computer recognizing voice input through a voice recognition device and, in response, switching a mode relating to display of the three-dimensional medical image; and
    the computer recognizing an operator's motion input via a motion sensor and, in response, changing the display of the three-dimensional medical image.
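Claims 2 to 6 above treat zoom and rotation as independent operations: a zoom-mode gesture must not rotate the image, and a rotate-mode gesture must not rescale it. One way to guarantee that independence is to keep scale and orientation as separate factors of the view transform, as in the hypothetical sketch below (the class and method names are illustrative assumptions, not an implementation from this publication).

```python
import math

class View:
    """Scale and orientation kept as separate factors of the view transform."""

    def __init__(self) -> None:
        self.scale = 1.0   # zoom factor
        self.angle = 0.0   # rotation in degrees

    def zoom_by(self, factor: float) -> None:
        # Zoom-mode gesture: rescale only; orientation is untouched (claim 4).
        self.scale *= factor

    def rotate_by(self, deg: float) -> None:
        # Rotate-mode gesture: reorient only; scale is untouched (claim 6).
        self.angle = (self.angle + deg) % 360.0

    def apply(self, x: float, y: float) -> tuple:
        # Map an image point through rotation, then uniform scaling.
        a = math.radians(self.angle)
        rx = x * math.cos(a) - y * math.sin(a)
        ry = x * math.sin(a) + y * math.cos(a)
        return (self.scale * rx, self.scale * ry)
```

Because each gesture handler touches exactly one factor, the independence required by claims 4 and 6 holds by construction rather than by careful bookkeeping in a single combined matrix.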
PCT/JP2016/074966 2015-08-26 2016-08-26 Medical image processing device and medical image processing program WO2017034020A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017536488A JPWO2017034020A1 (en) 2015-08-26 2016-08-26 Medical image processing apparatus and medical image processing program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015166871 2015-08-26
JP2015-166871 2015-08-26

Publications (1)

Publication Number Publication Date
WO2017034020A1 true WO2017034020A1 (en) 2017-03-02

Family

ID=58100556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/074966 WO2017034020A1 (en) 2015-08-26 2016-08-26 Medical image processing device and medical image processing program

Country Status (2)

Country Link
JP (3) JPWO2017034020A1 (en)
WO (1) WO2017034020A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009116581A (en) * 2007-11-06 2009-05-28 Ziosoft Inc Medical image processor and medical image processing program
WO2011085815A1 (en) * 2010-01-14 2011-07-21 Brainlab Ag Controlling a surgical navigation system
JP2014523772A (en) * 2011-06-22 2014-09-18 コーニンクレッカ フィリップス エヌ ヴェ System and method for processing medical images

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3625933B2 (en) * 1995-12-14 2005-03-02 ジーイー横河メディカルシステム株式会社 Medical image display device
JPH10301567A (en) * 1997-04-22 1998-11-13 Kawai Musical Instr Mfg Co Ltd Voice controller of electronic musical instrument
JP2003280681A (en) * 2002-03-25 2003-10-02 Konica Corp Apparatus and method for medical image processing, program, and recording medium
JP2006051170A (en) * 2004-08-11 2006-02-23 Toshiba Corp Image diagnostic apparatus, head ischaemia region analysis system, head ischaemia region analysis program, and head ischaemia region analysis method
EP1762976B1 (en) * 2005-09-08 2011-05-18 Aloka Co., Ltd. Computerized tomography device and image processing method for identifying brown adipose tissue
JP2008259710A (en) * 2007-04-12 2008-10-30 Fujifilm Corp Image processing method, system and program
JP2009061028A (en) * 2007-09-05 2009-03-26 Nemoto Kyorindo:Kk Image processing apparatus and medical workstation equipped with the same
KR20100096224A (en) * 2007-12-03 2010-09-01 데이타피직스 리서치 인코포레이티드 Systems and methods for efficient imaging
JP2011118684A (en) * 2009-12-03 2011-06-16 Toshiba Tec Corp Cooking assisting terminal and program
JP5747007B2 (en) * 2012-09-12 2015-07-08 富士フイルム株式会社 MEDICAL IMAGE DISPLAY DEVICE, MEDICAL IMAGE DISPLAY METHOD, AND MEDICAL IMAGE DISPLAY PROGRAM
JP5989498B2 (en) * 2012-10-15 2016-09-07 東芝メディカルシステムズ株式会社 Image processing apparatus and program
US20150212676A1 (en) * 2014-01-27 2015-07-30 Amit Khare Multi-Touch Gesture Sensing and Speech Activated Radiological Device and methods of use

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009116581A (en) * 2007-11-06 2009-05-28 Ziosoft Inc Medical image processor and medical image processing program
WO2011085815A1 (en) * 2010-01-14 2011-07-21 Brainlab Ag Controlling a surgical navigation system
JP2014523772A (en) * 2011-06-22 2014-09-18 コーニンクレッカ フィリップス エヌ ヴェ System and method for processing medical images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RYOMA FUJII ET AL.: "Development of hands-free 3D medical image visualization system using Kinect", IEICE TECHNICAL REPORT, vol. 115, no. 139, 7 July 2015 (2015-07-07), pages 33 - 38, ISSN: 0913-5685 *

Also Published As

Publication number Publication date
JP2021121337A (en) 2021-08-26
JP2023071677A (en) 2023-05-23
JP7229569B2 (en) 2023-02-28
JPWO2017034020A1 (en) 2018-08-02

Similar Documents

Publication Publication Date Title
US11096668B2 (en) Method and ultrasound apparatus for displaying an object
US10849597B2 (en) Method of providing copy image and ultrasound apparatus therefor
US10403402B2 (en) Methods and systems for accessing and manipulating images comprising medically relevant information with 3D gestures
US10617391B2 (en) Ultrasound apparatus and information providing method of the ultrasound apparatus
KR102244258B1 (en) Display apparatus and image display method using the same
JP2018180840A (en) Head-mount display control device, operation method and operation program thereof, and image display system
US20140372136A1 (en) Method and apparatus for providing medical information
JP6501525B2 (en) INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
JP6462358B2 (en) Medical image display terminal and medical image display program
JP7229569B2 (en) Medical image processing device and medical image processing program
EP3342347B1 (en) Method and apparatus for displaying medical image
JP6902012B2 (en) Medical image display terminal and medical image display program
JP7107590B2 (en) Medical image display terminal and medical image display program
JP7391348B2 (en) Medical image processing system, medical image processing method, and medical image processing program
JP6968950B2 (en) Information processing equipment, information processing methods and programs
JP2022145671A (en) Viewer, control method of the viewer and control program of the viewer
JP2021171443A (en) Medical image processing device, control method of medical image processing device and medical image processing program
JP2022009606A (en) Information processing device, information processing method, and program
JP2021133170A (en) Medical image processing apparatus, medical image processing method and medical image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16839367

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017536488

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16839367

Country of ref document: EP

Kind code of ref document: A1