WO2018043205A1 - Medical image processing device, medical image processing method, and program - Google Patents

Medical image processing device, medical image processing method, and program

Info

Publication number
WO2018043205A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
camera
main
unit
Prior art date
Application number
PCT/JP2017/029919
Other languages
French (fr)
Japanese (ja)
Inventor
Yasuaki Takahashi
Kenji Ikeda
Tomoyuki Hirayama
Kenta Yamaguchi
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2018043205A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, combined with photographic or television appliances
    • A61B1/045: Control thereof

Definitions

  • The present technology relates to a medical image processing apparatus, a medical image processing method, and a program.
  • Patent Document 1 discloses the positional relationship between an endoscope, an instrument, and a patient during endoscopic surgery. That is, an image of the inside of the body cavity is obtained by inserting a tube called a sheath SH into a through-hole created in the body wall in advance to create a path, and inserting a camera (generally called an endoscope) through that path.
  • This type of endoscopic surgery improves QoL (quality of life): because the wound on the body is small, the patient's recovery period is short, and the burden on the patient is reduced by a shorter hospitalization period.
  • In such surgery, the treatment tool is inserted through a small hole, the endoscope is inserted through a hole provided at another location, and the treatment is performed while referring to the image of the inside of the body cavity, so specialized surgical technique is required.
  • The present technology has been made in view of such a situation, and makes it possible to present multiple pieces of information in a form that is easy for the user to view.
  • A first medical image processing apparatus of the present technology treats one of the images captured by a plurality of imaging units as a main image and another image as a sub image, and includes a conversion unit that converts the sub image into an image for superimposition, and a superimposition unit that superimposes the converted image at a predetermined position in the main image.
  • A second medical image processing apparatus of the present technology treats one of the images captured by a plurality of imaging units as a main image and another image as a sub image, and includes a superimposition unit that superimposes the sub image on the main image; the imaging unit that captures the sub image is shown in the main image, and the sub image is displayed in the vicinity of that imaging unit.
  • A first medical image processing method of the present technology includes the steps of treating one of the images captured by a plurality of imaging units as a main image and another image as a sub image, converting the sub image into an image for superimposition, and superimposing the converted image at a predetermined position in the main image.
  • A second medical image processing method of the present technology includes the steps of treating one of the images captured by a plurality of imaging units as a main image and another image as a sub image, superimposing the sub image on the main image, showing the imaging unit that captures the sub image in the main image, and displaying the sub image in the vicinity of that imaging unit.
  • A first program of the present technology causes a computer to execute processing including the steps of treating one of the images captured by a plurality of imaging units as a main image and another image as a sub image, converting the sub image into an image for superimposition, and superimposing the converted image at a predetermined position in the main image.
  • A second program of the present technology causes a computer to execute processing including the steps of treating one of the images captured by a plurality of imaging units as a main image and another image as a sub image, superimposing the sub image on the main image, showing the imaging unit that captures the sub image in the main image, and displaying the sub image in the vicinity of that imaging unit.
  • In the first medical image processing apparatus, medical image processing method, and program of the present technology, one of the images captured by the plurality of imaging units is set as the main image and another image is set as the sub image; the sub image is converted into an image for superimposition, and the converted image is superimposed at a predetermined position in the main image.
  • In the second medical image processing apparatus, medical image processing method, and program of the present technology, one of the images captured by the plurality of imaging units is set as the main image and another image is set as the sub image; the sub image is superimposed on the main image, the imaging unit that captures the sub image is shown in the main image, and the sub image is displayed in the vicinity of that imaging unit.
  • the medical image processing apparatus may be an independent apparatus or an internal block constituting one apparatus.
  • the program can be provided by being transmitted through a transmission medium or by being recorded on a recording medium.
  • a plurality of pieces of information can be provided in a state that is easy for the user to visually recognize.
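As a rough illustration of the pipeline summarized above, here is a minimal Python/OpenCV sketch of a conversion step followed by a superimposition step. The function names, the fixed inset size, and the cosine-based squash used to suggest a tilted mirror are hypothetical and not taken from the publication.

```python
import cv2
import numpy as np

def convert_sub_image(sub_img, size=(160, 120), mirror_angle_deg=45.0):
    """Conversion-unit sketch: shrink the sub image and squash it
    horizontally to suggest a mirror tilted by mirror_angle_deg.
    The squash model is an assumption, not the patent's method."""
    small = cv2.resize(sub_img, size)
    squash = max(0.1, float(np.cos(np.radians(mirror_angle_deg))))
    return cv2.resize(small, (max(1, int(size[0] * squash)), size[1]))

def superimpose(main_img, converted, top_left):
    """Superimposition-unit sketch: paste the converted image at a
    predetermined position (assumes the inset fits inside the frame)."""
    out = main_img.copy()
    y, x = top_left
    h, w = converted.shape[:2]
    out[y:y + h, x:x + w] = converted
    return out
```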
  • the technology according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be applied to an endoscopic surgery system.
  • an endoscopic operation system will be described as an example, but the present technology can also be applied to a surgical operation system, a microscopic operation system, and the like.
  • FIG. 1 is a diagram illustrating an example of a schematic configuration of an endoscopic surgery system 10 to which the technology according to the present disclosure can be applied.
  • As illustrated in FIG. 1, an endoscopic surgery system 10 includes an endoscope 20, other surgical tools 30, a support arm device 40 that supports the endoscope 20, and a cart 50 on which various devices for endoscopic surgery are mounted.
  • In endoscopic surgery, a plurality of tubular opening instruments called trocars 37a to 37d are punctured into the abdominal wall, and the lens barrel 21 of the endoscope 20 and the other surgical tools 30 are inserted into the body cavity of the patient 75 through the trocars 37a to 37d.
  • an insufflation tube 31, an energy treatment tool 33, and forceps 35 are inserted into the body cavity of a patient 75 as other surgical tools 30.
  • the energy treatment tool 33 is a treatment tool that performs tissue incision and peeling, blood vessel sealing, or the like by high-frequency current or ultrasonic vibration.
  • the illustrated surgical tool 30 is merely an example, and as the surgical tool 30, various surgical tools generally used in endoscopic surgery, such as a lever and a retractor, may be used.
  • the image of the surgical site in the body cavity of the patient 75 photographed by the endoscope 20 is displayed on the display device 53.
  • the surgeon 71 performs a treatment such as excision of the affected area using the energy treatment tool 33 and the forceps 35 while viewing the image of the surgical site displayed on the display device 53 in real time.
  • The insufflation tube 31, the energy treatment tool 33, and the forceps 35 are supported by the surgeon 71 or an assistant during the operation.
  • the support arm device 40 includes an arm portion 43 extending from the base portion 41.
  • The arm portion 43 includes joint portions 45a, 45b, 45c and links 47a, 47b, and is driven under control from the arm control device 57.
  • The endoscope 20 is supported by the arm portion 43, and its position and posture are controlled. Thereby, stable fixation of the position of the endoscope 20 can be realized.
  • the endoscope 20 includes a lens barrel 21 in which a region having a predetermined length from the distal end is inserted into the body cavity of the patient 75, and a camera head 23 connected to the proximal end of the lens barrel 21.
  • In the illustrated example, the endoscope 20 is configured as a so-called rigid endoscope having a rigid lens barrel 21, but the endoscope 20 may instead be configured as a so-called flexible endoscope having a flexible lens barrel 21.
  • An opening into which an objective lens is fitted is provided at the tip of the lens barrel 21.
  • A light source device 55 is connected to the endoscope 20, and light generated by the light source device 55 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 21, and is irradiated through the objective lens toward the observation target in the body cavity of the patient 75.
  • The endoscope 20 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
  • An optical system and an image sensor are provided inside the camera head 23, and reflected light (observation light) from the observation target is condensed on the image sensor by the optical system. Observation light is photoelectrically converted by the imaging element, and an electrical signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated. The image signal is transmitted as RAW data to a camera control unit (CCU) 51.
  • the camera head 23 has a function of adjusting the magnification and the focal length by appropriately driving the optical system.
  • the camera head 23 may be provided with a plurality of imaging elements.
  • a plurality of relay optical systems are provided inside the lens barrel 21 in order to guide observation light to each of the plurality of imaging elements.
  • the CCU 51 is configured by a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 20 and the display device 53. Specifically, the CCU 51 performs various types of image processing for displaying an image based on the image signal, such as development processing (demosaic processing), for example, on the image signal received from the camera head 23. The CCU 51 provides the display device 53 with the image signal subjected to the image processing. Further, the CCU 51 transmits a control signal to the camera head 23 to control its driving.
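For the development (demosaic) processing mentioned above, a minimal OpenCV sketch follows; the Bayer layout and the file name are assumptions made only for illustration.

```python
import cv2

# RAW Bayer frame as received from the camera head (the BGGR layout
# and the file name are assumptions for this sketch).
raw = cv2.imread("camera_head_frame.png", cv2.IMREAD_GRAYSCALE)

# Development processing (demosaicing) into a displayable color image.
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
```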
  • the control signal can include information regarding imaging conditions such as magnification and focal length.
  • the display device 53 displays an image based on an image signal subjected to image processing by the CCU 51 under the control of the CCU 51.
  • When the endoscope 20 supports high-resolution imaging such as 4K (3840 horizontal pixels × 2160 vertical pixels) or 8K (7680 horizontal pixels × 4320 vertical pixels), and/or 3D display, a display device 53 capable of the corresponding high-resolution display and/or 3D display is used.
  • a more immersive feeling can be obtained by using a display device 53 having a size of 55 inches or more.
  • a plurality of display devices 53 having different resolutions and sizes may be provided depending on the application.
  • the light source device 55 is composed of a light source such as an LED (light emitting diode), and supplies irradiation light to the endoscope 20 when photographing a surgical site.
  • the arm control device 57 is configured by a processor such as a CPU, for example, and operates according to a predetermined program to control driving of the arm portion 43 of the support arm device 40 according to a predetermined control method.
  • the input device 59 is an input interface for the endoscopic surgery system 10.
  • the user can input various information and instructions to the endoscopic surgery system 10 via the input device 59.
  • the user inputs various information related to the operation, such as the patient's physical information and information about the surgical technique, via the input device 59.
  • For example, the user inputs, via the input device 59, an instruction to drive the arm portion 43, an instruction to change the imaging conditions of the endoscope 20 (type of irradiation light, magnification, focal length, etc.), and an instruction to drive the energy treatment tool 33.
  • the type of the input device 59 is not limited, and the input device 59 may be various known input devices.
  • As the input device 59, for example, a mouse, a keyboard, a touch panel, a switch, a foot switch 69, and/or a lever can be used.
  • the touch panel may be provided on the display surface of the display device 53.
  • Alternatively, the input device 59 may be a device worn by the user, such as a glasses-type wearable device or an HMD (Head-Mounted Display), and various inputs are performed according to the user's gestures and line of sight detected by these devices.
  • the input device 59 includes a camera capable of detecting the user's movement, and various inputs are performed according to the user's gesture and line of sight detected from the video captured by the camera.
  • the input device 59 includes a microphone capable of collecting a user's voice, and various inputs are performed by voice through the microphone.
  • Since the input device 59 is configured to accept various kinds of information without contact, a user belonging to the clean area (for example, the surgeon 71) can operate a device belonging to the unclean area without contact.
  • In addition, since the user can operate the device without releasing his or her hand from the surgical tool being held, convenience for the user is improved.
  • the treatment instrument control device 61 controls the driving of the energy treatment instrument 33 for tissue cauterization, incision, or blood vessel sealing.
  • The insufflation device 63 introduces gas into the body cavity via the insufflation tube 31.
  • the recorder 65 is a device that can record various types of information related to surgery.
  • the printer 67 is a device that can print various types of information related to surgery in various formats such as text, images, or graphs.
  • the support arm device 40 includes a base portion 41 that is a base and an arm portion 43 that extends from the base portion 41.
  • The arm portion 43 is composed of a plurality of joint portions 45a, 45b, 45c and a plurality of links 47a, 47b connected by the joint portion 45b.
  • In FIG. 1, the structure of the arm portion 43 is shown in a simplified manner.
  • the shape, number and arrangement of the joint portions 45a to 45c and the links 47a and 47b, the direction of the rotation axis of the joint portions 45a to 45c, and the like are appropriately set so that the arm portion 43 has a desired degree of freedom.
  • the arm portion 43 can be preferably configured to have 6 degrees of freedom or more.
  • The endoscope 20 can be moved freely within the movable range of the arm portion 43, so that the lens barrel 21 of the endoscope 20 can be inserted into the body cavity of the patient 75 from a desired direction.
  • the joints 45a to 45c are provided with actuators, and the joints 45a to 45c are configured to be rotatable around a predetermined rotation axis by driving the actuators.
  • the rotation angle of each joint portion 45a to 45c is controlled, and the driving of the arm portion 43 is controlled.
  • the arm control device 57 can control the driving of the arm unit 43 by various known control methods such as force control or position control.
  • For example, when an operation is input by the user, the arm control device 57 appropriately controls the driving of the arm portion 43 in accordance with the operation input, and the position and posture of the endoscope 20 can thereby be controlled.
  • the endoscope 20 at the distal end of the arm portion 43 can be moved from an arbitrary position to an arbitrary position and then fixedly supported at the position after the movement.
  • The arm portion 43 may be operated by a so-called master-slave system.
  • the arm unit 43 can be remotely operated by the user via the input device 59 installed at a location away from the operating room.
  • When force control is applied, the arm control device 57 may perform so-called power assist control, in which it receives an external force from the user and drives the actuators of the joint portions 45a to 45c so that the arm portion 43 moves smoothly in accordance with that force. Thereby, when the user moves the arm portion 43 while directly touching it, the arm portion 43 can be moved with a relatively light force. Accordingly, the endoscope 20 can be moved more intuitively with a simpler operation, and convenience for the user can be improved.
  • the endoscope 20 is supported by a doctor called a scopist.
  • In contrast, by using the support arm device 40, the position of the endoscope 20 can be fixed more reliably without relying on human hands, so that an image of the surgical site can be obtained stably and the surgery can be performed smoothly.
  • The arm control device 57 does not necessarily have to be provided in the cart 50. Further, the arm control device 57 is not necessarily a single device; for example, an arm control device 57 may be provided in each of the joint portions 45a to 45c of the arm portion 43 of the support arm device 40, and the plurality of arm control devices 57 may cooperate with each other to realize drive control of the arm portion 43.
  • the light source device 55 supplies irradiation light to the endoscope 20 when photographing a surgical site.
  • the light source device 55 is composed of a white light source composed of, for example, an LED, a laser light source, or a combination thereof.
  • When a white light source is configured by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so that the white balance of the captured image can be adjusted in the light source device 55.
  • the driving of the light source device 55 may be controlled so as to change the intensity of the output light every predetermined time.
  • The driving of the image sensor of the camera head 23 is controlled to acquire images in a time-division manner in accordance with the change in light intensity, and the images are combined, so that a high-dynamic-range image free of so-called blocked-up shadows and blown-out highlights can be generated.
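One way to sketch this synthesis, assuming three frames captured under low, medium, and high illumination, is Mertens exposure fusion; this is a stand-in for whatever combination method the publication intends, and the file names are hypothetical.

```python
import cv2

# Frames acquired in a time-division manner while the light source
# intensity was switched (file names are hypothetical).
frames = [cv2.imread(f) for f in ("low.png", "mid.png", "high.png")]

# Exposure fusion needs no exposure metadata and suppresses both
# blocked-up shadows and blown-out highlights.
fused = cv2.createMergeMertens().process(frames)   # float32, roughly [0, 1]
hdr_like = (fused * 255.0).clip(0, 255).astype("uint8")
```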
  • the light source device 55 may be configured to be able to supply light of a predetermined wavelength band corresponding to special light observation.
  • In special light observation, for example, so-called narrow band imaging is performed: by utilizing the wavelength dependence of light absorption in body tissue, light in a band narrower than the irradiation light used during normal observation (that is, white light) is irradiated, and a predetermined tissue such as a blood vessel in the surface layer of the mucous membrane is thereby imaged with high contrast.
  • fluorescence observation may be performed in which an image is obtained by fluorescence generated by irradiating excitation light.
  • In fluorescence observation, the body tissue may be irradiated with excitation light and the fluorescence from the body tissue observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) may be locally administered to the body tissue and the body tissue irradiated with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescent image.
  • the light source device 55 can be configured to be able to supply narrowband light and / or excitation light corresponding to such special light observation.
  • FIG. 2 is a block diagram illustrating an example of functional configurations of the camera head 23 and the CCU 51 illustrated in FIG.
  • the camera head 23 has a lens unit 25, an imaging unit 27, a drive unit 29, a communication unit 26, and a camera head control unit 28 as its functions.
  • the CCU 51 includes a communication unit 81, an image processing unit 83, and a control unit 85 as its functions.
  • the camera head 23 and the CCU 51 are connected to each other via a transmission cable 91 so that they can communicate with each other.
  • the lens unit 25 is an optical system provided at a connection portion with the lens barrel 21. Observation light taken from the tip of the lens barrel 21 is guided to the camera head 23 and enters the lens unit 25.
  • the lens unit 25 is configured by combining a plurality of lenses including a zoom lens and a focus lens. The optical characteristics of the lens unit 25 are adjusted so that the observation light is condensed on the light receiving surface of the image pickup device of the image pickup unit 27.
  • the zoom lens and the focus lens are configured such that their positions on the optical axis are movable in order to adjust the magnification and focus of the captured image.
  • the image pickup unit 27 is configured by an image pickup device, and is arranged at the rear stage of the lens unit 25.
  • the observation light that has passed through the lens unit 25 is collected on the light receiving surface of the image sensor, and an image signal corresponding to the observation image is generated by photoelectric conversion.
  • the image signal generated by the imaging unit 27 is provided to the communication unit 26.
  • As the imaging element constituting the imaging unit 27, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor capable of color imaging is used.
  • As the imaging element, for example, an element capable of capturing a high-resolution image of 4K or more may be used.
  • The imaging element constituting the imaging unit 27 may be configured to have a pair of image sensors for acquiring right-eye and left-eye image signals corresponding to 3D display. By performing 3D display, the surgeon 71 can grasp the depth of the living tissue in the surgical site more accurately.
  • When the imaging unit 27 is configured as a multi-plate type, a plurality of lens units 25 are provided corresponding to the respective imaging elements.
  • the imaging unit 27 is not necessarily provided in the camera head 23.
  • the imaging unit 27 may be provided in the barrel 21 immediately after the objective lens.
  • the drive unit 29 is configured by an actuator, and moves the zoom lens and the focus lens of the lens unit 25 by a predetermined distance along the optical axis under the control of the camera head control unit 28. Thereby, the magnification and the focus of the image captured by the imaging unit 27 can be appropriately adjusted.
  • the communication unit 26 includes a communication device for transmitting and receiving various types of information to and from the CCU 51.
  • the communication unit 26 transmits the image signal obtained from the imaging unit 27 as RAW data to the CCU 51 via the transmission cable 91.
  • the image signal is preferably transmitted by optical communication.
  • This is because the surgeon 71 performs the surgery while observing the state of the affected area through the captured images, and for safer and more reliable surgery it is required that the moving image of the surgical site be displayed in real time as much as possible.
  • When optical communication is performed, the communication unit 26 is provided with a photoelectric conversion module that converts an electrical signal into an optical signal.
  • the image signal is converted into an optical signal by the photoelectric conversion module, and then transmitted to the CCU 51 via the transmission cable 91.
  • the communication unit 26 receives a control signal for controlling the driving of the camera head 23 from the CCU 51.
  • The control signal includes information regarding imaging conditions, for example, information designating the frame rate of the captured image, information designating the exposure value at the time of imaging, and/or information designating the magnification and focus of the captured image.
  • the communication unit 26 provides the received control signal to the camera head control unit 28.
  • control signal from the CCU 51 may also be transmitted by optical communication.
  • the communication unit 26 is provided with a photoelectric conversion module that converts an optical signal into an electric signal.
  • the control signal is converted into an electric signal by the photoelectric conversion module and then provided to the camera head control unit 28.
  • The imaging conditions described above, such as the frame rate, exposure value, magnification, and focus, are automatically set by the control unit 85 of the CCU 51 based on the acquired image signal. That is, so-called AE (Auto Exposure), AF (Auto Focus), and AWB (Auto White Balance) functions are mounted on the endoscope 20.
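A minimal sketch of how detection processing could drive AE and AWB follows; the target luminance value and the gray-world assumption are illustrative choices, not taken from the publication.

```python
import numpy as np

def ae_adjustment_stops(bgr, target_mean=118.0):
    """AE sketch: exposure-value correction (in stops) that would move
    the frame's mean luminance toward a target value."""
    y = 0.114 * bgr[..., 0] + 0.587 * bgr[..., 1] + 0.299 * bgr[..., 2]
    return float(np.log2(target_mean / max(float(y.mean()), 1e-3)))

def awb_gray_world(bgr):
    """AWB sketch: gray-world gains applied per BGR channel."""
    means = bgr.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-3)
    return np.clip(bgr.astype(np.float32) * gains, 0, 255).astype(np.uint8)
```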
  • The camera head control unit 28 controls the driving of the camera head 23 based on the control signal received from the CCU 51 via the communication unit 26. For example, the camera head control unit 28 controls the driving of the imaging element of the imaging unit 27 based on the information designating the frame rate of the captured image and/or the information designating the exposure at the time of imaging. As another example, the camera head control unit 28 appropriately moves the zoom lens and the focus lens of the lens unit 25 via the drive unit 29 based on the information designating the magnification and focus of the captured image.
  • the camera head control unit 28 may further have a function of storing information for identifying the lens barrel 21 and the camera head 23.
  • the camera head 23 can be resistant to autoclave sterilization by arranging the configuration of the lens unit 25, the imaging unit 27, etc. in a sealed structure with high airtightness and waterproofness.
  • the communication unit 81 is configured by a communication device for transmitting and receiving various types of information to and from the camera head 23.
  • the communication unit 81 receives an image signal transmitted from the camera head 23 via the transmission cable 91.
  • the image signal can be suitably transmitted by optical communication.
  • the communication unit 81 is provided with a photoelectric conversion module that converts an optical signal into an electric signal.
  • the communication unit 81 provides the image processing unit 83 with the image signal converted into an electrical signal.
  • the communication unit 81 transmits a control signal for controlling the driving of the camera head 23 to the camera head 23.
  • the control signal may also be transmitted by optical communication.
  • the image processing unit 83 performs various types of image processing on the image signal that is RAW data transmitted from the camera head 23.
  • Examples of the image processing include development processing, image quality enhancement processing (band enhancement processing, super-resolution processing, NR (noise reduction) processing, and/or camera shake correction processing), and/or enlargement processing (electronic zoom processing).
  • the image processing unit 83 performs detection processing on the image signal for performing AE, AF, and AWB.
  • the image processing unit 83 is configured by a processor such as a CPU and a GPU, and the above-described image processing and detection processing can be performed by the processor operating according to a predetermined program.
  • When the image processing unit 83 is configured by a plurality of GPUs, the image processing unit 83 appropriately divides the information related to the image signal, and image processing is performed in parallel by the plurality of GPUs.
  • The control unit 85 performs various controls related to the imaging of the surgical site by the endoscope 20 and the display of the captured image. For example, the control unit 85 generates a control signal for controlling the driving of the camera head 23. At this time, when imaging conditions have been input by the user, the control unit 85 generates the control signal based on the user's input. Alternatively, when the endoscope 20 is equipped with the AE function, the AF function, and the AWB function, the control unit 85 appropriately calculates the optimum exposure value, focal length, and white balance according to the result of the detection processing by the image processing unit 83, and generates the control signal.
  • The control unit 85 causes the display device 53 to display an image of the surgical site based on the image signal subjected to image processing by the image processing unit 83. At this time, the control unit 85 recognizes various objects in the surgical site image using various image recognition techniques.
  • For example, by detecting the shape, color, and the like of the edges of objects included in the surgical site image, the control unit 85 can recognize surgical tools such as forceps, a specific body part, bleeding, mist during use of the energy treatment tool 33, and the like.
  • the control unit 85 uses the recognition result to superimpose and display various types of surgery support information on the image of the surgical site. Surgery support information is displayed in a superimposed manner and presented to the operator 71, so that the surgery can be performed more safely and reliably.
  • the transmission cable 91 connecting the camera head 23 and the CCU 51 is an electric signal cable corresponding to electric signal communication, an optical fiber corresponding to optical communication, or a composite cable thereof.
  • communication is performed by wire using the transmission cable 91, but communication between the camera head 23 and the CCU 51 may be performed wirelessly.
  • communication between the two is performed wirelessly, it is not necessary to lay the transmission cable 91 in the operating room, so that the situation where the movement of the medical staff in the operating room is hindered by the transmission cable 91 can be solved.
  • the endoscopic surgery system 10 has been described here as an example, a system to which the technology according to the present disclosure can be applied is not limited to such an example.
  • For example, the technology according to the present disclosure may be applied to a flexible endoscope system for examination or to a microsurgery system.
  • FIG. 3 is a diagram illustrating an example of a screen displayed on the display device 53.
  • On the display device 53, an image 201 captured by the endoscope 20 and an image 202 captured by a sub camera are displayed. Here, the endoscope 20 is the main camera, and a camera different from the main camera is the sub camera.
  • The present technology can be applied to an apparatus that photographs a subject with a plurality of cameras.
  • In the endoscopic surgery system 10, the main camera and a plurality of sub cameras are inserted into the body cavity of the patient 75, and the images from the plurality of cameras are displayed on the display device 53 in a form that is easy for the surgeon 71 to view.
  • the forceps 231, the forceps 232, and the thread 233 are captured in the image 201 captured by the endoscope 20.
  • The image 202 captured by the sub camera is displayed within computer graphics that replicate a mirror (hereinafter referred to as the mirror 211).
  • the screen example shown in FIG. 3 shows a state where the forceps 231 and the thread 233 are reflected on the mirror 211.
  • the forceps 231 is imaged by the sub camera.
  • the imaged forceps 231 is displayed in the mirror 211.
  • The forceps 231 displayed in the mirror 211 are denoted as forceps 231', with a prime. In the following description as well, objects displayed in the mirror 211 are denoted with a prime.
  • The thread 233' is also reflected in the mirror 211.
  • the thread 233 is in a state of being pinched by the forceps 232.
  • the image from the sub-camera can be provided as an image that is easy for the user to visually recognize.
  • Here, a picture replicating a mirror is used, but a picture replicating an actual tool other than a mirror, such as a dental mirror or a loupe, may be used instead. In the following, a tool on which the image of a sub camera is displayed is referred to as a virtual tool: an actual tool is rendered on the screen as a virtual tool, and the image from the sub camera is displayed on that virtual tool.
  • When the virtual tool is the above-described mirror 211 and the angle of the mirror 211 is changed, the image from the sub camera displayed on the mirror also changes following the angle; for example, an image in which the positional relationship between the forceps 231 and the thread 233 has changed is displayed in the mirror 211.
  • When the virtual tool is a loupe and the magnification is changed, for example by an operation that moves the loupe closer to or away from the forceps 232, the image displayed in the loupe is enlarged or reduced accordingly. Note that the image displayed in the loupe in this case can be a part of the image captured by the main camera; that is, when the virtual tool is a loupe, the loupe is superimposed on the image captured by the main camera, and an image obtained by enlarging the part of the main image inside the loupe is displayed, as in the sketch below.
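A minimal sketch of this electronic-zoom behavior for a loupe virtual tool; the parameter names are hypothetical, and the crop assumes the loupe center sits away from the image border.

```python
import cv2

def loupe_view(main_img, center, radius_px, zoom):
    """Loupe sketch: crop a patch of the MAIN image around `center`
    and enlarge it; moving the loupe closer corresponds to a larger
    `zoom`, i.e. a smaller source patch blown up to the same size."""
    cx, cy = center
    r = max(1, int(radius_px / zoom))
    patch = main_img[cy - r:cy + r, cx - r:cx + r]
    return cv2.resize(patch, (2 * radius_px, 2 * radius_px),
                      interpolation=cv2.INTER_LINEAR)
```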
  • In this way, optical tools that exist in the real world, such as dental mirrors and loupes, are replicated as virtual tools, and PinP (Picture-in-Picture) display is performed by superimposing the image (sub image) from the sub camera on them.
  • Since a virtual tool can be operated with the same commands as the corresponding tool in the real world, instructions are easy to convey: a third party (a person other than the surgeon 71) can easily be given an operating instruction, and an operation performed by the third party can easily achieve a satisfactory result. If the third party has used the tool in the real world, operating the virtual tool is easy, and the third party can readily perform the operation the surgeon 71 desires; the third party may even be able to act before the surgeon 71 gives an instruction, so that work such as surgery can proceed more smoothly.
  • The sub camera image can be manipulated by image processing. For example, when the virtual tool is the mirror 211 and the angle of the mirror 211 is changed, the image 202 displayed on the mirror 211 also changes, but this change can be handled by image processing. By using image processing, user operations can be handled without moving the sub camera itself. By not moving the sub camera, contact between the sub camera and organs can be prevented, improving safety.
  • FIG. 4 is a diagram for explaining the attachment position of the sub camera in the endoscope.
  • As shown in FIG. 4, the endoscope 20a, the forceps 35, and the endoscope 20b are inserted into the body cavity of the patient 75. Here, the endoscope 20a is the main camera and the endoscope 20b is the sub camera. In this way, a plurality of endoscopes 20 can be inserted into the body cavity, with one endoscope 20 used as the main camera and another endoscope 20 used as the sub camera.
  • The endoscope 20b serving as the sub camera may have a smaller diameter than the endoscope 20a serving as the main camera. Further, the resolution of the sub camera may be lower than that of the main camera and need not be the same.
  • The main camera and the sub camera may be switched according to an instruction from the surgeon 71. That is, in the case shown in FIG. 4, at a certain point in time the endoscope 20a is the main camera and the endoscope 20b is the sub camera, but a mechanism may be provided so that, when a switching instruction is given, the endoscope 20a becomes the sub camera and the endoscope 20b serves as the main camera.
  • Although FIG. 4 shows the case where there is one sub camera, a plurality of sub cameras may be inserted into the body cavity.
  • FIG. 5 is a diagram for explaining another mounting position of the sub camera in the endoscope.
  • As shown in FIG. 5, the endoscope 20, the forceps 35a, and the forceps 35b are inserted into the body cavity of the patient 75. In this example, two forceps 35 are inserted into the body cavity, and the sub camera 251 is attached to one of them, the forceps 35b.
  • the sub camera 251 is mounted at a position that does not hinder the function of the forceps 35b as a forceps.
  • The sub camera 251 can be a camera that has been approved as a medical camera.
  • an endoscope called a capsule endoscope may be used as the sub camera 251 and attached to the forceps 35b.
  • the forceps 35b may be inserted into the body cavity while holding the sub camera 251.
  • the sub camera 251 may be detachably attached to the forceps 35b, or may be configured as a part of the forceps 35b.
  • the endoscope 20 is used as a main camera, and a camera attached to a surgical instrument other than the endoscope 20 is used as a sub camera.
  • Although FIG. 5 shows a case where there is one sub camera 251, a plurality of sub cameras 251 may be attached to a plurality of surgical instruments and inserted into the body cavity.
  • Also in this case, the main camera and the sub camera may be switched according to an instruction from the surgeon 71. That is, in the case shown in FIG. 5, at a certain point in time the endoscope 20 is the main camera and the sub camera 251 is the sub camera, but a mechanism may be provided so that, when a switching instruction is given by the user, the sub camera 251 functions as the main camera.
  • The present technology can also be applied to a medical robot. As shown in FIG. 6, the medical robot includes an operation unit 281, a main body 282, and a monitor unit 283.
  • the operation unit 281 is a device for operating the main body 282.
  • the main body 282 includes, for example, three arms 291 to 293.
  • the operation unit 281 is operated by the operator 71a to remotely operate the arms 291 to 293 of the main body 282.
  • the surgeon 71a operates the arms 291 to 293 of the main body 282 while looking at the display provided in the operation unit 281.
  • the arms 291 to 293 of the main body 282 are an electric knife, an endoscope, forceps, and the like.
  • the monitor unit 283 is a monitor that is installed near the main body 282 and monitors the state of the operation.
  • the surgeon 71b looks at the monitor unit 283 and supports surgery as necessary.
  • the arm 292 is an arm camera having the same function as the above-described endoscope, and functions as a main camera.
  • the arms 291 and 293 are forceps, a scalpel, and the like.
  • a sub camera 252 and a sub camera 253 are attached to the arm 291 and the arm 293, respectively.
  • The sub cameras 252 and 253 are mounted at positions that do not hinder the functions of the arms 291 and 293, for example their functions as a scalpel or forceps.
  • The sub cameras 252 and 253 may be cameras that have been approved as medical cameras; for example, an endoscope called a capsule endoscope can be used as the sub cameras 252 and 253.
  • the sub cameras 252 and 253 may be detachably attached to the arms 291 and 293, or may be configured as part of the arms 291 and 293.
  • the arm camera of the arm 292 is used as a main camera, and a camera attached to a surgical instrument other than the arm camera is used as a sub camera.
  • Although FIG. 7 shows a case where there are two sub cameras, one sub camera, or three or more sub cameras 252 (253), may be attached to the surgical instruments and inserted into the body cavity.
  • Also in this case, the main camera and the sub camera may be switched according to an instruction from the surgeon 71. That is, in the case shown in FIG. 7, at a certain point in time the arm camera of the arm 292 is the main camera and the sub cameras 252 and 253 are the sub cameras, but a mechanism may be provided so that, when a switching instruction is given, one of the sub cameras 252 and 253 functions as the main camera.
  • FIG. 8 shows the configuration of the image processing unit 83 (FIG. 2) that generates the screen as shown in FIG. 3 and controls the display on the display device 53.
  • the case where the endoscope 20a as the main camera and the endoscope 20b as the sub camera are inserted into the body cavity as shown in FIG. 4 will be described as an example.
  • the image processing unit 83 includes a virtual tool drawing superimposing unit 411 and an image conversion processing unit 412.
  • the virtual tool drawing superimposing unit 411 is supplied with image data from the main camera, in this case, image data from the endoscope 20a.
  • the image conversion processing unit 412 is supplied with image data from the sub camera, in this case, image data from the endoscope 20b.
  • the position sensor 421 is a sensor that detects the position of the endoscope 20a or the endoscope 20b.
  • a sensor using GPS (Global Positioning System) or the like can be used.
  • The position sensor 421 is provided to grasp the positional relationship between the endoscope 20a and the endoscope 20b (the main camera and the sub camera), for example how far apart they are and what angle they form. As long as such information can be obtained, any sensor may be used as the position sensor 421.
  • Note that the position sensor 421 does not necessarily have to be provided.
  • the processing in the image processing unit 83 will be described with reference to FIG.
  • the upper left diagram in FIG. 9 shows an image captured by the endoscope 20a that is the main camera.
  • the upper right diagram in FIG. 9 shows an image captured by the endoscope 20b as a sub camera.
  • the lower part of FIG. 9 shows an image obtained by superimposing an image captured by the endoscope 20b on an image captured by the endoscope 20a.
  • the lower diagram of FIG. 9 is the image shown in FIG. 3, and here, the case where the image described with reference to FIG. 3 is generated will be described as an example.
  • In the image 201, the forceps 231, the forceps 232, and the thread 233 are captured.
  • image data of the image 201 is supplied to the virtual tool drawing superimposing unit 411.
  • In the image 202 as well, the forceps 231, the forceps 232, and the thread 233 are captured.
  • image data of the image 202 is supplied to the image conversion processing unit 412.
  • Both the endoscope 20a and the endoscope 20b image the forceps 231, the forceps 232, and the thread 233, but the resulting images differ because the endoscopes are inserted at different positions.
  • the endoscope 20a and the endoscope 20b are inserted into the body cavity in a positional relationship having an angle of 90 degrees. Therefore, when the image 201 captured by the endoscope 20a is an image captured from the front, the image 202 captured by the endoscope 20b is an image captured from the side.
  • The image conversion processing unit 412 generates, from the image 202 from the endoscope 20b serving as the sub camera, the image to be superimposed on the image 201 from the endoscope 20a serving as the main camera, by cutting out, enlarging, reducing, or otherwise converting it. That is, the image conversion processing unit 412 generates from the image 202 the image to be displayed in the mirror 211.
  • the image conversion processing unit 412 performs processing using position information from the position sensor 421 as necessary when performing processing such as conversion.
  • the virtual tool drawing superimposing unit 411 draws the mirror 211 on the image 201 from the main camera. Then, an image in which the image from the image conversion processing unit 412 is displayed in the drawn mirror 211 is generated. The image data of the image generated by the virtual tool drawing superimposing unit 411 is supplied to the display device 53, and an image as shown in the lower diagram of FIG. 9 is displayed on the display device 53.
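A compact sketch of this division of labor between the image conversion processing unit 412 and the virtual tool drawing superimposing unit 411 follows; the elliptical-mask rendering is an assumed stand-in for the publication's drawing method, and the mirror is assumed to lie fully inside the frame.

```python
import cv2
import numpy as np

def draw_mirror_with_sub_image(main_img, sub_img, center, axes):
    """Unit 412: resize the (already cut-out) sub image to the mirror
    size. Unit 411: draw an elliptical mirror on the main image and
    show the converted sub image on its face."""
    out = main_img.copy()
    ax, ay = axes                       # semi-axes of the mirror ellipse
    face = cv2.resize(sub_img, (2 * ax, 2 * ay))
    mask = np.zeros(out.shape[:2], np.uint8)
    cv2.ellipse(mask, center, axes, 0, 0, 360, 255, -1)
    x0, y0 = center[0] - ax, center[1] - ay
    region = out[y0:y0 + 2 * ay, x0:x0 + 2 * ax]
    inside = mask[y0:y0 + 2 * ay, x0:x0 + 2 * ax] > 0
    region[inside] = face[inside]       # mirror face shows the sub image
    cv2.ellipse(out, center, axes, 0, 0, 360, (200, 200, 200), 3)  # rim
    return out
```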
  • The processing of the image processing unit 83 will be described with reference to FIG. 10, taking the processing during a suturing operation as an example. Processing during other operations is adapted to the operation in question; the processing of the flowchart shown in FIG. 10 is changed as appropriate for operations such as an incision operation, removal of an affected area, or a cleaning operation.
  • In step S101, it is determined whether or not the operation is a suturing operation. If it is determined in step S101 that the operation is not a suturing operation, the process proceeds to step S108. As described above, the processing differs from operation to operation, so when it is determined that the operation is not a suturing operation, it may be determined in sequence whether it is another operation, for example an incision operation or removal of an affected area.
  • The determination as to whether or not the operation is a suturing operation can be made, for example, by recognizing the utterances of the surgeon 71 by voice recognition.
  • For example, the determination in step S101 can be made by determining whether or not the surgeon 71 has uttered a keyword related to suturing, such as "start suturing" or "thread and needle".
  • Alternatively, using the fact that the flow of an operation is patterned by the surgical method, it may be determined from the flow of the operation whether or not the current work is a suturing operation.
  • The determination from the flow of the operation may be performed together with the determination based on voice recognition described above, and the results may be integrated to make the final determination.
  • It is also possible to determine that a suturing operation is in progress by image recognition. For example, since a thread and a needle are used during a suturing operation, it may be determined whether or not a thread or a needle is detected in the image captured by the main camera or the sub camera, and when a thread or a needle is detected, it may be determined that a suturing operation is in progress.
  • The determination in step S101 may also be made by combining voice recognition and image recognition, as in the sketch below, or by other methods not illustrated here.
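A minimal sketch of such a combined determination; the keyword list and the object labels are illustrative assumptions.

```python
SUTURE_KEYWORDS = ("start suturing", "thread and needle")

def is_suturing(utterance, detected_objects):
    """Step S101 sketch: voice recognition (keywords uttered by the
    surgeon) OR image recognition (a thread or needle detected in the
    main or sub image) indicates a suturing operation."""
    by_voice = any(k in utterance.lower() for k in SUTURE_KEYWORDS)
    by_image = bool({"thread", "needle"} & set(detected_objects))
    return by_voice or by_image
```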
  • If it is determined in step S101 that the operation is a suturing operation, the process proceeds to step S102.
  • In step S102, default values are set. The default values to be set will be described with reference to FIG. 11.
  • The mirror 211, which is a virtual tool, is displayed on the side where the sub camera (endoscope 20b) is present; for example, when the sub camera is on the right side within the screen, the mirror 211 is also displayed on the right side of the screen. First, in this way, the side on which the image of the sub camera is displayed is set as a default value.
  • The position where the mirror 211 is displayed is set to a position away from the forceps on the side close to the sub camera (endoscope 20b) by the length of the tip portion of the forceps. That is, as shown in FIG. 11, the mirror 211 is displayed at a position away from the forceps 232 by the length of the forceps tip (length d1).
  • The length d1 between the forceps 232 and the mirror 211 may be measured between the center of the forceps 232 and the center of the mirror 211, or between the side surface of the forceps 232 (the side closer to the mirror 211) and the side surface of the mirror 211 (the side closer to the forceps 232). That is, which points on the forceps 232 and the mirror 211 are to be separated by the length of the forceps tip may be defined in any way.
  • In this way, the display position of the mirror 211, in other words its distance from the forceps, is set as a default value.
  • the size of the mirror 211 is set as a default value.
  • The size of the mirror 211 can be set with reference to the size of the tip portion of the forceps. When the shape of the mirror 211 is a circle, it is a circular shape whose radius is the size of the forceps tip (length d1 in FIG. 11). When the mirror 211 has an elliptical shape as shown in FIG. 11, it is an ellipse whose major axis or minor axis is twice the size (length d1) of the forceps tip.
  • The display position and size of the virtual tool mirror 211 are set with reference to the size of the forceps tip because the forceps are a tool familiar to the surgeon 71, and using the size of such a familiar tool as a reference makes the size of the virtual tool easy to grasp.
  • the forceps is described as an example, but the size of the main subject (for example, forceps) in the main image can be used as a reference.
  • Further, the angle of the mirror 211 is set to 45 degrees as a default value.
  • When the angle of the mirror 211 is 90 degrees, only the side surface of the mirror 211 can be seen, and the mirror 211 is positioned perpendicular to the image 201; in other words, the mirror surface does not face the viewer. When the angle of the mirror 211 is 0 degrees, the mirror surface faces the viewer and is parallel to the image 201; in such a state, the image from the sub camera is not reflected in the mirror 211. Therefore, 45 degrees, which lies between 0 degrees and 90 degrees, is set as the default value of the angle of the mirror 211.
  • Although the default value here is 45 degrees, the default value of the angle may be set to something other than 45 degrees depending on the positional relationship between the endoscope 20a and the endoscope 20b (the main camera and the sub camera). In other words, the default value of the angle may be a variable value set according to the situation at the time, rather than a fixed value.
  • For example, the default value may be set to the angle at which the side of the forceps 232 facing the sub camera can be shown, in other words, the angle at which an image viewed from the direction perpendicular to the surface of the forceps 232 captured by the main camera can be shown in the mirror.
  • the default value may be set by learning.
  • Although the position, size, and the like of the mirror 211 are set as defaults, the position, size, angle, and the like of the mirror 211 may be changed by the surgeon 71 after they are set. When the surgeon 71 changes the display position, size, angle, and the like of the mirror 211, the history may be recorded (learned), and frequently used positions, sizes, angles, and the like may be set as the default values.
  • Default values may also be set according to the surgical sequence, that is, according to what kind of operation is being performed, for example an operation for removing an affected area or an operation for cutting a bone, and what kind of work process is being performed, for example an incision operation or a suturing operation. A rough sketch of such default settings follows.
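The default-setting step might be sketched as follows; the pixel units, the side convention, and the ellipse semi-axes are assumptions based only on the description above.

```python
def default_mirror_settings(forceps_tip_len, forceps_pos, sub_camera_side):
    """Step S102 sketch: derive the mirror's default display position,
    size, and angle from the forceps tip length d1 (in pixels)."""
    d1 = forceps_tip_len
    # Display on the sub camera side, one tip length away from the forceps.
    dx = d1 if sub_camera_side == "right" else -d1
    position = (forceps_pos[0] + dx, forceps_pos[1])
    # Ellipse whose major axis is twice the tip length (semi-axes here).
    axes = (d1, max(1, d1 // 2))
    angle_deg = 45.0  # default between 0 (face-on) and 90 (edge-on)
    return {"position": position, "axes": axes, "angle_deg": angle_deg}
```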
  • When the default values have been set in step S102, the process proceeds to step S103. In step S103, the sub camera image is converted; in other words, the image to be displayed on the mirror surface of the virtual tool (mirror 211) is generated.
  • FIG. 12 is a diagram for explaining the positional relationship between the main camera, the sub camera, and the mirror 211.
  • In FIG. 12, the imaging surface of the main camera is denoted as the main camera 501, and the imaging surface of the sub camera as the sub camera 502. FIG. 12 shows a case where the main camera 501 and the sub camera 502 are installed at an angle of 90 degrees or less.
  • the main camera 501 and the sub camera 502 image the object 511.
  • the main camera 501 captures an image of the surface side of the object 511 including the x axis and the y axis.
  • the sub camera 502 captures an image of the surface of the object 511 formed by the y ′ axis and the z ′ axis.
  • the main camera 501 captures an image 201 as illustrated in FIG. 12 by capturing the object 511.
  • a mirror 211 is superimposed on the image 201 for explanation.
  • A point P in the mirror 211 corresponds to a point Q in the image 202 captured by the sub camera 502.
  • The virtual wall xw is a wall created by the object 511: the edge of the object 511 is extracted, and the virtual wall is taken as a plane passing through that edge. The point P of the mirror 211 reflects the point R on the virtual wall xw, and the point R (seen at point P) corresponds to the point Q in the image 202 captured by the sub camera 502.
  • the center point of the mirror 211 is a point C, and the x-axis coordinate is described as xc.
  • the x-axis coordinate of the point P to be obtained in the mirror 211 is described as xp.
  • The distance between the point C of the mirror 211 and the main camera 501 is denoted as the distance dxc, and the distance between the point P of the mirror 211 and the main camera 501 as the distance dxp.
  • The angle formed by the line connecting the point P on the mirror 211 and the point R on the virtual plane xw with the perpendicular to the mirror 211 is defined as the angle θ. The main camera 501 and the sub camera 502 are arranged at positions satisfying the relationship of the angle θ.
  • the x-axis coordinate of the point R on the virtual plane xw is described as xw.
  • The coordinate (z'-axis coordinate) of the point Q in the image 202 of the sub camera 502 corresponding to the point P on the mirror 211 is computed from these quantities. In particular, the distance da between the point R on the virtual plane xw and the point P of the mirror 211 is obtained by equation (1).
  • Based on these relationships, the portion of the image 202 captured by the sub camera 502 that is to be displayed in the mirror 211 is specified and cut out.
  • When enlarging or reducing the image in the mirror 211, the mirror 211 can be treated as a convex mirror, and essentially the same processing as described above can be performed by changing the perpendicular at each point of the surface.
  • In addition, image signal processing such as enlargement, reduction, brightness adjustment, and edge enhancement may be performed. Since the mirror 211 is a virtual tool, processing that cannot be performed with a real mirror can be applied as image processing; a geometric sketch of the mapping follows.
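Since the publication's equations are not reproduced above, the following is only a generic reflection-geometry sketch of the step S103 mapping; the camera poses, the wall plane, and the `project_to_sub` projection function are all assumed inputs, not the patent's formulation.

```python
import numpy as np

def reflect(d, n):
    """Reflect a unit direction d about a unit mirror normal n."""
    return d - 2.0 * np.dot(d, n) * n

def mirror_point_to_sub_pixel(p, cam_main, mirror_normal, wall_x,
                              project_to_sub):
    """Sketch: the view ray from the main camera to mirror point P is
    reflected about the mirror normal and intersected with the virtual
    wall plane x = wall_x created by the object's edge; the hit point R
    is then projected into the sub camera image to find pixel Q.
    `project_to_sub` is a hypothetical projection function."""
    d = p - cam_main
    d = d / np.linalg.norm(d)
    r = reflect(d, mirror_normal)
    if abs(r[0]) < 1e-9:
        return None                      # ray parallel to the wall
    t = (wall_x - p[0]) / r[0]
    if t <= 0:
        return None                      # wall behind the mirror point
    hit = p + t * r                      # point R on the virtual wall
    return project_to_sub(hit)           # pixel Q in the sub image
```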
  • When the conversion is completed, the process proceeds to step S104.
  • step S104 the virtual tool drawing / superimposing unit 411 draws and superimposes the virtual tool (mirror 211) on the image 201 captured by the main camera. Further, the virtual tool drawing superimposing unit 411 superimposes the image generated by the image conversion processing unit 412 in the process of step S103 on the drawn virtual tool.
  • a screen as shown in the lower diagram of FIG. 9 (FIG. 3) is provided to the operator 71.
  • In step S105, it is determined whether or not a control value has been input. For example, it is determined that a control value has been input when an instruction such as "close the mirror", "turn off the mirror", or "end suturing" is issued by the surgeon 71 by voice input.
  • step S105 If it is determined in step S105 that no control value has been input, the process returns to step S105, and the subsequent processes are repeated. That is, in this case, processing such as conversion of the image of the sub camera and drawing of the virtual tool based on the control value set at that time is continued.
  • If it is determined in step S105 that a control value has been input, the process proceeds to step S106, where it is determined whether or not suturing has ended. This determination is made by checking whether the control value input in step S105 is a control value indicating the end of suturing; for example, when control values are input by voice, it is made by determining whether or not a keyword indicating the end of suturing, such as "end suturing", has been uttered.
  • If it is determined in step S106 that suturing has not ended, the process proceeds to step S107.
  • In step S107, a change value is set based on the input control value. After the change value is set, the process returns to step S103, and the subsequent processing is repeated based on the change value.
  • step S106 determines whether the end of sewing has been instructed. If it is determined in step S106 that the end of sewing has been instructed, the process proceeds to step S108. In step S108, there is a case where it is determined in step S101 that the sewing operation is not performed.
  • step S108 when it is determined that the end of the operation is not instructed, the process is returned to step S101, and the subsequent processes are repeated. On the other hand, if it is determined in step S108 that the end of the operation has been instructed, the process based on the flowchart shown in FIG. 10 is ended.
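Read as code, the flow of steps S101 to S108 is roughly the loop sketched below. The `io` object and the two placeholder functions are purely hypothetical stand-ins for the voice input, camera capture, and drawing described above; the sketch only mirrors the branch structure of the flowchart in FIG. 10.

```python
def convert_for_mirror(sub_image, params):
    """Placeholder for the image conversion of step S103."""
    return sub_image

def draw_virtual_tool(main_image, converted):
    """Placeholder for drawing the mirror 211 and superimposing (step S104)."""
    return main_image

def virtual_mirror_loop(io):
    """Hypothetical control flow of FIG. 10 (steps S101 to S108)."""
    while True:
        if io.suturing_started():                          # S101
            params = io.initial_control_values()           # S102
            while True:
                converted = convert_for_mirror(io.capture_sub(), params)     # S103
                io.display(draw_virtual_tool(io.capture_main(), converted))  # S104
                if io.control_value_entered():             # S105
                    if io.end_of_suturing_instructed():    # S106
                        break
                    params = io.change_values()            # S107
        if io.end_of_operation_instructed():               # S108
            break
```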
  • In this way, the tool used by the surgeon 71 in the real world is superimposed on the image from the main camera as a virtual tool, and the image from the sub camera is superimposed on that virtual tool.
  • Consequently, the image from the sub camera can be provided in a manner that is easy for the operator 71 to view.
  • In the above description, the mirror 211 has been described as an example of the virtual tool, but the virtual tool may be something other than a mirror.
  • For example, the virtual tool may be a loupe, and an enlarged image may be displayed inside the loupe.
  • The virtual tool may also be a spotlight; for example, part of the image from the main camera or the image from the sub camera may be displayed brightly, as if illuminated. Displaying a spotlight as a virtual tool is effective when the brightness is to be controlled only partially.
  • The image displayed brightly as if illuminated by the spotlight in this case can be a part of the image captured by the main camera. That is, when the virtual tool is a spotlight, the spotlight is superimposed on the image captured by the main camera, and the user is presented with an image in which the portion of the main image illuminated by the spotlight is displayed brightly.
  • The virtual tool may also be a lens filter for special light, with a special light image displayed inside the lens filter, so that an image multiplexing normal light and special light is realized. In this way, images with different characteristics can coexist naturally, and the location at which the image is to be switched can be designated easily.
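For a variant such as the loupe, the conversion reduces to magnifying the region under the virtual tool. The following is a minimal NumPy sketch using nearest-neighbour sampling in a circular window; the function name and parameters are illustrative assumptions rather than anything specified in the description.

```python
import numpy as np

def apply_loupe(main_img, center, radius, zoom=2.0):
    """Virtual loupe: re-display the region around `center` magnified by
    `zoom` inside a circular window (nearest-neighbour sampling)."""
    h, w = main_img.shape[:2]
    out = main_img.copy()
    cy, cx = center
    ys, xs = np.mgrid[-radius:radius, -radius:radius]
    mask = ys**2 + xs**2 <= radius**2
    src_y = np.clip((cy + ys / zoom).astype(int), 0, h - 1)
    src_x = np.clip((cx + xs / zoom).astype(int), 0, w - 1)
    dst_y = np.clip(cy + ys, 0, h - 1)
    dst_x = np.clip(cx + xs, 0, w - 1)
    out[dst_y[mask], dst_x[mask]] = main_img[src_y[mask], src_x[mask]]
    return out

frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
magnified = apply_loupe(frame, center=(540, 960), radius=150, zoom=2.0)
```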
  • FIG. 13 shows another screen example displayed on the display device 53.
  • In FIG. 13, the sub camera 502-1 and the sub camera 502-2 are captured by the main camera.
  • In such a case, the image captured by each sub camera is displayed in the vicinity of that sub camera as it appears in the image from the main camera.
  • That is, since the sub camera 502-1 is captured by the main camera, the image 202-1 captured by the sub camera 502-1 is displayed in its vicinity (to the left of the sub camera 502-1 in FIG. 13). Likewise, since the sub camera 502-2 is captured by the main camera, the image 202-2 captured by the sub camera 502-2 is displayed in its vicinity (below the sub camera 502-2 in FIG. 13).
  • the sub camera image may be displayed superimposed on the main camera image.
  • When a sub camera is not captured by the main camera, a picture of that sub camera may be drawn by computer graphics, and the image captured by that sub camera may be displayed in its vicinity.
  • The position at which the sub camera drawn by computer graphics is displayed reflects the positional relationship between the main camera and the sub camera and the positional relationship between the sub cameras. It is therefore assumed that information on the positions of the main camera and the sub cameras is acquired.
  • For example, a position sensor 421 may be provided, and the information on the positions of the main camera and the sub cameras acquired from the position sensor 421. Alternatively, in the case of a robot such as that shown in FIGS. 6 and 7, the positional relationship between the arms 291 to 293 can be grasped in advance from the arm position control information and the angle-of-view information (zoom magnification and the like) of each camera, so such known information may also be used.
  • When a sub camera is outside the range captured by the main camera, the frame of that sub camera's image may be drawn with a dotted line or the like to indicate that the sub camera is outside the range captured by the main camera.
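A minimal sketch of this placement logic is given below: the sub camera's 3D position (from the position sensor 421 or from arm control information) is projected into the main camera image with a pinhole model, and the thumbnail is placed just beside the projected point; an `in_view` flag signals the out-of-range case that the description suggests marking with a dotted frame. The projection model, focal length, and names are assumptions for illustration.

```python
import numpy as np

def project(point_cam, f, cx, cy):
    """Pinhole projection of a 3D point (main-camera coordinates, z forward)."""
    x, y, z = point_cam
    return np.array([cx + f * x / z, cy + f * y / z]), z

def place_sub_image(main_img, sub_img, sub_cam_pos, f=1400.0, offset=20):
    """Display the sub camera's image near where that sub camera appears
    in the main image; returns the frame and whether the camera is in view."""
    h, w = main_img.shape[:2]
    (u, v), depth = project(sub_cam_pos, f, w / 2.0, h / 2.0)
    in_view = depth > 0 and 0 <= u < w and 0 <= v < h
    th, tw = sub_img.shape[:2]
    y0 = int(np.clip(v + offset, 0, h - th))   # sit beside, not on, the camera
    x0 = int(np.clip(u + offset, 0, w - tw))
    main_img[y0:y0 + th, x0:x0 + tw] = sub_img
    return main_img, in_view  # in_view == False -> draw a dotted frame instead

main_img = np.zeros((1080, 1920, 3), dtype=np.uint8)
sub_img = np.full((180, 240, 3), 200, dtype=np.uint8)
frame, in_view = place_sub_image(main_img, sub_img,
                                 sub_cam_pos=np.array([0.05, 0.02, 0.30]))
```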
  • FIG. 14 shows another display example.
  • The display example shown in FIG. 14 is basically the same as that shown in FIG. 13: when the forceps 35b and the arm 291 are imaged by the main camera, the images from the sub cameras attached to the forceps 35b and to the arm 291 are displayed in the vicinity of the forceps 35b and the arm 291.
  • In FIG. 14, the arm 291 and the arm 294 are imaged by the camera of the arm 292, which serves as the main camera.
  • the image 202-1 captured by the sub camera 252 attached to the arm 291 is displayed in the vicinity of the arm 291.
  • an image 202-2 captured by a sub camera (not shown) attached to the arm 294 is displayed in the vicinity of the arm 294.
  • On the other hand, the arm 293 is not imaged by the camera of the arm 292, which is the main camera. As described above, the arm 293, which is not captured by the main camera, is drawn by computer graphics, and the image 202-3 captured by the sub camera 253 attached to the arm 293 is displayed in the vicinity of the drawn arm 293.
  • When depth information is available, the display may take the depth into account; for example, an image captured by a sub camera located farther away in the depth direction may be displayed smaller, as in the sketch after this list.
  • Likewise, when information such as the depth position of an arm is available, a display using that information may be performed; for example, the image from the sub camera may be displayed at the depth position of the arm.
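As one way to realize such depth-dependent display, the thumbnail can simply be shrunk in proportion to the sub camera's depth, as in the short sketch below (nearest-neighbour resize; the reference depth and minimum scale are illustrative assumptions).

```python
import numpy as np

def scale_by_depth(sub_img, depth, ref_depth=0.2, min_scale=0.3):
    """Display sub cameras that are farther away in depth with smaller images."""
    s = float(np.clip(ref_depth / depth, min_scale, 1.0))
    h, w = sub_img.shape[:2]
    rows = (np.arange(int(h * s)) / s).astype(int)  # nearest-neighbour resize
    cols = (np.arange(int(w * s)) / s).astype(int)
    return sub_img[rows][:, cols]

thumb = np.full((180, 240, 3), 200, dtype=np.uint8)
small = scale_by_depth(thumb, depth=0.6)   # farther away -> roughly 1/3 size
```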
  • In the above description, as in FIG. 4, the case where the main camera is the endoscope 20a and the sub camera is an endoscope 20b different from the endoscope 20a has been described as an example.
  • However, the scope of application of the present technology is not limited to cases where the main camera and the sub camera are mounted on different surgical instruments.
  • the present technology described above can be applied even when the main camera and the sub camera are mounted on the same surgical instrument.
  • the endoscope 601 includes a main camera 601a at the tip thereof.
  • the endoscope 601 includes a sub camera 602a and a sub camera 602b in a part of a housing.
  • In this way, the endoscope 601 may be configured to include both the main camera 601a and the sub cameras 602.
  • That is, the present technology can be applied even when the main camera 601a and the sub cameras 602 are provided in a single surgical instrument such as the endoscope 601.
  • the series of processes described above can be executed by hardware or can be executed by software.
  • When the series of processes is executed by software, a program constituting the software is installed in a computer.
  • Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions when various programs are installed.
  • FIG. 16 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
  • In the computer, a CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are connected to one another by a bus 1004.
  • An input / output interface 1005 is further connected to the bus 1004.
  • An input unit 1006, an output unit 1007, a storage unit 1008, a communication unit 1009, and a drive 1010 are connected to the input / output interface 1005.
  • the input unit 1006 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 1007 includes a display, a speaker, and the like.
  • the storage unit 1008 includes a hard disk, a nonvolatile memory, and the like.
  • the communication unit 1009 includes a network interface.
  • the drive 1010 drives a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer configured as described above, the CPU 1001 loads, for example, the program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executes it, whereby the series of processes described above is performed.
  • the program executed by the computer (CPU 1001) can be provided by being recorded on the removable medium 1011 as a package medium, for example.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the storage unit 1008 via the input / output interface 1005 by attaching the removable medium 1011 to the drive 1010. Further, the program can be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the storage unit 1008. In addition, the program can be installed in advance in the ROM 1002 or the storage unit 1008.
  • The program executed by the computer may be a program in which processing is performed in time series in the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
  • In this specification, the term "system" represents an entire apparatus composed of a plurality of apparatuses.
  • Note that the present technology can also take the following configurations.
  • (1) A medical image processing apparatus including: a conversion unit that uses one of images captured by a plurality of imaging units as a main image and another image as a sub image, and converts the sub image into a superimposition image; and a superimposition unit that superimposes the converted image at a predetermined position in the main image.
  • (2) The medical image processing apparatus according to (1), in which, when a display position is moved, the conversion unit converts the sub image into a sub image corresponding to the movement.
  • (3) The medical image processing apparatus according to (1) or (2), in which the sub image is displayed at a position satisfying the same positional relationship as the positional relationship between the imaging unit that captured the main image and the imaging unit that captured the sub image.
  • (5) The medical image processing apparatus according to any one of (1) to (4), in which the plurality of imaging units are endoscopes.
  • (6) The medical image processing apparatus according to any one of (1) to (5), in which, among the plurality of imaging units, the imaging unit that captures the sub image is provided in a surgical instrument.
  • (7) The medical image processing apparatus according to any one of (1) to (5), in which, among the plurality of imaging units, the imaging unit that captures the sub image is provided on an arm.
  • (8) A medical image processing apparatus including a superimposition unit that uses one of images captured by a plurality of imaging units as a main image and another image as a sub image, and superimposes the sub image on the main image, in which an imaging unit capturing the sub image is displayed in the main image and the sub image is displayed in the vicinity of that imaging unit.
  • (9) The medical image processing apparatus according to (8), in which the imaging unit that captures the sub image is an imaging unit that is captured by the imaging unit that captures the main image.
  • (10) The medical image processing apparatus according to (8) or (9), in which the imaging unit that captures the sub image is displayed as a picture imitating the surgical instrument provided with that imaging unit, and the picture imitating the surgical instrument is displayed at a position satisfying the same positional relationship as the positional relationship between the imaging unit that captured the main image and the imaging unit that captured the sub image.
  • (11) A medical image processing method including the steps of: using one of images captured by a plurality of imaging units as a main image and another image as a sub image; converting the sub image into a superimposition image; and superimposing the converted image at a predetermined position in the main image.
  • (12) A medical image processing method including the steps of: using one of images captured by a plurality of imaging units as a main image and another image as a sub image; superimposing the sub image on the main image; displaying, in the main image, an imaging unit capturing the sub image; and displaying the sub image in the vicinity of that imaging unit.
  • (13) A program for causing a computer to execute processing including the steps of: using one of images captured by a plurality of imaging units as a main image and another image as a sub image; converting the sub image into a superimposition image; and superimposing the converted image at a predetermined position in the main image.
  • (14) A program for causing a computer to execute processing including the steps of: using one of images captured by a plurality of imaging units as a main image and another image as a sub image; superimposing the sub image on the main image; and displaying, in the main image, an imaging unit capturing the sub image and displaying the sub image in the vicinity of that imaging unit.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)

Abstract

The present technology relates to a medical image processing device, a medical image processing method, and a program for enabling a plurality of images to be displayed in an easily viewable manner to a user. The medical image processing device is provided with: a conversion unit which sets one of a plurality of images, respectively captured by a plurality of imaging units, as a main image and sets another of the images as a sub-image, and converts the sub-image into a superimposition image; and a superimposition unit which superimposes the converted image at a predetermined position in the main image. When a display position is moved, the conversion unit converts the sub-image to a sub-image corresponding to the movement. The sub-image display position and display area are set with reference to the size of a major subject in the main image. The present technology can be applied in an endoscope.

Description

Medical image processing apparatus, medical image processing method, and program
 The present technology relates to a medical image processing apparatus, a medical image processing method, and a program, and relates, for example, to a medical image processing apparatus, a medical image processing method, and a program that display images from a plurality of cameras in a manner that is easy for the user to view.

 Patent Document 1 discloses the positional relationship between an endoscope, instruments, and the patient during endoscopic surgery. An image of the inside of the body cavity is obtained by inserting a tube called a sheath SH into a through-hole made in advance in the body wall to create a path, and inserting a camera (generally called an endoscope) through that path.

 This type of endoscopic surgery has the positive effects of improving QoL (quality of life), since the small wounds on the body mean a short recovery period for the patient, and of reducing the burden on the patient through shorter hospitalization. On the other hand, the treatment tool is inserted through a small hole and the endoscope through a hole made elsewhere, and treatment is performed while referring to the captured image of the body cavity; the operation is therefore like poking with the tips of chopsticks, and skilled technique is required.

Patent Document 1: Japanese Patent Application Laid-Open No. 7-351 (JP-A-7-351)
 As described above, endoscopic surgery requires skilled technique. To reduce the burden on the surgeon, it is conceivable, for example, to photograph the affected area from various angles with a plurality of cameras and present those images to the surgeon; with more information available, the surgeon's burden in endoscopic surgery could be reduced.

 However, while providing a plurality of images can increase the amount of information given to the surgeon, depending on how they are provided it may not reduce the surgeon's burden.

 For example, when a main image and a sub image are provided to the surgeon on separate monitors, the surgeon must take his or her eyes off the monitor showing the main image in order to look at the sub image. Such movement of the line of sight can increase the burden on the surgeon.

 It is therefore desirable to provide a plurality of pieces of information (images) to the user without increasing the user's burden.

 The present technology has been made in view of such a situation, and makes it possible to provide a plurality of pieces of information in a state that is easy for the user to view.
 A first medical image processing apparatus according to one aspect of the present technology includes: a conversion unit that uses one of images captured by a plurality of imaging units as a main image and another image as a sub image, and converts the sub image into a superimposition image; and a superimposition unit that superimposes the converted image at a predetermined position in the main image.

 A second medical image processing apparatus according to one aspect of the present technology includes a superimposition unit that uses one of images captured by a plurality of imaging units as a main image and another image as a sub image, and superimposes the sub image on the main image; an imaging unit capturing the sub image is displayed in the main image, and the sub image is displayed in the vicinity of that imaging unit.

 A first medical image processing method according to one aspect of the present technology includes the steps of: using one of images captured by a plurality of imaging units as a main image and another image as a sub image; converting the sub image into a superimposition image; and superimposing the converted image at a predetermined position in the main image.

 A second medical image processing method according to one aspect of the present technology includes the steps of: using one of images captured by a plurality of imaging units as a main image and another image as a sub image; superimposing the sub image on the main image; displaying, in the main image, an imaging unit capturing the sub image; and displaying the sub image in the vicinity of that imaging unit.

 A first program according to one aspect of the present technology causes a computer to execute processing including the steps of: using one of images captured by a plurality of imaging units as a main image and another image as a sub image; converting the sub image into a superimposition image; and superimposing the converted image at a predetermined position in the main image.

 A second program according to one aspect of the present technology causes a computer to execute processing including the steps of: using one of images captured by a plurality of imaging units as a main image and another image as a sub image; superimposing the sub image on the main image; displaying, in the main image, an imaging unit capturing the sub image; and displaying the sub image in the vicinity of that imaging unit.

 In the first medical image processing apparatus, medical image processing method, and program according to one aspect of the present technology, one of the images captured by the plurality of imaging units is used as the main image, another image is used as the sub image, the sub image is converted into a superimposition image, and the converted image is superimposed at a predetermined position in the main image.

 In the second medical image processing apparatus, medical image processing method, and program according to one aspect of the present technology, one of the images captured by the plurality of imaging units is used as the main image, another image is used as the sub image, the sub image is superimposed on the main image, an imaging unit capturing the sub image is displayed in the main image, and the sub image is displayed in the vicinity of that imaging unit.

 The medical image processing apparatus may be an independent apparatus or an internal block constituting a single apparatus.

 The program can be provided by being transmitted via a transmission medium or by being recorded on a recording medium.

 According to one aspect of the present technology, a plurality of pieces of information can be provided in a state that is easy for the user to view.

 The effects described here are not necessarily limiting, and any of the effects described in the present disclosure may be obtained.
 FIG. 1 is a diagram showing the configuration of an embodiment of an endoscopic surgery system to which the present technology is applied. FIG. 2 is a block diagram showing an example of the functional configuration of a camera head and a CCU. FIG. 3 is a diagram showing a screen example. FIGS. 4 and 5 are diagrams for explaining attachment positions of a sub camera. FIG. 6 is a diagram for explaining a medical robot. FIG. 7 is a diagram for explaining attachment positions of a sub camera. FIG. 8 is a diagram for explaining the configuration of an image processing unit. FIG. 9 is a diagram for explaining processing of the image processing unit. FIG. 10 is a flowchart for explaining processing of the image processing unit. FIGS. 11 and 12 are diagrams for explaining image processing. FIGS. 13 and 14 are diagrams showing other screen examples. FIG. 15 is a diagram for explaining other attachment positions of a sub camera. FIG. 16 is a diagram for explaining a recording medium.
 Hereinafter, modes for carrying out the present technology (hereinafter referred to as embodiments) will be described.
<Configuration of endoscope system>
 The technology according to the present disclosure can be applied to various products. For example, it may be applied to an endoscopic surgery system. Although an endoscopic surgery system is described here as an example, the present technology can also be applied to surgical operation systems, microsurgery systems, and the like.
 FIG. 1 is a diagram showing an example of the schematic configuration of an endoscopic surgery system 10 to which the technology according to the present disclosure can be applied. FIG. 1 illustrates an operator (doctor) 71 performing surgery on a patient 75 on a patient bed 73 using the endoscopic surgery system 10. As illustrated, the endoscopic surgery system 10 includes an endoscope 20, other surgical tools 30, a support arm device 40 that supports the endoscope 20, and a cart 50 on which various devices for endoscopic surgery are mounted.

 In endoscopic surgery, instead of cutting open the abdominal wall, a plurality of cylindrical puncture instruments called trocars 37a to 37d are inserted into the abdominal wall. Then, the lens barrel 21 of the endoscope 20 and the other surgical tools 30 are inserted into the body cavity of the patient 75 through the trocars 37a to 37d. In the illustrated example, an insufflation tube 31, an energy treatment tool 33, and forceps 35 are inserted into the body cavity of the patient 75 as the other surgical tools 30. The energy treatment tool 33 is a treatment tool that performs incision and dissection of tissue, sealing of blood vessels, and the like using high-frequency current or ultrasonic vibration. The illustrated surgical tools 30 are merely an example, however, and various surgical tools generally used in endoscopic surgery, such as tweezers and retractors, may be used as the surgical tools 30.

 An image of the surgical site in the body cavity of the patient 75 captured by the endoscope 20 is displayed on a display device 53. While viewing the image of the surgical site displayed on the display device 53 in real time, the operator 71 uses the energy treatment tool 33 and the forceps 35 to perform treatment such as excising the affected area. The insufflation tube 31, the energy treatment tool 33, and the forceps 35 are supported by the operator 71, an assistant, or the like during surgery.
 (Support arm device)
 The support arm device 40 includes an arm portion 43 extending from a base portion 41. In the illustrated example, the arm portion 43 includes joint portions 45a, 45b, and 45c and links 47a and 47b, and is driven under control from an arm control device 57. The endoscope 20 is supported by the arm portion 43, and its position and posture are controlled. Stable fixing of the position of the endoscope 20 can thereby be realized.
 (Endoscope)
 The endoscope 20 includes the lens barrel 21, a region of which extends a predetermined length from the distal end and is inserted into the body cavity of the patient 75, and a camera head 23 connected to the proximal end of the lens barrel 21. The illustrated example shows the endoscope 20 configured as a so-called rigid endoscope having a rigid lens barrel 21, but the endoscope 20 may also be configured as a so-called flexible endoscope having a flexible lens barrel 21.

 An opening into which an objective lens is fitted is provided at the distal end of the lens barrel 21. A light source device 55 is connected to the endoscope 20; light generated by the light source device 55 is guided to the distal end of the lens barrel by a light guide extending inside the lens barrel 21, and is emitted through the objective lens toward the observation target in the body cavity of the patient 75. The endoscope 20 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
 An optical system and an imaging element are provided inside the camera head 23, and reflected light (observation light) from the observation target is focused on the imaging element by the optical system. The observation light is photoelectrically converted by the imaging element to generate an electrical signal corresponding to the observation light, that is, an image signal corresponding to the observation image. The image signal is transmitted as RAW data to a camera control unit (CCU) 51. The camera head 23 also has a function of adjusting the magnification and the focal length by appropriately driving its optical system.

 Note that the camera head 23 may be provided with a plurality of imaging elements, for example to support stereoscopic viewing (3D display). In that case, a plurality of relay optical systems are provided inside the lens barrel 21 to guide the observation light to each of the plurality of imaging elements.
 (Various devices mounted on the cart)
 The CCU 51 includes a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 20 and the display device 53. Specifically, the CCU 51 performs, on the image signal received from the camera head 23, various kinds of image processing for displaying an image based on that image signal, such as development processing (demosaic processing). The CCU 51 provides the image signal thus processed to the display device 53. The CCU 51 also transmits a control signal to the camera head 23 to control its driving. The control signal can include information on imaging conditions such as magnification and focal length.
 The display device 53 displays, under control of the CCU 51, an image based on the image signal processed by the CCU 51. When the endoscope 20 supports high-resolution imaging such as 4K (3840 horizontal pixels × 2160 vertical pixels) or 8K (7680 horizontal pixels × 4320 vertical pixels), and/or supports 3D display, a display device 53 capable of high-resolution display and/or 3D display can be used accordingly. For 4K or 8K high-resolution imaging, using a display device 53 of 55 inches or more provides a more immersive experience. A plurality of display devices 53 with different resolutions and sizes may also be provided depending on the application.

 The light source device 55 includes a light source such as an LED (light emitting diode), and supplies the endoscope 20 with irradiation light for photographing the surgical site.
 The arm control device 57 includes a processor such as a CPU, and operates according to a predetermined program to control the driving of the arm portion 43 of the support arm device 40 in accordance with a predetermined control method.

 An input device 59 is an input interface to the endoscopic surgery system 10. The user can input various kinds of information and instructions to the endoscopic surgery system 10 via the input device 59. For example, the user inputs various kinds of information about the surgery, such as the patient's physical information and information about the surgical procedure, via the input device 59. The user also inputs, for example, an instruction to drive the arm portion 43, an instruction to change the imaging conditions of the endoscope 20 (type of irradiation light, magnification, focal length, and the like), and an instruction to drive the energy treatment tool 33.

 The type of the input device 59 is not limited, and the input device 59 may be any of various known input devices. As the input device 59, for example, a mouse, a keyboard, a touch panel, switches, a foot switch 69, and/or a lever can be applied. When a touch panel is used as the input device 59, the touch panel may be provided on the display surface of the display device 53.

 Alternatively, the input device 59 may be a device worn by the user, such as a glasses-type wearable device or an HMD (Head Mounted Display), in which case various inputs are made according to the user's gestures and line of sight detected by these devices. The input device 59 may also include a camera capable of detecting the user's movements, with various inputs made according to the user's gestures and line of sight detected from the video captured by that camera.

 Furthermore, the input device 59 may include a microphone capable of picking up the user's voice, with various inputs made by voice through the microphone. Because the input device 59 is thus configured to accept various kinds of information without contact, a user belonging to the clean area (for example, the operator 71) can operate devices belonging to the unclean area without contact. In addition, since the user can operate devices without releasing the surgical tool in hand, the user's convenience is improved.
 A treatment tool control device 61 controls the driving of the energy treatment tool 33 for cauterizing or incising tissue, sealing blood vessels, and the like. An insufflation device 63 sends gas into the body cavity of the patient 75 through the insufflation tube 31 in order to inflate the body cavity for the purpose of securing the field of view of the endoscope 20 and securing the operator's working space. A recorder 65 is a device capable of recording various kinds of information about the surgery. A printer 67 is a device capable of printing various kinds of information about the surgery in various formats such as text, images, and graphs.

 Hereinafter, particularly characteristic configurations of the endoscopic surgery system 10 will be described in further detail.
 (Support arm device)
 The support arm device 40 includes the base portion 41 serving as a base and the arm portion 43 extending from the base portion 41. In the illustrated example, the arm portion 43 includes the plurality of joint portions 45a, 45b, and 45c and the plurality of links 47a and 47b connected by the joint portion 45b, but in FIG. 1 the configuration of the arm portion 43 is shown in simplified form for simplicity.
 In practice, the shapes, numbers, and arrangements of the joint portions 45a to 45c and the links 47a and 47b, the directions of the rotation axes of the joint portions 45a to 45c, and so on can be set as appropriate so that the arm portion 43 has a desired degree of freedom. For example, the arm portion 43 can preferably be configured to have six or more degrees of freedom. This makes it possible to move the endoscope 20 freely within the movable range of the arm portion 43, so that the lens barrel 21 of the endoscope 20 can be inserted into the body cavity of the patient 75 from a desired direction.

 The joint portions 45a to 45c are provided with actuators, and are configured to be rotatable about predetermined rotation axes when driven by those actuators. The driving of the actuators is controlled by the arm control device 57, whereby the rotation angle of each of the joint portions 45a to 45c is controlled and the driving of the arm portion 43 is controlled. Control of the position and posture of the endoscope 20 can thereby be realized. At this time, the arm control device 57 can control the driving of the arm portion 43 by any of various known control methods such as force control or position control.

 For example, when the operator 71 makes an appropriate operation input via the input device 59 (including the foot switch 69), the driving of the arm portion 43 may be appropriately controlled by the arm control device 57 in accordance with that operation input, and the position and posture of the endoscope 20 may be controlled. With this control, the endoscope 20 at the distal end of the arm portion 43 can be moved from an arbitrary position to another arbitrary position and then fixedly supported at the new position. The arm portion 43 may also be operated in a so-called master-slave manner; in this case, the arm portion 43 can be remotely operated by the user via the input device 59 installed at a location away from the operating room.

 When force control is applied, the arm control device 57 may perform so-called power assist control, in which it receives an external force from the user and drives the actuators of the joint portions 45a to 45c so that the arm portion 43 moves smoothly in response to that external force. Thus, when the user moves the arm portion 43 while touching it directly, the arm portion 43 can be moved with a relatively light force. The endoscope 20 can therefore be moved more intuitively and with simpler operations, improving the user's convenience.

 In general, the endoscope 20 has been supported by a doctor called a scopist in endoscopic surgery. By using the support arm device 40, in contrast, the position of the endoscope 20 can be fixed more reliably without relying on human hands, so that an image of the surgical site can be obtained stably and the surgery can proceed smoothly.

 Note that the arm control device 57 does not necessarily have to be provided on the cart 50, and the arm control device 57 does not necessarily have to be a single device. For example, an arm control device 57 may be provided at each of the joint portions 45a to 45c of the arm portion 43 of the support arm device 40, and the driving of the arm portion 43 may be controlled by a plurality of arm control devices 57 cooperating with one another.
 (Light source device)
 The light source device 55 supplies the endoscope 20 with irradiation light for photographing the surgical site. The light source device 55 includes a white light source composed of, for example, an LED, a laser light source, or a combination thereof. When the white light source is composed of a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so that the white balance of the captured image can be adjusted in the light source device 55.

 In this case, it is also possible to capture images corresponding to R, G, and B in a time-division manner by irradiating the observation target with laser light from each of the RGB laser light sources in a time-division manner and controlling the driving of the imaging element of the camera head 23 in synchronization with the irradiation timing. According to this method, a color image can be obtained without providing color filters on the imaging element.

 The driving of the light source device 55 may also be controlled so that the intensity of the output light is changed at predetermined intervals. By controlling the driving of the imaging element of the camera head 23 in synchronization with the timing of those intensity changes to acquire images in a time-division manner, and then combining those images, an image of high dynamic range free from so-called blocked-up shadows and blown-out highlights can be generated.

 The light source device 55 may also be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation. In special light observation, for example, so-called narrow band imaging is performed, in which a predetermined tissue such as blood vessels in the mucosal surface layer is photographed with high contrast by exploiting the wavelength dependence of light absorption in body tissue and irradiating light in a narrower band than the irradiation light used in normal observation (that is, white light).

 Alternatively, in special light observation, fluorescence observation may be performed, in which an image is obtained from fluorescence generated by irradiation with excitation light. In fluorescence observation, for example, body tissue is irradiated with excitation light and the fluorescence from that body tissue is observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) is locally injected into body tissue and the tissue is irradiated with excitation light corresponding to the fluorescence wavelength of that reagent to obtain a fluorescence image. The light source device 55 can be configured to be able to supply the narrow-band light and/or the excitation light corresponding to such special light observation.
 (Camera head and CCU)
 The functions of the camera head 23 of the endoscope 20 and of the CCU 51 will be described in more detail with reference to FIG. 2. FIG. 2 is a block diagram showing an example of the functional configurations of the camera head 23 and the CCU 51 shown in FIG. 1.
 Referring to FIG. 2, the camera head 23 has, as its functions, a lens unit 25, an imaging unit 27, a drive unit 29, a communication unit 26, and a camera head control unit 28. The CCU 51 has, as its functions, a communication unit 81, an image processing unit 83, and a control unit 85. The camera head 23 and the CCU 51 are connected by a transmission cable 91 so as to be able to communicate bidirectionally.

 First, the functional configuration of the camera head 23 will be described. The lens unit 25 is an optical system provided at the connection with the lens barrel 21. Observation light taken in from the distal end of the lens barrel 21 is guided to the camera head 23 and enters the lens unit 25. The lens unit 25 is configured by combining a plurality of lenses including a zoom lens and a focus lens. The optical characteristics of the lens unit 25 are adjusted so that the observation light is focused on the light-receiving surface of the imaging element of the imaging unit 27. The zoom lens and the focus lens are configured so that their positions on the optical axis can be moved in order to adjust the magnification and focus of the captured image.

 The imaging unit 27 is configured by an imaging element and is arranged downstream of the lens unit 25. The observation light that has passed through the lens unit 25 is focused on the light-receiving surface of the imaging element, and an image signal corresponding to the observation image is generated by photoelectric conversion. The image signal generated by the imaging unit 27 is provided to the communication unit 26.

 As the imaging element constituting the imaging unit 27, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor having a Bayer array and capable of color imaging is used. An imaging element capable of capturing high-resolution images of 4K or more, for example, may also be used. Obtaining a high-resolution image of the surgical site allows the operator 71 to grasp the state of the surgical site in more detail and to proceed with the surgery more smoothly.

 The imaging element constituting the imaging unit 27 may also have a pair of imaging elements for acquiring right-eye and left-eye image signals corresponding to 3D display. 3D display enables the operator 71 to grasp the depth of living tissue at the surgical site more accurately. When the imaging unit 27 is of a multi-plate type, a plurality of lens units 25 are provided corresponding to the respective imaging elements.

 The imaging unit 27 does not necessarily have to be provided in the camera head 23. For example, the imaging unit 27 may be provided inside the lens barrel 21, immediately behind the objective lens.
 The drive unit 29 is configured by an actuator, and moves the zoom lens and the focus lens of the lens unit 25 by predetermined distances along the optical axis under control from the camera head control unit 28. The magnification and focus of the image captured by the imaging unit 27 can thereby be adjusted as appropriate.

 The communication unit 26 is configured by a communication device for transmitting and receiving various kinds of information to and from the CCU 51. The communication unit 26 transmits the image signal obtained from the imaging unit 27 as RAW data to the CCU 51 via the transmission cable 91. In this transmission, the image signal is preferably transmitted by optical communication so that the captured image of the surgical site can be displayed with low latency.

 This is because during surgery the operator 71 operates while observing the state of the affected area through the captured image, so for safer and more reliable surgery the moving image of the surgical site is required to be displayed in as close to real time as possible. When optical communication is used, the communication unit 26 is provided with a photoelectric conversion module that converts an electrical signal into an optical signal; the image signal is converted into an optical signal by this module and then transmitted to the CCU 51 via the transmission cable 91.

 The communication unit 26 also receives, from the CCU 51, control signals for controlling the driving of the camera head 23. These control signals include information on imaging conditions, such as information designating the frame rate of the captured image, information designating the exposure value at the time of imaging, and/or information designating the magnification and focus of the captured image. The communication unit 26 provides the received control signals to the camera head control unit 28.

 The control signals from the CCU 51 may also be transmitted by optical communication. In this case, the communication unit 26 is provided with a photoelectric conversion module that converts an optical signal into an electrical signal; the control signal is converted into an electrical signal by this module and then provided to the camera head control unit 28.

 Note that the imaging conditions described above, such as frame rate, exposure value, magnification, and focus, are automatically set by the control unit 85 of the CCU 51 based on the acquired image signal. In other words, so-called AE (Auto Exposure), AF (Auto Focus), and AWB (Auto White Balance) functions are mounted on the endoscope 20.

 The camera head control unit 28 controls the driving of the camera head 23 based on the control signals received from the CCU 51 via the communication unit 26. For example, the camera head control unit 28 controls the driving of the imaging element of the imaging unit 27 based on the information designating the frame rate of the captured image and/or the information designating the exposure at the time of imaging. The camera head control unit 28 also, for example, appropriately moves the zoom lens and the focus lens of the lens unit 25 via the drive unit 29 based on the information designating the magnification and focus of the captured image. The camera head control unit 28 may further have a function of storing information for identifying the lens barrel 21 and the camera head 23.

 By arranging components such as the lens unit 25 and the imaging unit 27 in a hermetically sealed, highly waterproof structure, the camera head 23 can be made resistant to autoclave sterilization.
 次に、CCU51の機能構成について説明する。通信部81は、カメラヘッド23との間で各種の情報を送受信するための通信装置によって構成される。通信部81は、カメラヘッド23から、伝送ケーブル91を介して送信される画像信号を受信する。この際、上記のように、当該画像信号は好適に光通信によって送信され得る。この場合、光通信に対応して、通信部81には、光信号を電気信号に変換する光電変換モジュールが設けられる。通信部81は、電気信号に変換した画像信号を画像処理部83に提供する。 Next, the functional configuration of the CCU 51 will be described. The communication unit 81 is configured by a communication device for transmitting and receiving various types of information to and from the camera head 23. The communication unit 81 receives an image signal transmitted from the camera head 23 via the transmission cable 91. At this time, as described above, the image signal can be suitably transmitted by optical communication. In this case, corresponding to optical communication, the communication unit 81 is provided with a photoelectric conversion module that converts an optical signal into an electric signal. The communication unit 81 provides the image processing unit 83 with the image signal converted into an electrical signal.
 また、通信部81は、カメラヘッド23に対して、カメラヘッド23の駆動を制御するための制御信号を送信する。当該制御信号も光通信によって送信されてよい。 The communication unit 81 transmits a control signal for controlling the driving of the camera head 23 to the camera head 23. The control signal may also be transmitted by optical communication.
 画像処理部83は、カメラヘッド23から送信されたRAWデータである画像信号に対して各種の画像処理を施す。当該画像処理としては、例えば現像処理、高画質化処理(帯域強調処理、超解像処理、NR(Noise reduction)処理および/または手ブレ補正処理等)、並びに/または拡大処理(電子ズーム処理)等、各種の公知の信号処理が含まれる。また、画像処理部83は、AE、AFおよびAWBを行うための、画像信号に対する検波処理を行う。 The image processing unit 83 performs various types of image processing on the image signal that is RAW data transmitted from the camera head 23. As the image processing, for example, development processing, image quality enhancement processing (band enhancement processing, super-resolution processing, NR (Noise reduction) processing and / or camera shake correction processing, etc.), and / or enlargement processing (electronic zoom processing) Various known signal processing is included. The image processing unit 83 performs detection processing on the image signal for performing AE, AF, and AWB.
 画像処理部83は、CPUやGPU等のプロセッサによって構成され、当該プロセッサが所定のプログラムに従って動作することにより、上述した画像処理や検波処理が行われ得る。なお、画像処理部83が複数のGPUによって構成される場合には、画像処理部83は、画像信号に係る情報を適宜分割し、これら複数のGPUによって並列的に画像処理を行う。 The image processing unit 83 is configured by a processor such as a CPU and a GPU, and the above-described image processing and detection processing can be performed by the processor operating according to a predetermined program. When the image processing unit 83 is configured by a plurality of GPUs, the image processing unit 83 appropriately divides information related to the image signal and performs image processing in parallel by the plurality of GPUs.
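 The divide-and-process-in-parallel idea can be sketched as follows; here a thread pool merely stands in for the plurality of GPUs, and process_strip is a placeholder for the actual development or noise-reduction work, so this illustrates the data division rather than the disclosed implementation:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def process_strip(strip: np.ndarray) -> np.ndarray:
        return strip  # placeholder for per-GPU development/NR/enhancement work

    def parallel_process(frame: np.ndarray, n_workers: int = 4) -> np.ndarray:
        strips = np.array_split(frame, n_workers, axis=0)  # divide the image signal
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            done = list(pool.map(process_strip, strips))   # process in parallel
        return np.concatenate(done, axis=0)                # reassemble the frame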
 The control unit 85 performs various kinds of control relating to the imaging of the surgical site by the endoscope 20 and to the display of the captured image. For example, the control unit 85 generates the control signal for controlling the driving of the camera head 23. If imaging conditions have been input by the user, the control unit 85 generates the control signal based on that input. Alternatively, when the endoscope 20 is equipped with the AE, AF, and AWB functions, the control unit 85 appropriately calculates the optimum exposure value, focal length, and white balance according to the result of the detection processing by the image processing unit 83, and generates the control signal accordingly.
 The control unit 85 also causes the display device 53 to display an image of the surgical site based on the image signal that has undergone image processing by the image processing unit 83. In doing so, the control unit 85 recognizes various objects in the surgical site image using various image recognition techniques.
 For example, by detecting the edge shapes, colors, and the like of objects included in the surgical site image, the control unit 85 can recognize surgical tools such as forceps, specific body parts, bleeding, mist generated when the energy treatment tool 33 is used, and so on. When displaying the image of the surgical site on the display device 53, the control unit 85 uses these recognition results to superimpose various kinds of surgery support information on the image. With the surgery support information superimposed and presented to the operator 71, surgery can proceed more safely and reliably.
 The transmission cable 91 connecting the camera head 23 and the CCU 51 is an electric signal cable supporting electric signal communication, an optical fiber supporting optical communication, or a composite cable of the two.
 In the illustrated example, communication is performed by wire using the transmission cable 91, but communication between the camera head 23 and the CCU 51 may instead be wireless. When the two communicate wirelessly, the transmission cable 91 no longer needs to be laid in the operating room, eliminating situations in which the cable obstructs the movement of medical staff.
 An example of the endoscopic surgery system 10 to which the technology according to the present disclosure can be applied has been described above.
 Although the endoscopic surgery system 10 has been described here as an example, systems to which the technology according to the present disclosure can be applied are not limited to this example. For instance, the technology may also be applied to a flexible endoscope system for examination or to a microscopic surgery system.
<Screen example>
 FIG. 3 is a diagram illustrating an example of a screen displayed on the display device 53. The display device 53 displays an image 201 captured by the endoscope 20 and an image 202 captured by a sub camera. When the endoscope 20 serves as the main camera, a camera other than the main camera is a sub camera.
 Although the term "image" is used here, the main camera and sub cameras capture moving images (video), and the present technology is applicable to devices that process moving images.
 As will be described later with reference to FIG. 4, the main camera and a plurality of sub cameras are inserted into the body cavity of the patient 75, and the images from these cameras are displayed on the display device 53 in a form that is easy for the operator 71 to view.
 The image 201 captured by the endoscope 20 shows forceps 231, forceps 232, and a thread 233. The image 202 captured by the sub camera is displayed inside a computer-graphics rendering of a mirror (hereinafter referred to as the mirror 211). The screen example shown in FIG. 3 depicts a state in which the forceps 231 and the thread 233 are reflected in the mirror 211.
 In the screen example of FIG. 3, a sub camera is located at a position facing the forceps 231, and the forceps 231 is imaged by that sub camera. The imaged forceps 231 is displayed inside the mirror 211 and is denoted with a prime, as forceps 231'. In the description that follows, objects displayed inside the mirror 211 are likewise denoted with a prime.
 The thread 233' is also shown in the mirror 211. The thread 233 is being held by the forceps 232. When the operator is trying to grasp the thread 233 in this state with the forceps 231, it is difficult to judge the positional relationship between the forceps 231 and the thread 233 from the image 201 of the main camera alone. In such a case, referring to the image 202 from the sub camera makes it easy to grasp that positional relationship.
 Furthermore, by displaying the image 202 from the sub camera inside an image shaped like a mirror, a tool the operator 71 (the user) is accustomed to using in everyday life, the operator 71 gets the sensation of using a familiar tool to make hard-to-see areas visible. The image from the sub camera can therefore be provided in a form that is easy for the user to view.
 The description here continues on the assumption that a rendering of a mirror (the mirror 211) is displayed, but a rendering of a real tool other than a mirror, such as a dental mirror or a loupe, may be used instead. In the following description, a tool used to display the image of a sub camera is referred to as a virtual tool: a real tool is displayed on the screen as a virtual tool, and the image from the sub camera is displayed on that virtual tool.
 When the virtual tool is the mirror 211 described above and the angle of the mirror 211 is changed, the sub-camera image shown in the mirror changes to follow that angle. In the example above, when the angle of the mirror 211 is changed, an image in which the positional relationship between the forceps 231 and the thread 233 has changed is displayed in the mirror 211.
 As another example, when the virtual tool is a loupe and the magnification is changed, for instance by an operation that moves the virtual loupe closer to or farther from the forceps 232, the image displayed inside the loupe is enlarged or reduced accordingly. In this case, the image displayed inside the loupe can be a portion of the image captured by the main camera.
 That is, when the virtual tool is a loupe, the loupe is superimposed on the image captured by the main camera, and an enlarged version of the portion of the main image under the loupe is displayed inside it.
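 A minimal sketch of this loupe behavior, assuming OpenCV (cv2) for resizing and illustrative coordinates, is shown below; the portion of the main image under the loupe is cropped with a radius shrunk by the magnification factor and scaled back up to the loupe's size:

    import numpy as np
    import cv2

    def loupe_view(main_image: np.ndarray, cx: int, cy: int,
                   radius: int, magnification: float) -> np.ndarray:
        """Return the magnified patch shown inside a loupe centered at (cx, cy)."""
        r_src = int(radius / magnification)          # smaller source patch -> zoom
        y0, y1 = max(cy - r_src, 0), cy + r_src
        x0, x1 = max(cx - r_src, 0), cx + r_src
        patch = main_image[y0:y1, x0:x1]
        return cv2.resize(patch, (2 * radius, 2 * radius),
                          interpolation=cv2.INTER_LINEAR)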
 In this way, optical tools that exist in the real world, such as dental mirrors and loupes, are rendered as virtual tools in computer graphics and superimposed on the main camera's image (the main image), and PinP (Picture-In-Picture) is performed by superimposing the image from the sub camera (the sub image) on the virtual tool.
 With a display that uses such virtual tools, a user who has handled the real-world tool can intuitively grasp what is being shown, with the same feeling as when using that tool. Input operations, such as changing the angle of the virtual mirror, can be performed as if moving the virtual tool directly, so a user interface with good operability can be provided.
 Moreover, since commands for operating a virtual tool can be the same as those used to operate the tool in the real world, they are easy to convey to other people. It is easy to give instructions to a third party (someone other than the operator 71) to perform an operation, and the operation performed by the third party in response is likely to produce a satisfactory result.
 Because instructions for operating the virtual tool are easy to give, a user interface becomes possible in which operations are performed by issuing instructions through voice input or by giving instructions to a third party.
 Likewise, a third party who has used the corresponding tool in the real world can easily operate the virtual tool, making it easier to carry out the operations the operator 71 wants; for example, the third party may even perform an operation before the operator 71 gives an instruction. Work such as surgery can therefore proceed more smoothly.
 The sub camera's image can also be manipulated by image processing. For example, when the virtual tool is the mirror 211 and its angle is changed, the image 202 shown in the mirror 211 changes as well, and this change can be handled entirely in image processing. Handling it in image processing makes it possible to respond to the user's operation without moving the sub camera itself. Not moving the sub camera prevents it from coming into contact with organs, improving safety.
<Sub camera mounting positions>
 The mounting positions of the main camera and the sub camera will now be described. FIG. 4 is a diagram for explaining the mounting position of a sub camera in an endoscopic setup.
 An endoscope 20a, forceps 35, and an endoscope 20b are inserted into the body cavity of the patient 75. The endoscope 20a is the main camera, and the endoscope 20b is a sub camera. In this way, a plurality of endoscopes 20 can be inserted into the body cavity, with one of them used as the main camera and the others used as sub cameras.
 The sub camera endoscope 20b may have a smaller diameter than the main camera endoscope 20a. The resolution of the sub camera may also be lower than that of the main camera; the two need not have the same resolution.
 The roles of main camera and sub camera may also be switched by an instruction from the operator 71. That is, in the case shown in FIG. 4, the endoscope 20a is the main camera and the endoscope 20b is the sub camera at a given moment, but a mechanism may be provided such that, when the user gives a switching instruction, the endoscope 20a becomes the sub camera and the endoscope 20b becomes the main camera.
 Although FIG. 4 shows a case with a single sub camera, a plurality of sub cameras may be inserted into the body cavity.
 FIG. 5 is a diagram for explaining another mounting position of a sub camera. An endoscope 20, forceps 35a, and forceps 35b are inserted into the body cavity of the patient 75. In the example shown in FIG. 5, two forceps 35 are inserted into the body cavity, and a sub camera 251 is attached to one of them, the forceps 35b.
 The sub camera 251 is mounted at a position that does not interfere with the function of the forceps 35b as forceps. The sub camera 251 can be a camera certified for medical use; for example, an endoscope known as a capsule endoscope may be used as the sub camera 251 and attached to the forceps 35b. Alternatively, the forceps 35b may grip the sub camera 251 and be inserted into the body cavity in that state.
 The sub camera 251 may be detachably attached to the forceps 35b, or may be built in as part of the forceps 35b.
 In the example shown in FIG. 5, the endoscope 20 is used as the main camera, and a camera attached to a surgical tool other than the endoscope 20 is used as the sub camera. Although FIG. 5 shows a single sub camera 251, a plurality of sub cameras 251 may be attached to a plurality of surgical tools and inserted into the body cavity.
 In the configuration shown in FIG. 5 as well, the main camera and the sub camera may be switched by an instruction from the operator 71. That is, in the case shown in FIG. 5, the endoscope 20 is the main camera and the sub camera 251 is the sub camera at a given moment, but a mechanism may be provided such that, when the user gives a switching instruction, the endoscope 20 serves as the sub camera and the sub camera 251 functions as the main camera.
 The present technology can also be applied to a medical robot. A detailed description is omitted, but a medical robot has the configuration shown in FIG. 6: it comprises an operation unit 281, a main body 282, and a monitor unit 283.
 The operation unit 281 is a device for operating the main body 282. The main body 282 includes, for example, three arms 291 to 293. The operation unit 281 is operated by an operator 71a to remotely control the arms 291 to 293 of the main body 282. The operator 71a operates the arms 291 to 293 while watching a display provided on the operation unit 281. The arms 291 to 293 of the main body 282 carry instruments such as an electric scalpel, an endoscope, and forceps.
 The monitor unit 283 is a monitor installed near the main body 282 for monitoring the progress of the surgery. An operator 71b watches the monitor unit 283 and assists with the surgery as needed.
 Such a medical robot can be provided with sub cameras as shown in FIG. 7. Referring to FIG. 7, the arm 292 is an arm camera with the same functions as the endoscope described above and serves as the main camera. The arms 291 and 293 carry instruments such as forceps and a scalpel, and a sub camera 252 and a sub camera 253 are attached to the arm 291 and the arm 293, respectively.
 The sub cameras 252 and 253 are mounted at positions that do not interfere with the functions of the arms 291 and 293, for example their functions as a scalpel or forceps. Like the sub camera 251 described with reference to FIG. 5, the sub cameras 252 and 253 can be cameras certified for medical use; for example, so-called capsule endoscopes can be used as the sub cameras 252 and 253.
 The sub cameras 252 and 253 may be detachably attached to the arms 291 and 293, or may be built in as parts of the arms 291 and 293.
 The example shown in FIG. 7 uses the arm camera of the arm 292 as the main camera and cameras attached to surgical tools other than the arm camera as sub cameras. Although FIG. 7 shows a case with two sub cameras (252 and 253), one sub camera or three or more sub cameras may be attached to surgical tools and inserted into the body cavity.
 In the configuration shown in FIG. 7 as well, the main camera and the sub cameras may be switched by an instruction from the operator 71. That is, in the case shown in FIG. 7, the arm camera of the arm 292 is the main camera and the sub cameras 252 and 253 are sub cameras at a given moment, but a mechanism may be provided such that, when the user gives a switching instruction, the arm camera becomes a sub camera and whichever of the sub cameras 252 and 253 was designated functions as the main camera.
<Configuration of the image processing unit>
 FIG. 8 shows the configuration of the image processing unit 83 (FIG. 2), which generates the screen shown in FIG. 3 and controls the display on the display device 53. The following description takes as an example the case in which the endoscope 20a as the main camera and the endoscope 20b as the sub camera are inserted into the body cavity, as shown in FIG. 4.
 The image processing unit 83 includes a virtual tool drawing/superimposing unit 411 and an image conversion processing unit 412. The virtual tool drawing/superimposing unit 411 is supplied with image data from the main camera, in this case from the endoscope 20a. The image conversion processing unit 412 is supplied with image data from the sub camera, in this case from the endoscope 20b.
 The image conversion processing unit 412 is also supplied with data from a position sensor 421. The position sensor 421 detects the positions of the endoscope 20a and the endoscope 20b; it may, for example, be a sensor using GPS (Global Positioning System) or the like.
 The position sensor 421 is provided to determine the positional relationship between the endoscope 20a and the endoscope 20b (the main camera and the sub camera), for example how far apart they are and at what angle. Any sensor may be used as the position sensor 421 as long as this information can be obtained.
 When the image from the sub camera is to be superimposed (PinP) on the image from the main camera, it is necessary to obtain information such as position from motion-capture techniques (markers, images, magnetism, and the like) and tilt from a gyro sensor.
 However, if the purpose is, for example, to see depth information during suturing, a view from the side is sufficient; information merely indicating the direction of the sub camera relative to the main camera may then be adequate, and high-precision position measurement is not required.
 It is also possible to configure the system so that this information is obtained from a source other than the position sensor 421, in which case the position sensor 421 may be omitted. For example, if the position of the sub camera is detected by analyzing the image from the main camera, the position sensor 421 may be omitted.
 Further, for the arms 291 to 293 of the medical robot described with reference to FIGS. 6 and 7, the relative positional relationships between the arms are in principle known in advance and can be used directly, so in the case of a robot the position sensor 421 may likewise be omitted.
 The processing in the image processing unit 83 will be described with reference to FIG. 9. The upper left part of FIG. 9 shows the image captured by the endoscope 20a, the main camera; the upper right part shows the image captured by the endoscope 20b, the sub camera.
 The lower part of FIG. 9 shows an image in which the image captured by the endoscope 20b is superimposed on the image captured by the endoscope 20a. This is the image shown in FIG. 3, and the following description uses as an example the case in which the image described with reference to FIG. 3 is generated.
 As shown in the upper left part of FIG. 9, the endoscope 20a, the main camera, captures the forceps 231, the forceps 232, and the thread 233. The image data of this image 201 is supplied to the virtual tool drawing/superimposing unit 411.
 As shown in the upper right part of FIG. 9, the endoscope 20b, the sub camera, also captures the forceps 231, the forceps 232, and the thread 233. The image data of this image 202 is supplied to the image conversion processing unit 412.
 Both the endoscope 20a and the endoscope 20b image the forceps 231, the forceps 232, and the thread 233, but because they are inserted at different positions they capture different images. In the example shown in FIG. 9, the endoscope 20a and the endoscope 20b are inserted into the body cavity at an angle of 90 degrees to each other. Therefore, if the image 201 captured by the endoscope 20a is regarded as an image taken from the front, the image 202 captured by the endoscope 20b is an image taken from the side.
 From the image 202 supplied by the endoscope 20b (the sub camera), the image conversion processing unit 412 cuts out, enlarges, reduces, and otherwise converts an image to be superimposed on the image 201 from the endoscope 20a (the main camera). That is, the image conversion processing unit 412 generates from the image 202 the image to be displayed inside the mirror 211.
 When performing conversion and related processing, the image conversion processing unit 412 uses the position information from the position sensor 421 as needed.
 The virtual tool drawing/superimposing unit 411 draws the mirror 211 on the image 201 from the main camera and generates an image in which the output of the image conversion processing unit 412 is displayed inside the drawn mirror 211. The image data generated by the virtual tool drawing/superimposing unit 411 is supplied to the display device 53, and an image like the one in the lower part of FIG. 9 is displayed on the display device 53.
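 The two-stage flow just described can be sketched as follows; convert_sub_image stands in for the geometric remapping performed by the image conversion processing unit 412 (here reduced to a crop and a left-right flip), and superimpose stands in for the virtual tool drawing/superimposing unit 411 pasting the converted image into an elliptical mirror face; all names and simplifications are assumptions introduced here:

    import numpy as np

    def convert_sub_image(sub_frame: np.ndarray) -> np.ndarray:
        """Stand-in for the image conversion processing unit 412: crop the
        central region and flip it left-right, as a mirror would show it."""
        h, w = sub_frame.shape[:2]
        crop = sub_frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        return crop[:, ::-1]

    def superimpose(main_frame: np.ndarray, face: np.ndarray,
                    top: int, left: int) -> np.ndarray:
        """Stand-in for the virtual tool drawing/superimposing unit 411:
        paint the converted sub image into an elliptical mirror face."""
        out = main_frame.copy()
        fh, fw = face.shape[:2]
        yy, xx = np.ogrid[:fh, :fw]
        ellipse = (((yy - fh / 2) / (fh / 2)) ** 2
                   + ((xx - fw / 2) / (fw / 2)) ** 2) <= 1.0
        region = out[top:top + fh, left:left + fw]
        region[ellipse] = face[ellipse]  # writes through the view into out
        return out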
<Processing related to image generation>
 The processing in the image processing unit 83 when an image like that of FIG. 3 (the lower part of FIG. 9) is generated will now be described in more detail with reference to the flowchart shown in FIG. 10.
 The processing described with reference to FIG. 10 takes suturing work as its example. Processing for other tasks is adapted to each task; for example, the processing of the flowchart in FIG. 10 is modified as appropriate for tasks such as incision, removal of an affected area, or irrigation.
 In step S101, it is determined whether the current task is suturing. If it is determined in step S101 that the task is not suturing, the processing proceeds to step S108. As noted above, the processing differs from task to task, so when the task is determined not to be suturing, further determinations may be made in sequence, for example whether it is an incision or the removal of an affected area.
 Whether the task is suturing can be determined, for example, by voice recognition of the operator 71's utterances. For instance, the determination in step S101 can be made by judging whether the operator 71 has spoken a suturing-related keyword such as "start suturing" or "thread and needle".
 It is also possible to determine whether the task is suturing by exploiting the fact that the flow of an operation follows a pattern determined by the surgical procedure. This determination based on the operation flow may be performed together with the voice-recognition determination described above, with the results integrated to reach a decision.
 Whether the task is suturing may also be determined by image recognition. For example, since thread and needles are used during suturing, it may be determined whether a thread or needle is detected in the images captured by the main camera or a sub camera, and when one is detected, the task may be judged to be suturing.
 The determination in step S101 may be made using both voice recognition and image recognition, or by other methods not illustrated here.
 If it is determined in step S101 that the task is suturing, the processing proceeds to step S102, where default values are set. These default values will be described with reference to FIG. 11.
 The virtual tool, the mirror 211, is displayed on the side where the sub camera (the endoscope 20b) is located. With the endoscope 20a and the endoscope 20b in the positional relationship described with reference to FIG. 9, the endoscope 20b is to the right of the endoscope 20a in the figure, so the mirror 211 is also displayed on the right side of the screen. The side on which the sub camera's image is displayed is thus the first default value to be set.
 The position at which the mirror 211 is displayed is set at a distance equal to the length of the tip portion of the forceps from the forceps closest to the sub camera (the endoscope 20b). In the example shown in FIG. 11, the forceps closest to the sub camera (the mirror 211) is the forceps 232, so the mirror 211 is displayed at a position separated from the forceps 232 by the length of the forceps tip (length d1).
 The length d1 between the forceps 232 and the mirror 211 may be measured between the center of the forceps 232 and the center of the mirror 211, or between the side of the forceps 232 facing the mirror 211 and the side of the mirror 211 facing the forceps 232. That is, the reference points on the forceps 232 and the mirror 211 used to separate them by the length of the forceps tip may be chosen in any way.
 In this way, the display position of the mirror 211, in other words its distance from the forceps, is set as a default value.
 The size of the mirror 211 is also set as a default value. For example, the size of the tip portion of the forceps can be used as the radius: if the mirror 211 is circular, it is a circle whose radius is the size of the forceps tip (length d1 in FIG. 11); if the mirror 211 is elliptical as shown in FIG. 11, it is an ellipse whose major or minor axis is twice the size of the forceps tip (length d1).
 The display position and size of the virtual mirror 211 are set with reference to the size of the forceps tip because the forceps is a tool the operator 71 is accustomed to using, and basing the virtual tool's size on such a familiar tool should make that size easier to judge. Although forceps are used as the example here, the size of the main subject in the main image (for example, the forceps) can serve as the reference.
 The default angle of the mirror 211 is 45 degrees. When the angle of the mirror 211 is 90 degrees, only the edge of the mirror 211 is visible: the mirror is perpendicular to the image 201, and its reflecting surface does not face the viewer.
 When the angle of the mirror 211 is 0 degrees, the reflecting surface of the mirror 211 faces the viewer and lies parallel to the image 201. In this state, no image from the sub camera is shown in the mirror 211.
 Alternatively, if the sub camera and the main camera are oriented in the same direction, an image substantially identical to the image 201 is shown when the angle of the mirror 211 is 0 degrees.
 Therefore 45 degrees, midway between 0 and 90 degrees, is set as the default angle of the mirror 211. The description here continues with a default of 45 degrees, but the default angle may be set to a value other than 45 degrees depending on the positional relationship between the endoscope 20a and the endoscope 20b (the main camera and the sub camera). In other words, the default angle need not be a fixed value; it may be a variable value set according to the situation at the time.
 For example, the default may be set to the angle at which the side of the forceps 232 closest to the sub camera can be shown, in other words the angle that shows the image as captured from a direction perpendicular to the face of the forceps 232 imaged by the main camera.
 Default values other than these may be set as well; the defaults described above are merely examples.
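 As an illustration of how such defaults might be computed from the forceps tip, the following sketch places the mirror one tip length (d1) away from the tip on the sub camera's side, uses the tip length as the radius, and starts the angle at 45 degrees; the dataclass fields and the choice of measuring center to center are assumptions introduced here:

    from dataclasses import dataclass

    @dataclass
    class MirrorDefaults:
        center_x: float
        center_y: float
        radius: float
        angle_deg: float = 45.0  # default tilt, midway between 0 and 90 degrees

    def default_mirror(tip_x: float, tip_y: float, tip_len: float,
                       sub_cam_side: int) -> MirrorDefaults:
        """sub_cam_side is +1 when the sub camera is to the right of the
        forceps, -1 when to the left (the mirror is drawn on that side)."""
        # gap of one tip length plus the mirror radius, measured center to center
        offset = sub_cam_side * (tip_len + tip_len)
        return MirrorDefaults(center_x=tip_x + offset,
                              center_y=tip_y,
                              radius=tip_len)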
 Default values may also be set by learning. For example, the position and size of the mirror 211 are initially set as defaults, but after they are set, the operator 71 may change the position, size, angle, and so on of the mirror 211.
 When the operator 71 changes the display position, size, angle, or the like of the mirror 211, a history of those changes can be recorded (learned), and frequently used positions, sizes, angles, and so on can then be set as the default values.
 Default values may also be set according to the surgical sequence, that is, according to what kind of operation is being performed, for example an operation to remove an affected area or to shave bone, and what kind of work step is underway, for example incision or suturing.
 Returning to the flowchart of FIG. 10: when the default values have been set in step S102, the processing proceeds to step S103, where the sub camera's image is converted. In other words, the image to be displayed on the reflecting surface of the virtual tool (the mirror 211) is generated.
 The processing involved in generating the image displayed on the reflecting surface of the virtual tool (the mirror 211) will be described with reference to FIG. 12.
 FIG. 12 is a diagram for explaining the positional relationship between the main camera, the sub camera, and the mirror 211. In FIG. 12, the imaging plane of the main camera is denoted main camera 501, and the imaging plane of the sub camera is denoted sub camera 502.
 The example shown in FIG. 12 depicts a case in which the main camera 501 and the sub camera 502 are installed at an acute angle of 90 degrees or less to each other.
 Referring to FIG. 12, the main camera 501 and the sub camera 502 are both imaging an object 511. The main camera 501 images the face of the object 511 spanned by the x and y axes; the sub camera 502 images the face spanned by the y' and z' axes.
 By imaging the object 511, the main camera 501 captures the image 201 shown in FIG. 12. For purposes of explanation, FIG. 12 shows the mirror 211 superimposed on the image 201.
 It must be determined which part of the image 202 captured by the sub camera 502 corresponds to the image shown in the mirror 211. For example, the point P in the mirror 211 corresponds to the point Q in the image 202 captured by the sub camera 502.
 To obtain this correspondence, a virtual wall xw is determined. The virtual wall xw is the wall formed by the object 511; for example, the edge of the object 511 is extracted, and the virtual wall is taken as the straight line passing through that edge. The point P of the mirror 211 reflects the point R on the virtual wall xw, and the point R (point P) corresponds to the point Q in the image 202 captured by the sub camera 502.
 In FIG. 12, the center point of the mirror 211 is denoted C, and its x-axis coordinate is written xc. The x-axis coordinate of the point P of interest in the mirror 211 is written xp. The distance between the point C of the mirror 211 and the main camera 501 is the distance dxc, and the distance between the point P of the mirror 211 and the main camera 501 is the distance dxp.
 The angle between the line connecting the point P on the mirror 211 to the point R on the virtual wall xw and the normal to the mirror 211 is denoted α. The main camera 501 and the sub camera 502 are assumed to be arranged at positions satisfying an angle β. The x-axis coordinate of the point R on the virtual wall xw is written xw.
 Under these conditions, the coordinate of the point Q (its z'-axis coordinate) in the image 202 of the sub camera 502 corresponding to the point P on the mirror 211 is obtained as follows. First, the distance da between the point R on the virtual wall xw and the point P of the mirror 211 is given by equation (1).

    da = (xp - xw) · tan(π/2 - 2α) = (xp - xw) / tan(2α)   ... (1)

 The distance db between the center point C of the mirror 211 and the point P is given by equation (2).

    db = (xc - xp) · tan(α) = dxp - dxc   ... (2)

 When the angle β is 90 degrees, the z'-coordinate of the point R is obtained from equations (1) and (2) as equation (3).

    dxp = dxc + (xc - xp) · tan(α) - (xp - xw) / tan(2α)   ... (3)
 When the angle β is other than 90 degrees, the coordinates obtained from equation (3) are further rotated by β. In FIG. 12 the y and y' axes are parallel, so y' equals y; when they are not parallel, a coordinate calculation similar to the one used to obtain the z' coordinate must be performed for y' as well.
 In this way, the portion of the image 202 captured by the sub camera 502 to be displayed in the mirror 211 is identified and cut out.
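 As a minimal numerical sketch of equations (1) through (3) for the β = 90 degrees case (the function name and the treatment of angles in radians are assumptions introduced here, not part of the original disclosure), the mapping from a mirror point to the z'-coordinate of the reflected point R can be written as follows:

    import math

    def reflected_z(xp: float, xc: float, xw: float,
                    dxc: float, alpha: float) -> float:
        """z'-coordinate of the reflected point R for a mirror point at xp.

        alpha is in radians and should satisfy 0 < 2*alpha < pi/2 so that
        tan(2*alpha) is positive and finite.
        """
        da = (xp - xw) / math.tan(2 * alpha)  # eq. (1): wall point R to mirror point P
        db = (xc - xp) * math.tan(alpha)      # eq. (2): mirror center C to point P
        return dxc + db - da                  # eq. (3): dxp = dxc + db - da

 For β other than 90 degrees, the result would additionally be rotated by β, as noted above.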
 To enlarge or reduce the image in the mirror 211, the mirror 211 can be treated as a convex mirror; by merely changing the normal at each surface point, the processing can essentially be handled in the same way as described above.
 When generating the image in the mirror 211, image signal processing such as enlargement, reduction, brightness adjustment, and edge enhancement may also be performed. Processing that is impossible with a real mirror can be carried out as image processing on the virtual mirror 211, so such processing may be applied.
 Returning to the flowchart of FIG. 10: when the sub camera's image has been converted into an image suited to the virtual tool in step S103, the processing proceeds to step S104.
 In step S104, the virtual tool drawing/superimposing unit 411 draws the virtual tool (the mirror 211) and superimposes it on the image 201 captured by the main camera. The virtual tool drawing/superimposing unit 411 also superimposes, inside the drawn virtual tool, the image generated by the image conversion processing unit 412 in step S103.
 Through this processing, a screen like the one shown in the lower part of FIG. 9 (FIG. 3), for example, is provided to the operator 71.
 In step S105, it is determined whether a control value has been input. For example, when the operator 71 issues an instruction by voice input such as "bring the mirror closer", "remove the mirror", or "suturing finished", it is determined that a control value has been input.
 If it is determined in step S105 that no control value has been input, the processing returns to step S105 and the subsequent processing is repeated. In this case, processing such as converting the sub camera's image and drawing the virtual tool continues based on the control values set at that point.
 If it is determined in step S105 that a control value has been input, the processing proceeds to step S106, where it is determined whether suturing has finished. This determination is made by judging whether the control value input in step S105 indicates the end of suturing; for example, when control values are input by voice, by judging whether a keyword indicating the end of suturing, such as "suturing finished", has been spoken.
 If it is determined in step S106 that suturing has not finished, the processing proceeds to step S107, where changed values are set based on the input control value. After the changed values are set, the processing returns to step S103, and the subsequent processing is repeated based on the changed values.
 For example, when an instruction such as "bring the mirror closer" is issued, it is determined that a control value moving the mirror 211 toward the object 511 (FIG. 12) has been input; the image shown in the mirror 211 at the closer position is generated, and the mirror 211 is drawn at that position.
 If, on the other hand, it is determined in step S106 that the end of suturing has been instructed, the processing proceeds to step S108. Step S108 is also reached when it is determined in step S101 that the task is not suturing.
 If it is determined in step S108 that the end of the operation has not been instructed, the processing returns to step S101 and the subsequent processing is repeated. If it is determined in step S108 that the end of the operation has been instructed, the processing based on the flowchart of FIG. 10 ends.
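 Condensing steps S101 through S108 into code, the control loop might look like the following sketch; the command strings and the value dictionary are illustrative assumptions, with voice recognition replaced by an iterable of already-recognized commands:

    def set_defaults() -> dict:
        # S102: default distance/angle/visibility values for the virtual mirror
        return {"distance": 1.0, "angle_deg": 45.0, "visible": True}

    def apply_command(values: dict, cmd: str) -> dict:
        # S107: set changed values based on the input control value
        if cmd == "bring the mirror closer":
            values["distance"] *= 0.8
        elif cmd == "remove the mirror":
            values["visible"] = False
        return values

    def run_display_loop(commands, render) -> None:
        values = set_defaults()                  # S102
        render(values)                           # S103/S104: convert and draw
        for cmd in commands:                     # S105: a control value arrives
            if cmd == "suturing finished":       # S106: end of suturing
                break
            values = apply_command(values, cmd)  # S107
            render(values)                       # S103/S104 repeated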
 In this way, by superimposing a tool the operator 71 is accustomed to using in the real world on the image from the main camera as a virtual tool, and superimposing the image from the sub camera on that virtual tool, the effects described above are obtained: for example, the sub camera's image can be presented in a way that is easy for the operator 71 to view.
 Although the embodiment above uses the mirror 211 as the example virtual tool, the virtual tool may be something other than a mirror. For example, the virtual tool may be a loupe, with an enlarged image displayed inside it. Displaying such a loupe as a virtual tool makes it possible to observe a magnified region of interest while keeping the overall view in sight.
 The virtual tool may also be a spotlight that, for example, brightens part of the image from the main camera or the sub camera. Displaying such a spotlight as a virtual tool is useful when the brightness of a local area needs to be controlled.
 In this case, the image brightened under the spotlight can be a portion of the image captured by the main camera. That is, when the virtual tool is a spotlight, the spotlight is superimposed on the image captured by the main camera, and the user is presented with an image in which the part of the main image illuminated by the spotlight is displayed brightly.
 Also, whereas specifying the area to be brightened conventionally required indicating a position on the screen, with the present technology the area can be indicated by moving the virtual spotlight, which makes the operation simple.
 The virtual tool may also be a lens filter, with a special-light image displayed inside the filter, realizing a display that multiplexes special light. Images of different character can coexist naturally, and the place where the image should switch can be indicated simply.
<Other display examples>
 FIG. 13 shows another example of a screen displayed on the display device 53. In the screen example of FIG. 13, the main camera has captured a sub camera 502-1 and a sub camera 502-2. When a sub camera appears in the main camera's image in this way, the image captured by that sub camera is displayed near the sub camera as it appears in the main camera's image.
 図13に示した例では、メインカメラにサブカメラ502-1が撮像されたために、そのサブカメラ502-1の近傍(図13では、サブカメラ502-1の左側)に、サブカメラ502-1で撮像されている画像202-1が表示されている。また、メインカメラにサブカメラ502-2が撮像されたために、そのサブカメラ502-2の近傍(図13では、サブカメラ502-2の下側)に、サブカメラ502-2で撮像されている画像202-2が表示されている。 In the example shown in FIG. 13, since the sub camera 502-1 is captured by the main camera, the sub camera 502-1 is located in the vicinity of the sub camera 502-1 (on the left side of the sub camera 502-1 in FIG. 13). The image 202-1 captured at is displayed. Further, since the sub camera 502-2 is imaged by the main camera, the sub camera 502-2 is imaged in the vicinity of the sub camera 502-2 (below the sub camera 502-2 in FIG. 13). An image 202-2 is displayed.
 このような表示が行われることで、メインカメラとサブカメラの位置関係、サブカメラ同士の位置関係などの情報や、サブカメラが向いている方向(角度)などの情報を、ユーザに提示することが可能となる。また、そのような情報が提示されることで、サブカメラで撮像されている画像が、どこを映し出した画像であるかを、容易に理解させることが可能となる。 By such display, information such as the positional relationship between the main camera and the sub-camera, the positional relationship between the sub-cameras, and the information such as the direction (angle) that the sub-camera is facing is presented to the user. Is possible. In addition, by presenting such information, it is possible to easily understand where the image captured by the sub camera is an image.
 なお、メインカメラでサブカメラが撮像されていないときにも、サブカメラの画像をメインカメラの画像に重畳して表示するようにしても良い。このようにした場合、サブカメラの絵をコンピュータグラフィックスで描画し、その近傍にサブカメラで撮像されている画像が表示されるようにしても良い。 It should be noted that even when the sub camera is not picked up by the main camera, the sub camera image may be displayed superimposed on the main camera image. In this case, a picture of the sub camera may be drawn by computer graphics, and an image captured by the sub camera may be displayed in the vicinity thereof.
 またこのようにした場合、コンピュータグラフィックスで描画されるサブカメラが、表示される位置は、メインカメラとサブカメラの位置関係、サブカメラ同士の位置関係が反映された位置とされる。よって、メインカメラ、サブカメラのそれぞれの位置に関する情報が取得されていることが前提とされる。 In this case, the position where the sub camera drawn by computer graphics is displayed is a position reflecting the positional relationship between the main camera and the sub camera and the positional relationship between the sub cameras. Therefore, it is assumed that information on the positions of the main camera and the sub camera is acquired.
 For example, as described with reference to FIG. 8, a position sensor 421 is provided, and information on the positions of the main camera and the sub cameras is acquired from the position sensor 421. In the case of the robot described with reference to FIGS. 6 and 7, the positional relationship among the arms 291 to 293 can be determined in advance from the arm position control information and the angle-of-view information (zoom magnification and the like) of each camera, so that such known information can also be used.
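 For illustration, one way such known positional information could be turned into an on-screen drawing position is a standard pinhole projection of the sub camera's 3-D position into main-camera pixel coordinates. This is a generic sketch under assumed names (the pose R, t and the intrinsics fx, fy, cx, cy), not the method of this publication.

```python
import numpy as np

def project_to_main_image(p_world: np.ndarray,
                          R: np.ndarray, t: np.ndarray,
                          fx: float, fy: float, cx: float, cy: float):
    """Hypothetical sketch: project a sub camera's 3-D position (e.g. from
    the position sensor 421 or the arm kinematics) into main-camera pixels."""
    p_cam = R @ p_world + t            # world -> main-camera coordinates
    if p_cam[2] <= 0:
        return None                    # behind the main camera: use an off-screen cue instead
    u = fx * p_cam[0] / p_cam[2] + cx  # fx, fy vary with zoom magnification
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v
```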
 When the sub camera's image is superimposed on the main camera's image even though the sub camera is not captured by the main camera, the frame of the sub camera's image may, for example, be drawn with a dotted line or the like to indicate that the sub camera is outside the range captured by the main camera.
 FIG. 14 shows another display example. The display example shown in FIG. 14 is basically the same as that of FIG. 13: when the forceps 35b or the arm 291 is captured by the main camera, the image from the sub camera attached to the forceps 35b or the arm 291 is displayed in its vicinity.
 For example, in the robot shown in FIG. 7, when the image shown in FIG. 14 is displayed on the display device 53, the arm 291 and the arm 294 (not shown in FIG. 7) are captured by the arm 292, which carries the main camera.
 In this case, the image 202-1 captured by the sub camera 252 attached to the arm 291 is displayed in the vicinity of the arm 291. Similarly, the image 202-2 captured by a sub camera (not shown) attached to the arm 294 is displayed in the vicinity of the arm 294.
 The arm 293 is not captured by the arm 292 (the arm camera) serving as the main camera. An arm not captured by the main camera, such as the arm 293, is drawn by computer graphics, and the image 202-3 captured by the sub camera 253 attached to the arm 293 is displayed in the vicinity of the drawn arm 293.
 Such a display again presents the user with information such as the positional relationship between the main camera and the sub cameras, the positional relationship among the sub cameras, and the direction (angle) in which each sub camera is facing, making it easy to understand which part of the scene the image captured by each sub camera shows.
 As described with reference to FIGS. 13 and 14, when the image captured by a sub camera is displayed superimposed on the image captured by the main camera, the display may also take depth information into account. For example, an image captured by a sub camera located farther away in the depth direction may be displayed smaller.
 When a stereo camera is used, as in the robot case, and depth information is available, for example the distance between the robot and the patient or the distance to the affected part, that information may also be used in the display. For example, the image from a sub camera may be displayed at the depth position of the corresponding arm.
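 As a rough sketch of such depth-aware sizing, the sub-image scale could shrink in inverse proportion to the camera's depth, clamped to a readable range; all constants here are illustrative assumptions, not values from the publication.

```python
def thumbnail_scale(depth_mm: float,
                    reference_depth_mm: float = 100.0,
                    base_scale: float = 0.25,
                    min_scale: float = 0.08,
                    max_scale: float = 0.35) -> float:
    """Hypothetical sketch: shrink a sub-image as its camera moves away in
    depth, so the overlay reflects the scene's depth ordering."""
    scale = base_scale * reference_depth_mm / max(depth_mm, 1e-3)
    return max(min_scale, min(max_scale, scale))
```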
 <Other mounting locations for the sub camera>
 In the embodiments described above, for example as explained with reference to FIG. 4, FIG. 5, or FIG. 7, the main camera and the sub camera are mounted on different surgical instruments. For example, in the example shown in FIG. 4, the main camera is the endoscope 20a, and an endoscope 20b different from the endoscope 20a is used as the sub camera.
 The scope of application of the present technology is not, however, limited to cases in which the main camera and the sub camera are mounted on different surgical instruments. For example, as shown in FIG. 15, the present technology can also be applied when the main camera and the sub camera are mounted on the same surgical instrument.
 Referring to FIG. 15, the endoscope 601 includes a main camera 601a at its tip and sub cameras 602a and 602b on part of its housing. The endoscope 601 can thus be configured to include both the main camera 601a and the sub cameras 602, and the present technology can be applied even when the main camera 601a and the sub cameras 602 are provided on a single surgical instrument such as the endoscope 601.
 <About recording media>
 The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
 FIG. 16 is a block diagram showing an example of the hardware configuration of a computer that executes the series of processes described above by means of a program. In the computer, a CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are connected to one another via a bus 1004. An input/output interface 1005 is further connected to the bus 1004. An input unit 1006, an output unit 1007, a storage unit 1008, a communication unit 1009, and a drive 1010 are connected to the input/output interface 1005.
 The input unit 1006 includes a keyboard, a mouse, a microphone, and the like. The output unit 1007 includes a display, a speaker, and the like. The storage unit 1008 includes a hard disk, a nonvolatile memory, and the like. The communication unit 1009 includes a network interface and the like. The drive 1010 drives a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
 In the computer configured as described above, the CPU 1001 performs the series of processes described above by, for example, loading a program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executing it.
 The program executed by the computer (CPU 1001) can be provided by being recorded on the removable medium 1011 as a package medium, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
 In the computer, the program can be installed in the storage unit 1008 via the input/output interface 1005 by loading the removable medium 1011 into the drive 1010. The program can also be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the storage unit 1008. Alternatively, the program can be installed in advance in the ROM 1002 or the storage unit 1008.
 The program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at necessary timings, such as when a call is made.
 In this specification, a system means an entire apparatus composed of a plurality of apparatuses.
 The effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
 The embodiments of the present technology are not limited to those described above, and various modifications can be made without departing from the gist of the present technology.
 Note that the present technology can also have the following configurations.
 (1)
 A medical image processing apparatus including:
 a conversion unit that takes one of the images captured by a plurality of imaging units as a main image and another image as a sub image, and converts the sub image into an image for superimposition; and
 a superimposing unit that superimposes the converted image at a predetermined position in the main image.
 (2)
 The medical image processing apparatus according to (1), in which, when a display position is moved, the conversion unit converts the sub image in accordance with the movement.
 (3)
 The medical image processing apparatus according to (1) or (2), in which a display position and a display area of the sub image are set with reference to the size of a main subject in the main image.
 (4)
 The medical image processing apparatus according to any one of (1) to (3), in which the sub image is displayed at a position that satisfies the same positional relationship as that between the imaging unit that captured the main image and the imaging unit that captured the sub image.
 (5)
 The medical image processing apparatus according to any one of (1) to (4), in which the plurality of imaging units are endoscopes.
 (6)
 The medical image processing apparatus according to any one of (1) to (5), in which, among the plurality of imaging units, the imaging unit that captures the sub image is provided on a surgical instrument.
 (7)
 The medical image processing apparatus according to any one of (1) to (5), in which, among the plurality of imaging units, the imaging unit that captures the sub image is provided on an arm.
 (8)
 A medical image processing apparatus including a superimposing unit that takes one of the images captured by a plurality of imaging units as a main image and another image as a sub image and superimposes the sub image on the main image, in which the imaging unit capturing the sub image is displayed in the main image, and the sub image is displayed in the vicinity of that imaging unit.
 (9)
 The medical image processing apparatus according to (8), in which the imaging unit capturing the sub image is an imaging unit captured by the imaging unit capturing the main image.
 (10)
 The medical image processing apparatus according to (8), in which the imaging unit capturing the sub image is displayed as a picture replicating the surgical instrument on which that imaging unit is provided, and the picture replicating the surgical instrument is displayed at a position that satisfies the same positional relationship as that between the imaging unit that captured the main image and the imaging unit that captured the sub image.
 (11)
 A medical image processing method including the steps of: taking one of the images captured by a plurality of imaging units as a main image and another image as a sub image; converting the sub image into an image for superimposition; and superimposing the converted image at a predetermined position in the main image.
 (12)
 A medical image processing method including the steps of: taking one of the images captured by a plurality of imaging units as a main image and another image as a sub image, and superimposing the sub image on the main image; and displaying, in the main image, the imaging unit capturing the sub image and displaying the sub image in the vicinity of that imaging unit.
 (13)
 A program for causing a computer to execute processing including the steps of: taking one of the images captured by a plurality of imaging units as a main image and another image as a sub image; converting the sub image into an image for superimposition; and superimposing the converted image at a predetermined position in the main image.
 (14)
 A program for causing a computer to execute processing including the steps of: taking one of the images captured by a plurality of imaging units as a main image and another image as a sub image, and superimposing the sub image on the main image; and displaying, in the main image, the imaging unit capturing the sub image and displaying the sub image in the vicinity of that imaging unit.
 10 endoscopic surgery system, 20 endoscope, 35 forceps, 53 display device, 83 image processing unit, 201 image, 211 mirror, 231, 232 forceps, 233 thread, 251 sub camera, 281 operation unit, 282 main body, 283 monitor unit, 291 to 293 arms, 411 virtual tool drawing superimposing unit, 412 image conversion processing unit, 421 position sensor

Claims (14)

  1.  A medical image processing apparatus comprising:
     a conversion unit that takes one of the images captured by a plurality of imaging units as a main image and another image as a sub image, and converts the sub image into an image for superimposition; and
     a superimposing unit that superimposes the converted image at a predetermined position in the main image.
  2.  The medical image processing apparatus according to claim 1, wherein, when a display position is moved, the conversion unit converts the sub image in accordance with the movement.
  3.  The medical image processing apparatus according to claim 1, wherein a display position and a display area of the sub image are set with reference to the size of a main subject in the main image.
  4.  The medical image processing apparatus according to claim 1, wherein the sub image is displayed at a position that satisfies the same positional relationship as that between the imaging unit that captured the main image and the imaging unit that captured the sub image.
  5.  The medical image processing apparatus according to claim 1, wherein the plurality of imaging units are endoscopes.
  6.  The medical image processing apparatus according to claim 1, wherein, among the plurality of imaging units, the imaging unit that captures the sub image is provided on a surgical instrument.
  7.  The medical image processing apparatus according to claim 1, wherein, among the plurality of imaging units, the imaging unit that captures the sub image is provided on an arm.
  8.  A medical image processing apparatus comprising a superimposing unit that takes one of the images captured by a plurality of imaging units as a main image and another image as a sub image and superimposes the sub image on the main image, wherein the imaging unit capturing the sub image is displayed in the main image, and the sub image is displayed in the vicinity of that imaging unit.
  9.  The medical image processing apparatus according to claim 8, wherein the imaging unit capturing the sub image is an imaging unit captured by the imaging unit capturing the main image.
  10.  The medical image processing apparatus according to claim 8, wherein the imaging unit capturing the sub image is displayed as a picture replicating the surgical instrument on which that imaging unit is provided, and the picture replicating the surgical instrument is displayed at a position that satisfies the same positional relationship as that between the imaging unit that captured the main image and the imaging unit that captured the sub image.
  11.  A medical image processing method comprising the steps of: taking one of the images captured by a plurality of imaging units as a main image and another image as a sub image; converting the sub image into an image for superimposition; and superimposing the converted image at a predetermined position in the main image.
  12.  A medical image processing method comprising the steps of: taking one of the images captured by a plurality of imaging units as a main image and another image as a sub image, and superimposing the sub image on the main image; and displaying, in the main image, the imaging unit capturing the sub image and displaying the sub image in the vicinity of that imaging unit.
  13.  A program for causing a computer to execute processing comprising the steps of: taking one of the images captured by a plurality of imaging units as a main image and another image as a sub image; converting the sub image into an image for superimposition; and superimposing the converted image at a predetermined position in the main image.
  14.  A program for causing a computer to execute processing comprising the steps of: taking one of the images captured by a plurality of imaging units as a main image and another image as a sub image, and superimposing the sub image on the main image; and displaying, in the main image, the imaging unit capturing the sub image and displaying the sub image in the vicinity of that imaging unit.
PCT/JP2017/029919 2016-09-05 2017-08-22 Medical image processing device, medical image processing method, and program WO2018043205A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016172421 2016-09-05
JP2016-172421 2016-09-05

Publications (1)

Publication Number Publication Date
WO2018043205A1 true WO2018043205A1 (en) 2018-03-08

Family

ID=61301764

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/029919 WO2018043205A1 (en) 2016-09-05 2017-08-22 Medical image processing device, medical image processing method, and program

Country Status (1)

Country Link
WO (1) WO2018043205A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080030578A1 (en) * 2006-08-02 2008-02-07 Inneroptic Technology Inc. System and method of providing real-time dynamic imagery of a medical procedure site using multiple modalities
JP2011509715A (en) * 2008-01-10 2011-03-31 タイコ ヘルスケア グループ リミテッド パートナーシップ Imaging system for a surgical device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021075306A1 (en) * 2019-10-17 2021-04-22 ソニー株式会社 Surgical information processing device, surgical information processing method, and surgical information processing program

Similar Documents

Publication Publication Date Title
JP7067467B2 (en) Information processing equipment for medical use, information processing method, information processing system for medical use
CN111278344B (en) Surgical Arm System and Surgical Arm Control System
WO2018123613A1 (en) Medical image processing apparatus, medical image processing method, and program
JP7151109B2 (en) Medical imaging device and medical observation system
JP7480477B2 (en) Medical observation system, control device and control method
JP7095693B2 (en) Medical observation system
WO2018088105A1 (en) Medical support arm and medical system
JPWO2018168261A1 (en) CONTROL DEVICE, CONTROL METHOD, AND PROGRAM
WO2019239942A1 (en) Surgical observation device, surgical observation method, surgical light source device, and light irradiation method for surgery
WO2018088113A1 (en) Joint driving actuator and medical system
JP2021003531A (en) Surgery support system, control device, and control method
JP7135869B2 (en) Light emission control device, light emission control method, program, light emitting device, and imaging device
WO2021049220A1 (en) Medical support arm and medical system
WO2018221068A1 (en) Information processing device, information processing method and information processing program
WO2019181242A1 (en) Endoscope and arm system
JP7092111B2 (en) Imaging device, video signal processing device and video signal processing method
US11883120B2 (en) Medical observation system, medical signal processing device, and medical signal processing device driving method
WO2018043205A1 (en) Medical image processing device, medical image processing method, and program
WO2021256168A1 (en) Medical image-processing system, surgical image control device, and surgical image control method
WO2017221491A1 (en) Control device, control system, and control method
WO2020203164A1 (en) Medical system, information processing device, and information processing method
WO2020009127A1 (en) Medical observation system, medical observation device, and medical observation device driving method
JPWO2020045014A1 (en) Medical system, information processing device and information processing method
JP7420141B2 (en) Image processing device, imaging device, image processing method, program
WO2022269992A1 (en) Medical observation system, information processing device, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17846210

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17846210

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP