WO2022163362A1 - Image processing device, image processing method, and surgical microscope system - Google Patents

Image processing device, image processing method, and surgical microscope system

Info

Publication number
WO2022163362A1
WO2022163362A1 (PCT/JP2022/000855)
Authority
WO
WIPO (PCT)
Prior art keywords
image
lens
stereo
display
display image
Prior art date
Application number
PCT/JP2022/000855
Other languages
French (fr)
Japanese (ja)
Inventor
知之 大月
Original Assignee
Sony Group Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2022163362A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/13 Ophthalmic microscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 9/00 Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • A61F 9/007 Methods or devices for eye surgery

Definitions

  • the present disclosure relates to an image processing device, an image processing method, and a surgical microscope system.
  • As a method of refractive correction in ophthalmology, inserting an artificial lens called an intraocular lens (IOL) into the eye is widely practiced to correct refractive errors such as those of the crystalline lens and to improve visual functions such as visual acuity.
  • The most widely used intraocular lens is one that is inserted into the lens capsule as a replacement for the crystalline lens removed by cataract surgery.
  • In addition to intraocular lenses placed in the lens capsule, there are various other intraocular lenses, such as those fixed (indwelled) in the ciliary sulcus (phakic IOLs).
  • In Patent Literature 1, a tomographic image of the eye is used to make the positional relationship between the surgical tool and the treatment target portion of the eye easier to grasp.
  • However, when tomographic images are used to grasp the positional relationship between the surgical tool and the treatment target site of the eye, the surgical microscope must be equipped with an optical coherence tomography (OCT) device or the like for capturing the tomographic images, which complicates the device and increases the size of the lens barrel.
  • Therefore, the present disclosure proposes an image processing device, an image processing method, and a surgical microscope system that make it easier for the operator to understand the positional relationship between the surgical tool and the treatment target region of the eye while simplifying the device configuration and suppressing the device size.
  • An image processing device according to an embodiment of the present disclosure includes an image input unit that receives a surgical field image of a patient's eye, a three-dimensional model reception unit that receives a three-dimensional model of part or all of the crystalline lens of the eye, a stereo image generation unit that generates a stereo image of the crystalline lens from the three-dimensional model, and a display image generation unit that generates a display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
  • In an image processing method according to an embodiment of the present disclosure, an image processing device receives a surgical field image of a patient's eye, receives a three-dimensional model of part or all of the crystalline lens of the eye, generates a stereo image of the crystalline lens from the three-dimensional model, and generates a display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
  • A surgical microscope system according to an embodiment of the present disclosure includes a surgical microscope that obtains a surgical field image of a patient's eye, an image processing device that generates a display image, and a display device that displays the display image. The image processing device includes an image input unit that receives the surgical field image, a three-dimensional model reception unit that receives a three-dimensional model of part or all of the crystalline lens of the eye, a stereo image generation unit that generates a stereo image of the crystalline lens from the three-dimensional model, and a display image generation unit that generates the display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
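  • Note: the four units recited above form a simple data-flow pipeline (surgical field image and 3D lens model in, composed display image out). The sketch below is a minimal, hypothetical Python rendering of that structure; the class, method, and helper names are illustrative assumptions rather than the patented implementation, and greyscale images are assumed.

```python
import numpy as np

class ImageProcessingDevice:
    """Sketch of the data flow between the four units (13a-13d). Greyscale
    uint8 images assumed; the stereo renderer here is a trivial placeholder
    (a more concrete projection sketch appears further below)."""

    def __init__(self) -> None:
        self.field_image = None   # surgical field (front) image from the microscope
        self.lens_model = None    # 3D model of part or all of the crystalline lens

    def receive_surgical_field_image(self, image: np.ndarray) -> None:
        """Image input unit 13c."""
        self.field_image = image

    def receive_lens_model(self, model: np.ndarray) -> None:
        """3D model reception unit 13a (e.g. an N x 3 array of surface points)."""
        self.lens_model = model

    def generate_stereo_image(self) -> tuple:
        """Stereo image generation unit 13b: left/right views of the lens model."""
        blank = np.zeros((200, 200), dtype=np.uint8)   # placeholder views
        return blank.copy(), blank.copy()

    def generate_display_image(self) -> np.ndarray:
        """Display image generation unit 13d: embed one view in the top-right
        corner of the surgical field image, as in the Example 1 layout."""
        left_view, _right_view = self.generate_stereo_image()
        h, w = left_view.shape
        out = self.field_image.copy()
        out[:h, -w:] = left_view
        return out
```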
  • FIG. 1 is a diagram showing an example of the schematic configuration of a surgical microscope system according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram showing an example of the schematic configuration of a surgical microscope according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram showing an example of the schematic configuration of an image processing device according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram showing an example of a tomographic image for 3D model generation according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram showing an example of a stereoscopic image for 3D model generation according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram showing an example of a stereo image corresponding to a 3D model according to an embodiment of the present disclosure.
  • FIG. 7 is a diagram showing Example 1 of a display image according to an embodiment of the present disclosure.
  • FIG. 8 is a diagram showing Example 2 of a display image according to an embodiment of the present disclosure.
  • FIG. 9 is a diagram showing Example 3 of a display image according to an embodiment of the present disclosure.
  • FIG. 10 is a diagram showing Example 4 of a display image according to an embodiment of the present disclosure.
  • FIG. 11 is a diagram showing Example 5 of a display image according to an embodiment of the present disclosure.
  • FIG. 12 is a diagram showing an example of the schematic configuration of a computer according to an embodiment of the present disclosure.
  • The present disclosure will be described in the following order: 1. Embodiment; 1-1. Example of schematic configuration of the surgical microscope system; 1-2. Example of schematic configuration of the surgical microscope; 1-3. Schematic configuration of the image processing device and example of image processing; 1-4. Actions and effects; 2. Example of schematic configuration of a computer; 3. Supplementary notes.
  • FIG. 1 is a diagram showing an example of a schematic configuration of a surgical microscope system 1 according to this embodiment.
  • the surgical microscope system 1 has a surgical microscope 10 and a patient bed 20.
  • This surgical microscope system 1 is a system used for eye surgery. The patient undergoes eye surgery while lying on the patient bed 20 . An operator, who is a doctor, performs surgery while observing the patient's eye through the surgical microscope 10 .
  • the surgical microscope 10 has an objective lens 11, an eyepiece lens 12, an image processing device 13, and a monitor 14.
  • the objective lens 11 and the eyepiece lens 12 are lenses for magnifying and observing the eye of the patient to be operated.
  • the image processing device 13 outputs various images, various information, etc. by performing predetermined image processing on the image captured through the objective lens 11 .
  • the monitor 14 displays an image captured through the objective lens 11, various images generated by the image processing device 13, various information, and the like. This monitor 14 may be provided separately from the surgical microscope 10 .
  • the operator looks into the eyepiece 12 and performs surgery while observing the patient's eye through the objective lens 11. Further, the operator performs surgery while confirming various images (for example, an image before image processing, an image after image processing, etc.) and various information displayed on the monitor 14 .
  • FIG. 2 is a diagram showing an example of a schematic configuration of the surgical microscope 10 according to this embodiment.
  • The surgical microscope 10 includes, in addition to the objective lens 11, the eyepiece lens 12, the image processing device 13, and the monitor 14, a light source 51, an observation optical system 52, a front image capturing unit 53, a presentation unit 54, an interface unit 55, and a speaker 56.
  • the monitor 14 and the presentation unit 54 correspond to a display device.
  • the light source 51 emits illumination light under the control of the control unit 13A included in the image processing device 13 to illuminate the eyes of the patient.
  • the observation optical system 52 is composed of optical elements such as the objective lens 11, a half mirror 52a, and lenses (not shown).
  • the observation optical system 52 guides the light (observation light) reflected from the patient's eye to the eyepiece 12 and the front image capturing section 53 .
  • the light reflected from the patient's eye enters the half mirror 52a as observation light via the objective lens 11, a lens (not shown), or the like.
  • Approximately half of the observation light incident on the half mirror 52 a passes through the half mirror 52 a as it is, and enters the eyepiece 12 via the transmissive presentation unit 54 .
  • the other half of the observation light incident on the half mirror 52 a is reflected by the half mirror 52 a and enters the front image capturing section 53 .
  • the front image capturing unit 53 is composed of, for example, a video camera.
  • The front image capturing unit 53 receives the observation light incident from the observation optical system 52 and photoelectrically converts it to capture an image of the patient's eye observed from the front, that is, a front image of the patient's eye captured approximately along the eye axis direction.
  • the front image capturing unit 53 captures (captures) a front image under the control of the image processing device 13 and supplies the obtained front image to the image processing device 13 .
  • the eyepiece 12 collects the observation light incident from the observation optical system 52 via the presentation unit 54 and forms an optical image of the patient's eye. An optical image of the patient's eye is thereby observed by the operator looking through the eyepiece 12 .
  • the presentation unit 54 is composed of a transmissive display device or the like, and is arranged between the eyepiece 12 and the observation optical system 52 .
  • The presentation unit 54 transmits the observation light incident from the observation optical system 52 so that it enters the eyepiece lens 12, and also presents (displays) various images (for example, a front image, a stereo image, etc.) and various information supplied from the image processing device 13 as necessary.
  • various images, various information, and the like may be presented, for example, superimposed on the optical image of the patient's eye, or may be presented in the periphery of the optical image so as not to interfere with the optical image.
  • the image processing device 13 has a control section 13A that controls the operation of the surgical microscope 10 as a whole.
  • the control section 13A changes the illumination conditions of the light source 51 or changes the zoom magnification of the observation optical system 52 .
  • the control unit 13A controls image acquisition by the front image photographing unit 53 based on the operation information of the operator or the like supplied from the interface unit 55 and the like.
  • the interface unit 55 is composed of, for example, a communication unit and the like.
  • the communication unit receives commands from an operation unit such as a touch panel superimposed on the monitor 14, a controller, a remote controller (not shown), or the like, and communicates with external devices.
  • the interface unit 55 supplies the image processing apparatus 13 with information and the like according to the operation of the operator.
  • the interface unit 55 also outputs device control information for controlling the external device, which is supplied from the image processing apparatus 13, to the external device.
  • the monitor 14 displays various images such as front images and stereo images and various information according to control by the control unit 13A of the image processing device 13 .
  • When a dangerous situation is detected during surgery, the speaker 56 outputs a buzzer sound, a melody sound, a message (voice), or the like under the control of the control unit 13A of the image processing device 13 in order to notify the operator of the dangerous situation.
  • the surgical microscope 10 may be provided with a rotating light or indicator light (lamp) for informing the operator or the like of a dangerous situation.
  • In the surgical microscope 10, a display image is generated by the image processing device 13 based on various images such as the front image and the stereo image, and the generated display image is displayed on a display device such as the monitor 14 or the presentation unit 54, which allows the operator to grasp the positional relationship between the surgical tool and the treatment target site of the eye.
  • FIG. 3 is a diagram showing an example of a schematic configuration (configuration and processing flow) of the image processing apparatus 13 according to this embodiment.
  • For example, when the treatment target site is the lens capsule, an inexperienced operator may not be able to judge the depth relationship between the surgical tool and the posterior capsule, and the surgical tool may damage the posterior capsule.
  • In the following, stereo image presentation processing based on a 3D model obtained by measuring the crystalline lens (which may include the lens capsule) preoperatively will be described.
  • the image processing device 13 includes a 3D model reception unit (three-dimensional model reception unit) 13a, a stereo image generation unit 13b, an image input unit 13c, and a display image generation unit 13d.
  • the 3D model receiving unit 13a receives from an external device a 3D model of the treatment target site measured preoperatively in the preoperative planning.
  • the 3D model may include only the site to be treated, or may include other sites for size adjustment and posture adjustment, which will be described later.
  • the 3D model may include the corneal limbus for size matching based on the cornea in the surgical field image, or may include the iris for posture matching using the iris pattern.
  • the 3D model may also include a preoperative frontal image of the target eye whose positional relationship with the 3D model is known for use in alignment. For example, blood vessels around the cornea in the preoperative front image can be used for posture adjustment.
  • FIG. 4 is a diagram showing an example of a tomographic image for 3D model generation according to this embodiment.
  • FIG. 5 is a diagram showing an example of a stereoscopic image (a stereoscopic image viewed from a specific direction) for generating a 3D model according to this embodiment.
  • Multiple tomographic images (tomograms) such as the one shown in FIG. 4 and a stereoscopic image such as the one shown in FIG. 5 are used to construct the 3D model. This 3D model is supplied to the stereo image generation unit 13b.
  • When the 3D model is obtained by OCT (Optical Coherence Tomography), the portion of the crystalline lens hidden behind the iris is missing from the model. The following description is based on the premise that a 3D model of the crystalline lens without such missing portions has been obtained.
  • OCT is a technique that irradiates the eye to be treated with light such as near-infrared light and reconstructs the waves reflected from each tissue of the eye to generate an image (a tomographic image, that is, a cross-sectional image in the depth direction of the eye).
  • images (tomographic images) captured by a Scheimpflug camera may be used for the 3D model.
  • The stereo image generation unit 13b generates, based on the 3D model received by the 3D model reception unit 13a, a stereo image (stereogram) corresponding to the 3D model, that is, two images of the crystalline lens viewed from two specific viewpoints corresponding to the left eye and the right eye.
  • the stereo image generator 13b supplies the stereo image of the treatment target site to the display image generator 13d.
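  • One way to realize the two-viewpoint rendering described above is to project the model's surface points with a depth-dependent horizontal shift, so that the left and right views carry horizontal parallax. The following is a hedged sketch under that assumption (simple parallel projection onto a synthetic, lens-shaped point cloud); it is not the renderer used in the present disclosure.

```python
import numpy as np

def project_view(points: np.ndarray, shear: float, size: int = 256) -> np.ndarray:
    """Project N x 3 lens surface points (x, y, z in mm) onto a 2D image.

    `shear` tilts the projection horizontally in proportion to depth z,
    which emulates viewing the model from a viewpoint offset to one side.
    Pixel values encode the height of the nearest surface point.
    """
    x = points[:, 0] + shear * points[:, 2]
    y = points[:, 1]
    z = points[:, 2]

    # Normalise x, y into pixel coordinates.
    u = ((x - x.min()) / (np.ptp(x) + 1e-9) * (size - 1)).astype(int)
    v = ((y - y.min()) / (np.ptp(y) + 1e-9) * (size - 1)).astype(int)

    # Keep the highest point per pixel as a crude depth buffer.
    img = np.zeros((size, size), dtype=np.float32)
    np.maximum.at(img, (v, u), z - z.min())
    return img

def make_stereo_pair(points: np.ndarray, inward_angle_deg: float = 3.0):
    """Left/right views; the inward angle sets how far apart the viewpoints are."""
    shear = np.tan(np.radians(inward_angle_deg))
    return project_view(points, -shear), project_view(points, +shear)

# Example: a synthetic lens-like cap as a stand-in for the preoperative 3D model.
theta = np.random.uniform(0, 2 * np.pi, 20000)
r = np.sqrt(np.random.uniform(0, 1, 20000)) * 4.5            # ~9 mm lens diameter
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta),
                       2.0 * (1 - (r / 4.5) ** 2)])           # ~4 mm central thickness
left_view, right_view = make_stereo_pair(pts)
```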
  • FIG. 6 is a diagram showing an example of a stereo image corresponding to the 3D model according to this embodiment.
  • the stereo image shows, for example, the lens (which may include the lens capsule) by contour lines.
  • Here, the crystalline lens refers to either the crystalline lens alone or the crystalline lens together with the lens capsule.
  • In this example, the stereo image uses contour lines to improve visibility; however, the stereo image is not limited to this and may be any image that gives the operator a stereoscopic impression of the crystalline lens.
  • Note that in FIG. 6 and in the examples of FIGS. 7 to 11 below, only one of the two images forming the stereo image is shown for simplicity of explanation.
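  • As a concrete illustration of the contour-line presentation mentioned above, one view of the stereo image can be drawn as iso-depth contours of the lens surface. A minimal sketch, assuming a depth map of the anterior lens surface is available (a synthetic stand-in is used here, and the 0.25 mm contour spacing is an arbitrary choice, not a value from the present disclosure):

```python
import numpy as np
import matplotlib.pyplot as plt

# Depth map of the anterior lens surface on a 9 mm x 9 mm grid (synthetic stand-in).
x, y = np.meshgrid(np.linspace(-4.5, 4.5, 256), np.linspace(-4.5, 4.5, 256))
r2 = x**2 + y**2
depth = np.where(r2 <= 4.5**2, 2.0 * (1 - r2 / 4.5**2), np.nan)  # mm above the equator

# Iso-depth contour lines every 0.25 mm, mimicking the contour-line presentation.
plt.figure(figsize=(4, 4))
plt.contour(x, y, depth, levels=np.arange(0, 2.25, 0.25), colors="white")
plt.gca().set_facecolor("black")
plt.gca().set_aspect("equal")
plt.title("Lens rendered as contour lines (one view of the stereo pair)")
plt.show()
```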
  • The image input unit 13c receives the surgical field image (front image) from the front image capturing unit 53 (see FIG. 2) and supplies the surgical field image (for example, a surgical field image at the start of surgery, a real-time surgical field image, etc.) to the display image generation unit 13d. In addition, the image input unit 13c supplies the surgical field image to the stereo image generation unit 13b as necessary.
  • the display image generation unit 13d generates a display image by adding the stereo image of the treatment target site supplied from the stereo image generation unit 13b to the surgical field image supplied from the image input unit 13c. For example, the display image generation unit 13d superimposes and synthesizes the stereo image on the surgical field image to generate a display image including the surgical field image and the stereo image.
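  • A display image of the kind shown in FIG. 7 can be produced by pasting or alpha-blending the stereo views onto a corner of the surgical field image. The sketch below assumes numpy images whose channel layouts match (for example, both greyscale); the corner placement and blending weight are illustrative assumptions.

```python
import numpy as np

def overlay(field: np.ndarray, inset: np.ndarray, top: int, left: int,
            alpha: float = 1.0) -> np.ndarray:
    """Superimpose `inset` onto `field` with its top-left corner at (top, left).

    alpha = 1.0 pastes the stereo view opaquely; alpha < 1.0 blends it with the
    underlying surgical field image. Matching channel layouts are assumed.
    """
    out = field.astype(np.float32).copy()
    h, w = inset.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1 - alpha) * region + alpha * inset
    return np.clip(out, 0, 255).astype(np.uint8)

def compose_display(field: np.ndarray, stereo_pair, margin: int = 10) -> np.ndarray:
    """Place the left/right stereo views side by side in the upper-right corner,
    roughly as in the Example 1 layout (FIG. 7). The field image is assumed to be
    large enough to hold both insets."""
    left_view, right_view = stereo_pair
    h, w = left_view.shape[:2]
    out = overlay(field, left_view, margin, field.shape[1] - 2 * w - 2 * margin)
    out = overlay(out, right_view, margin, field.shape[1] - w - margin)
    return out
```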
  • FIG. 7 is a diagram showing Example 1 of a display image according to the present embodiment.
  • the stereo image is positioned on the upper right side of the operative field image (in FIG. 7) and superimposed on the operative field image to generate the display image.
  • This display image is displayed by both or one of the monitor 14 and the presentation unit 54 .
  • the operator cannot grasp the depth of the transparent posterior lens capsule from only the front image of the surgical field image received by the image input unit 13c.
  • the operator can grasp the three-dimensional shape of the crystalline lens by visually recognizing the stereo image showing the outline of the crystalline lens. This makes it easier for the operator to grasp the positional relationship between the surgical tool and the lens capsule.
  • the size, three-dimensional posture, and position of the stereo image of the treatment target site presented in the display image are not aligned with the surgical field image, but they may all be aligned.
  • Even when only some of the size, posture, and position are aligned, the positional relationship becomes easier to grasp, although not as easily as when all of them are aligned.
  • Among these, aligning the three-dimensional posture makes the positional relationship particularly easy to grasp: it helps the operator accurately grasp the shape of the crystalline lens, which is anisotropic around the eye axis, and as a result the positional relationship becomes easier to understand.
  • FIG. 8 is a diagram showing example 2 of a display image according to the present embodiment.
  • the stereo image is positioned on the upper right side of the operative field image (in FIG. 8) and superimposed on the operative field image to generate the display image.
  • a stereo image is generated so that its size and orientation are the same as those of the surgical target site (lens) in the operative field image.
  • the size of the stereo image of the treatment target area is aligned with the size of the treatment target area in the surgical field image.
  • the display image generator 13d may adjust the size of the cornea in the operating field image and the size of the cornea in the 3D model to be the same.
  • the size of the treatment target area matches the surgical field image, making it easier for the operator to guess the positional relationship between the treatment target area and the surgical tool.
  • the depth of the lens is known, so the operator can easily predict the depth relationship between the surgical tool and the posterior capsule.
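  • Size alignment of this kind reduces to a single scale factor: the ratio of the corneal (limbus) diameter measured in the surgical field image to the corneal diameter in the projected 3D model. A hedged sketch assuming those diameters have already been measured in pixels (the detection step itself is omitted):

```python
import numpy as np

def scale_factor(cornea_px_field: float, cornea_px_model: float) -> float:
    """Ratio that makes the model's corneal diameter match the field image's."""
    return cornea_px_field / cornea_px_model

def resize_nearest(img: np.ndarray, s: float) -> np.ndarray:
    """Minimal nearest-neighbour rescale (kept dependency-free on purpose)."""
    h, w = img.shape[:2]
    new_h, new_w = max(1, int(round(h * s))), max(1, int(round(w * s)))
    rows = (np.arange(new_h) / s).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / s).astype(int).clip(0, w - 1)
    return img[rows][:, cols]

# Example: cornea measured as 240 px in the field image and 200 px in the model view.
s = scale_factor(240.0, 200.0)          # 1.2: enlarge the stereo view by 20 %
# stereo_view_resized = resize_nearest(stereo_view, s)   # hypothetical usage
```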
  • the posture of the stereo image of the treatment target site is adjusted so that it is three-dimensionally aligned with the posture of the eye in the surgical field image.
  • the iris pattern and the blood vessel pattern in the corneal periphery in both the surgical field image and the 3D model may be used.
  • For example, the display image generation unit 13d may match the iris or blood vessel patterns in the surgical field image and the 3D model and align the posture of the stereo image of the treatment target region three-dimensionally with the posture of the eye in the surgical field image.
  • the stereo image of the treatment target area and the posture of the treatment target area in the surgical field image are aligned, so the operator can easily guess the positional relationship between the treatment target area and the surgical tool.
  • the orientation around the eye axis (the position in the rotational direction) may be adjusted.
  • For example, the orientation around the eye axis may be adjusted so that the region of the eye shown at the top of the stereo image of the treatment target site is the same region as the one shown at the top of the surgical field image.
  • the iris pattern and the corneal peripheral blood vessel pattern in both the surgical field image and the 3D model may be used.
  • For example, the display image generation unit 13d may match the iris or blood vessel patterns in the surgical field image and the 3D model and align the orientation of the stereo image of the treatment target region around the eye axis with the orientation of the eye around the eye axis in the surgical field image.
  • the direction around the eye axis is defined, for example, by the angle in the direction of rotation around the eye axis with respect to a reference line orthogonal to the eye axis.
  • the treatment target site does not necessarily have the same shape around the eye axis. Therefore, it is effective to align the directions around the eye axis.
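  • The rotation about the eye axis can be estimated, for example, by unrolling the iris annulus of both images into polar strips and finding the circular shift that maximises their correlation. This is an illustrative approach consistent with the iris-pattern matching described above, not necessarily the method of the present disclosure; greyscale images and a known iris centre are assumed.

```python
import numpy as np

def unroll_iris(img: np.ndarray, center, r_in: float, r_out: float,
                n_theta: int = 360, n_r: int = 32) -> np.ndarray:
    """Sample the iris annulus between r_in and r_out into an (n_r, n_theta) strip."""
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_in, r_out, n_r)
    ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).astype(int)
    ys = ys.clip(0, img.shape[0] - 1)
    xs = xs.clip(0, img.shape[1] - 1)
    return img[ys, xs].astype(np.float32)

def rotation_about_eye_axis(strip_field: np.ndarray, strip_model: np.ndarray) -> float:
    """Angle (degrees) by which the model view must be rotated to match the field image."""
    a = strip_field.mean(axis=0) - strip_field.mean()
    b = strip_model.mean(axis=0) - strip_model.mean()
    # Circular cross-correlation over the angular axis via FFT.
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    shift = int(np.argmax(corr))
    return shift * 360.0 / len(a)
```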
  • FIG. 9 is a diagram showing Example 3 of a display image according to this embodiment.
  • the stereo image is positioned on the treatment target region of the surgical field image and superimposed on the surgical field image to generate the display image.
  • a stereo image is generated so that the size, posture, and position are the same as those of the treatment target region (lens) in the surgical field image.
  • position alignment is performed so that the position of the stereo image of the treatment target area matches the position of the treatment target area in the surgical field image.
  • the iris pattern and the corneal peripheral blood vessel pattern in both the surgical field image and the 3D model may be used.
  • the display image generator 13d may match the iris or blood vessel patterns in both the surgical field image and the 3D model, and align the position of the stereo image of the treatment target site with the position of the eye in the surgical field image.
  • the stereo image is displayed superimposed on the treatment target site in the surgical field, making it easier for the operator to guess the positional relationship between the surgical tool and the treatment target site.
  • the operator can observe the actual positional relationship instead of guessing.
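  • Position alignment as in Example 3 amounts to translating the already size- and posture-aligned stereo view so that the centre of the modelled lens lands on the centre of the lens in the surgical field image. A minimal sketch, assuming the lens (limbus) centre has been detected in both images beforehand (the detection itself, e.g. circle fitting, is omitted) and greyscale images are used:

```python
import numpy as np

def align_position(field: np.ndarray, stereo_view: np.ndarray,
                   center_field, center_model) -> np.ndarray:
    """Overlay stereo_view on field so that center_model maps onto center_field.

    Both centres are (row, col) pixel coordinates.
    """
    out = field.astype(np.float32).copy()
    top = int(round(center_field[0] - center_model[0]))
    left = int(round(center_field[1] - center_model[1]))

    # Clip the paste region to the bounds of the surgical field image.
    h, w = stereo_view.shape[:2]
    t0, l0 = max(top, 0), max(left, 0)
    t1, l1 = min(top + h, field.shape[0]), min(left + w, field.shape[1])
    src = stereo_view[t0 - top:t1 - top, l0 - left:l1 - left]

    # Blend so the underlying surgical field stays visible beneath the contours.
    out[t0:t1, l0:l1] = 0.5 * out[t0:t1, l0:l1] + 0.5 * src
    return np.clip(out, 0, 255).astype(np.uint8)
```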
  • FIG. 10 is a diagram showing Example 4 of a display image according to this embodiment.
  • In Example 4, a part of the stereo image and the corresponding part of the treatment target region of the surgical field image are enlarged, positioned at the upper left of the surgical field image (in FIG. 10), and superimposed on the surgical field image to generate the display image.
  • FIG. 10 shows an example of an enlarged presentation in which only the sizes of the surgical field image and the stereo image of the treatment target region are aligned.
  • In FIG. 10, two stereo images of the treatment target site are presented, but only the enlarged stereo image may be presented.
  • In addition, adjustment may be made so that the stereo image is obtained as if captured with the same stereo inward angle as the observation optical system used to acquire the surgical field image.
  • Furthermore, the distance between the surgical tool and the treatment target site may be calculated using the three-dimensional position of the surgical tool estimated from parallax information in the surgical field image and the shape information of the 3D model of the treatment target site whose size, three-dimensional posture, and position are aligned with the surgical field image, and information about the calculated distance, as well as warning information when the distance is short, may be presented.
  • Since this information is presented based on measurement results rather than on the operator's subjective judgment, the separation distance between the surgical tool and the treatment target site becomes easy to understand. Note that only the distance information may be presented, without presenting the stereo image of the treatment target site. There are various methods of presenting information about this distance.
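  • The separation distance described above can be computed once the tool tip's three-dimensional position has been triangulated from the stereo parallax and the 3D model has been registered to the surgical field. The sketch below uses a simple pinhole-camera disparity-to-depth relation and a nearest-point query against the model's surface points; the focal length, baseline, and pixel coordinates are illustrative numbers, not values from the present disclosure.

```python
import numpy as np

def triangulate_tip(disparity_px: float, focal_px: float, baseline_mm: float,
                    tip_px: tuple, principal_point: tuple) -> np.ndarray:
    """Depth from disparity (Z = f * B / d), then back-project the tip pixel."""
    z = focal_px * baseline_mm / disparity_px
    x = (tip_px[0] - principal_point[0]) * z / focal_px
    y = (tip_px[1] - principal_point[1]) * z / focal_px
    return np.array([x, y, z])

def separation_distance(tip_xyz: np.ndarray, capsule_points: np.ndarray) -> float:
    """Distance (same units as the model) from the tool tip to the nearest point
    of the registered posterior-capsule surface (N x 3 array)."""
    return float(np.min(np.linalg.norm(capsule_points - tip_xyz, axis=1)))

# Example with illustrative numbers: 240 px disparity, f = 2000 px, 24 mm baseline.
tip = triangulate_tip(240.0, 2000.0, 24.0,
                      tip_px=(980, 560), principal_point=(960, 540))
# dist_mm = separation_distance(tip, registered_capsule_points)  # hypothetical usage
```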
  • FIG. 11 is a diagram showing example 5 of a display image according to the present embodiment.
  • the separation distance is presented by a bar-shaped indicator. This enables the operator to grasp the separation distance between the surgical tool and the treatment target site.
  • the display image generation unit 13d aligns the size, orientation, and position of the lens of the stereo image with the size, orientation, and position of the lens of the surgical field image, and calculates the separation distance between the lens of the stereo image and the surgical instrument. Then, the display image generation unit 13d generates a bar-shaped indicator as separation distance information regarding the calculated separation distance and adds it to the display image.
  • the display image generation unit 13d may change the display mode, such as the color of the vicinity of the distal end of the surgical instrument and the color of the image frame attached to the periphery of the image, according to the separation distance.
  • the color may be green when the surgical tool is far from the site to be treated, and may change to orange or red as the surgical tool and the site to be treated are closer.
  • the sound may be presented by the speaker 56 so that the frequency of the presented sound increases as the surgical tool and the site to be treated approach.
  • a warning image or voice may be presented when the distance becomes less than a certain value.
  • For example, the control unit 13A of the image processing device 13 determines whether or not the separation distance is equal to or less than a predetermined value and, if it determines that the separation distance is equal to or less than the predetermined value, performs control to instruct the display device to present a warning image and to instruct the speaker 56 to present a warning sound.
  • At this time, under the control of the control unit 13A, the display image generation unit 13d aligns the size, orientation, and position of the crystalline lens in the stereo image with those of the crystalline lens in the surgical field image and calculates the separation distance between the crystalline lens in the stereo image and the surgical tool. Then, when the calculated separation distance becomes equal to or less than the predetermined value, the display image generation unit 13d generates warning information (for example, text, symbols, graphics, etc.) indicating that the surgical tool is close and adds it to the display image.
  • In addition, the control unit 13A of the image processing device 13 may determine whether or not the separation distance is equal to or less than a predetermined value and, if so, perform control to instruct the surgical tool to stop operating, for example by stopping its ultrasonic vibration.
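  • The graded feedback of Example 5 (bar indicator, frame colour, tone frequency, and a hard threshold that triggers a warning or an instruction to stop the ultrasonic vibration) can be driven by a simple mapping from the separation distance. The thresholds, colours, and frequencies below are illustrative assumptions only.

```python
def feedback_for_distance(distance_mm: float,
                          warn_mm: float = 1.0, stop_mm: float = 0.3) -> dict:
    """Map tool-to-capsule distance to the presentation described in Example 5."""
    # Bar indicator: full bar when far (>= 3 mm assumed), empty at contact.
    bar_fill = max(0.0, min(1.0, distance_mm / 3.0))

    if distance_mm > warn_mm:
        color, tone_hz, action = "green", 440, None
    elif distance_mm > stop_mm:
        color, tone_hz, action = "orange", 880, "show_warning"
    else:
        # Below the hard threshold: red, high tone, and a request to the control
        # unit to stop the ultrasonic vibration of the surgical tool.
        color, tone_hz, action = "red", 1760, "stop_ultrasonic_vibration"

    return {"bar_fill": bar_fill, "frame_color": color,
            "tone_hz": tone_hz, "action": action}

# Example: 0.8 mm away -> orange frame, higher-pitched tone, on-screen warning.
print(feedback_for_distance(0.8))
```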
  • Various display images as described above are used, and these display images may be selectable by the operator, assistant, or the like.
  • the selection of the display image is realized by an input operation on the operation unit by the operator or the assistant.
  • an operator, an assistant, or the like operates an operation unit to select a display mode for displaying a desired display image.
  • the display image generator 13d generates a display image based on the selected display mode.
  • the size, position, etc. of the images may be changed by the operator, the assistant, or the like.
  • the display image generation unit 13d generates a display image by changing the size, position, etc. of the image according to the input operation to the operation unit by the operator or the assistant.
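  • Selection among the layouts of Examples 1 to 5 can be modelled as a display-mode setting that the display image generation unit switches on. A small hypothetical sketch (the compositor functions are trivial stand-ins for the routines sketched in the earlier notes):

```python
from enum import Enum, auto

class DisplayMode(Enum):
    SIDE_BY_SIDE = auto()        # Examples 1/2: stereo view placed in a corner
    SUPERIMPOSED = auto()        # Example 3: stereo view registered onto the eye
    MAGNIFIED = auto()           # Example 4: enlarged partial view
    DISTANCE_INDICATOR = auto()  # Example 5: bar indicator and warnings

# Trivial stand-ins so the sketch runs; real compositors would do the blending.
def compose_corner(field, stereo):            return field
def compose_registered(field, stereo):        return field
def compose_magnified(field, stereo):         return field
def compose_with_indicator(field, stereo, d): return field

def generate_display(mode: DisplayMode, field, stereo, distance_mm=None):
    """Dispatch to the layout selected by the operator or assistant."""
    if mode is DisplayMode.SIDE_BY_SIDE:
        return compose_corner(field, stereo)
    if mode is DisplayMode.SUPERIMPOSED:
        return compose_registered(field, stereo)
    if mode is DisplayMode.MAGNIFIED:
        return compose_magnified(field, stereo)
    return compose_with_indicator(field, stereo, distance_mm)
```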
  • As described above, in the present embodiment, the image input unit 13c receives the surgical field image of the patient's eye, the 3D model reception unit 13a receives a three-dimensional model of part or all of the crystalline lens of the patient's eye, the stereo image generation unit 13b generates a stereo image of the crystalline lens from the three-dimensional model, and the display image generation unit 13d generates a display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
  • In this way, a display image including the surgical field image and the stereo image can be presented to the operator, which makes it easier for the operator to grasp the positional relationship between the surgical tool and the treatment target region of the eye. It is therefore possible to make this positional relationship easy for the operator to understand while simplifying the device configuration and suppressing the device size.
  • Further, the display image generation unit 13d generates the display image by aligning the size of the crystalline lens of the stereo image with the size of the crystalline lens of the surgical field image. This makes it possible to present a display image in which the size of the crystalline lens in the stereo image is the same as that in the surgical field image, so the positional relationship between the surgical tool and the treatment target region of the eye becomes easier to understand.
  • Further, the display image generation unit 13d generates the display image by aligning the orientation of the crystalline lens of the stereo image around the eye axis with the orientation of the crystalline lens of the surgical field image around the eye axis. This makes it possible to present a display image in which the orientation of the crystalline lens in the stereo image around the eye axis is the same as that in the surgical field image, so the positional relationship between the surgical tool and the treatment target region can be made easier to understand.
  • Further, the display image generation unit 13d generates the display image by aligning the orientation of the crystalline lens of the stereo image with the orientation of the crystalline lens of the surgical field image. This makes it possible to present a display image in which the orientation of the crystalline lens in the stereo image is the same as that in the surgical field image, so the positional relationship between the surgical tool and the treatment target region becomes easier to understand.
  • the display image generation unit 13d generates a display image by superimposing the stereo image on the operative field image.
  • Further, the display image generation unit 13d generates the display image by aligning the position of the crystalline lens of the stereo image with the position of the crystalline lens of the surgical field image. This makes it possible to present a display image in which the position of the crystalline lens in the stereo image is the same as that in the surgical field image, so the positional relationship between the surgical tool and the treatment target region becomes easier to understand.
  • the display image generation unit 13d expands one or both of the lens of the stereo image and the lens of the surgical field image to generate a display image.
  • Further, the display image generation unit 13d aligns the size, orientation, and position of the crystalline lens of the stereo image with the size, orientation, and position of the crystalline lens of the surgical field image, calculates the separation distance between the crystalline lens of the stereo image and the surgical tool, generates separation distance information about the calculated separation distance, and adds it to the display image.
  • Further, the display image generation unit 13d aligns the size, orientation, and position of the crystalline lens of the stereo image with the size, orientation, and position of the crystalline lens of the surgical field image, calculates the separation distance between the crystalline lens of the stereo image and the surgical tool, generates warning information according to the calculated separation distance, and adds it to the display image. As a result, warning information based on measurement results, rather than on the operator's subjective judgment, can be presented.
  • <2. Example of schematic configuration of a computer> The series of processes described above can be executed by hardware or by software.
  • a program that constitutes the software is installed in the computer.
  • the computer includes, for example, a computer built into dedicated hardware and a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 12 is a diagram showing an example of a schematic configuration of a computer 500 that executes the series of processes described above by a program.
  • the computer 500 has a CPU (Central Processing Unit) 510, a ROM (Read Only Memory) 520, and a RAM (Random Access Memory) 530.
  • the CPU 510 , ROM 520 and RAM 530 are interconnected by a bus 540 .
  • An input/output interface 550 is also connected to the bus 540 .
  • An input unit 560 , an output unit 570 , a recording unit 580 , a communication unit 590 and a drive 600 are connected to the input/output interface 550 .
  • the input unit 560 is composed of a keyboard, mouse, microphone, imaging device, and the like.
  • the output unit 570 is configured with a display, a speaker, and the like.
  • the recording unit 580 is composed of a hard disk, a nonvolatile memory, or the like.
  • the communication unit 590 is configured by a network interface or the like.
  • a drive 600 drives a removable recording medium 610 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory.
  • In the computer 500 configured as described above, the CPU 510 loads, for example, the program recorded in the recording unit 580 into the RAM 530 via the input/output interface 550 and the bus 540 and executes it, whereby the series of processes described above is performed.
  • a program executed by the computer 500 that is, the CPU 510 can be provided by being recorded on a removable recording medium 610 such as a package medium, for example. Also, the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the recording unit 580 via the input/output interface 550 by loading the removable recording medium 610 into the drive 600 . Also, the program can be received by the communication unit 590 and installed in the recording unit 580 via a wired or wireless transmission medium. In addition, the program can be installed in the ROM 520 or the recording unit 580 in advance.
  • The program executed by the computer 500 may be a program in which processing is performed in chronological order according to the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
  • In this specification, a system means a set of multiple components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • this technology can take the configuration of cloud computing in which one function is shared by multiple devices via a network and processed jointly.
  • each step described in the above process flow can be executed by a single device or shared by a plurality of devices.
  • Furthermore, when one step includes multiple processes, the multiple processes included in that one step can be executed by one device or shared among multiple devices.
  • the present technology can also take the following configuration.
  • (1) An image processing device comprising: an image input unit that receives a surgical field image of a patient's eye; a three-dimensional model reception unit that receives a three-dimensional model of part or all of the crystalline lens of the eye; a stereo image generation unit that generates a stereo image of the crystalline lens from the three-dimensional model; and a display image generation unit that generates a display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
  • (2) The image processing device according to (1) above, wherein the display image generation unit aligns the size of the crystalline lens of the stereo image with the size of the crystalline lens of the surgical field image to generate the display image.
  • (3) The image processing device according to (1) or (2) above, wherein the display image generation unit aligns the orientation of the crystalline lens of the stereo image around the eye axis with the orientation of the crystalline lens of the surgical field image around the eye axis to generate the display image.
  • (4) The image processing device according to any one of (1) to (3) above, wherein the display image generation unit aligns the orientation of the crystalline lens of the stereo image with the orientation of the crystalline lens of the surgical field image to generate the display image.
  • (5) The image processing device according to any one of (1) to (4) above, wherein the display image generation unit superimposes the stereo image on the surgical field image to generate the display image.
  • (6) The image processing device according to (5) above, wherein the display image generation unit aligns the position of the crystalline lens of the stereo image with the position of the crystalline lens of the surgical field image to generate the display image.
  • (7) The image processing device according to any one of (1) to (6) above, wherein the display image generation unit magnifies one or both of the crystalline lens of the stereo image and the crystalline lens of the surgical field image to generate the display image.
  • (8) The image processing device according to any one of (1) to (7) above, wherein the display image generation unit aligns the size, orientation, and position of the crystalline lens of the stereo image with the size, orientation, and position of the crystalline lens of the surgical field image, calculates the separation distance between the crystalline lens of the stereo image and a surgical tool, generates separation distance information about the calculated separation distance, and adds it to the display image.
  • (9) The image processing device according to any one of (1) to (8) above, wherein the display image generation unit aligns the size, orientation, and position of the crystalline lens of the stereo image with the size, orientation, and position of the crystalline lens of the surgical field image, calculates the separation distance between the crystalline lens of the stereo image and a surgical tool, generates warning information according to the calculated separation distance, and adds it to the display image.
  • (10) An image processing method comprising: an image processing device receiving a surgical field image of a patient's eye; receiving a three-dimensional model of part or all of the crystalline lens of the eye; generating a stereo image of the crystalline lens from the three-dimensional model; and generating a display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
  • (11) A surgical microscope system having: a surgical microscope for obtaining a surgical field image of a patient's eye; an image processing device that generates a display image; and a display device for displaying the display image, wherein the image processing device includes an image input unit that receives the surgical field image, a three-dimensional model reception unit that receives a three-dimensional model of part or all of the crystalline lens of the eye, a stereo image generation unit that generates a stereo image of the crystalline lens from the three-dimensional model, and a display image generation unit that generates the display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
  • (12) An image processing method using the image processing apparatus according

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Eye Examination Apparatus (AREA)
  • Microscopes, Condensers (AREA)

Abstract

An image processing device (13) according to an embodiment of the present disclosure comprises an image input unit (13c) that receives an operating field image of a patient eye, a three-dimensional model reception unit (13a) that receives a three-dimensional model of some or all of the lens of the eye, a stereo image generating unit (13b) that generates a stereo image of the lens from the three-dimensional model, and a display image generating unit (13d) that generates a display image including the operating field image and the stereo image on the basis of the operating field image and the stereo image.

Description

Image processing device, image processing method, and surgical microscope system
The present disclosure relates to an image processing device, an image processing method, and a surgical microscope system.
As a method of refractive correction in ophthalmology, inserting an artificial lens called an intraocular lens (IOL) into the eye is widely practiced to correct refractive errors such as those of the crystalline lens and to improve visual functions such as visual acuity. The most widely used intraocular lens is one that is inserted into the lens capsule as a replacement for the crystalline lens removed by cataract surgery. In addition to intraocular lenses placed in the lens capsule, there are various other types, such as those fixed (indwelled) in the ciliary sulcus (phakic IOLs).
When an operator performs ophthalmic surgery to place an intraocular lens in the eye, it is difficult for the operator to grasp the positional relationship between the surgical tool and the treatment target site of the eye while referring to the surgical field image, and the surgical tool may therefore cause unintended injury to the eye. For example, when the operator is grooving the lens nucleus in cataract surgery, the posterior capsule may be damaged by the surgical tool because the depth relationship between the surgical tool and the posterior capsule of the crystalline lens is not apparent. Therefore, in Patent Literature 1, a tomographic image of the eye is used to make the positional relationship between the surgical tool and the treatment target portion of the eye easier to grasp.
Patent Literature 1: WO 2017/065018
However, when tomographic images are used to grasp the positional relationship between the surgical tool and the treatment target site of the eye, the surgical microscope must be equipped with an optical coherence tomography (OCT) device or the like for capturing the tomographic images, which complicates the device and increases the size of the lens barrel.
Therefore, the present disclosure proposes an image processing device, an image processing method, and a surgical microscope system that make it easier for the operator to understand the positional relationship between the surgical tool and the treatment target region of the eye while simplifying the device configuration and suppressing the device size.
An image processing device according to an embodiment of the present disclosure includes an image input unit that receives a surgical field image of a patient's eye, a three-dimensional model reception unit that receives a three-dimensional model of part or all of the crystalline lens of the eye, a stereo image generation unit that generates a stereo image of the crystalline lens from the three-dimensional model, and a display image generation unit that generates a display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
In an image processing method according to an embodiment of the present disclosure, an image processing device receives a surgical field image of a patient's eye, receives a three-dimensional model of part or all of the crystalline lens of the eye, generates a stereo image of the crystalline lens from the three-dimensional model, and generates a display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
A surgical microscope system according to an embodiment of the present disclosure includes a surgical microscope that obtains a surgical field image of a patient's eye, an image processing device that generates a display image, and a display device that displays the display image. The image processing device includes an image input unit that receives the surgical field image, a three-dimensional model reception unit that receives a three-dimensional model of part or all of the crystalline lens of the eye, a stereo image generation unit that generates a stereo image of the crystalline lens from the three-dimensional model, and a display image generation unit that generates the display image including the surgical field image and the stereo image based on the surgical field image and the stereo image.
FIG. 1 is a diagram showing an example of the schematic configuration of a surgical microscope system according to an embodiment of the present disclosure. FIG. 2 is a diagram showing an example of the schematic configuration of a surgical microscope according to an embodiment of the present disclosure. FIG. 3 is a diagram showing an example of the schematic configuration of an image processing device according to an embodiment of the present disclosure. FIG. 4 is a diagram showing an example of a tomographic image for 3D model generation according to an embodiment of the present disclosure. FIG. 5 is a diagram showing an example of a stereoscopic image for 3D model generation according to an embodiment of the present disclosure. FIG. 6 is a diagram showing an example of a stereo image corresponding to a 3D model according to an embodiment of the present disclosure. FIG. 7 is a diagram showing Example 1 of a display image according to an embodiment of the present disclosure. FIG. 8 is a diagram showing Example 2 of a display image according to an embodiment of the present disclosure. FIG. 9 is a diagram showing Example 3 of a display image according to an embodiment of the present disclosure. FIG. 10 is a diagram showing Example 4 of a display image according to an embodiment of the present disclosure. FIG. 11 is a diagram showing Example 5 of a display image according to an embodiment of the present disclosure. FIG. 12 is a diagram showing an example of the schematic configuration of a computer according to an embodiment of the present disclosure.
Embodiments of the present disclosure will be described below in detail with reference to the drawings. Note that the devices, methods, systems, and the like according to the present disclosure are not limited by these embodiments. In each of the following embodiments, basically the same parts are denoted by the same reference numerals, and duplicate descriptions are omitted.
Each of the one or more embodiments (including examples and modifications) described below can be implemented independently. At the same time, at least some of the embodiments described below may be implemented in appropriate combination with at least some of the other embodiments. These embodiments may include novel features that differ from one another; accordingly, they may contribute to achieving different objects or solving different problems, and may produce different effects.
The present disclosure will be described in the following order of items.
1. Embodiment
1-1. Example of schematic configuration of the surgical microscope system
1-2. Example of schematic configuration of the surgical microscope
1-3. Schematic configuration of the image processing device and example of image processing
1-4. Actions and effects
2. Example of schematic configuration of a computer
3. Supplementary notes
<1. Embodiment>
<1-1. Example of schematic configuration of the surgical microscope system>
An example of the schematic configuration of the surgical microscope system 1 according to this embodiment will be described with reference to FIG. 1. FIG. 1 is a diagram showing an example of the schematic configuration of the surgical microscope system 1 according to this embodiment.
As shown in FIG. 1, the surgical microscope system 1 includes a surgical microscope 10 and a patient bed 20. The surgical microscope system 1 is a system used for eye surgery. The patient undergoes eye surgery while lying on the patient bed 20. The operator, who is a doctor, performs the surgery while observing the patient's eye through the surgical microscope 10.
The surgical microscope 10 includes an objective lens 11, an eyepiece lens 12, an image processing device 13, and a monitor 14.
The objective lens 11 and the eyepiece lens 12 are lenses for magnifying and observing the eye of the patient to be operated on.
The image processing device 13 performs predetermined image processing on the image captured through the objective lens 11 and outputs various images, various information, and the like.
The monitor 14 displays the image captured through the objective lens 11 as well as various images and various information generated by the image processing device 13. The monitor 14 may be provided separately from the surgical microscope 10.
In this surgical microscope system 1, for example, the operator looks into the eyepiece lens 12 and performs surgery while observing the patient's eye through the objective lens 11. The operator also performs surgery while checking various images (for example, an image before image processing, an image after image processing, etc.) and various information displayed on the monitor 14.
<1-2. Example of schematic configuration of the surgical microscope>
An example of the schematic configuration of the surgical microscope 10 according to this embodiment will be described with reference to FIG. 2. FIG. 2 is a diagram showing an example of the schematic configuration of the surgical microscope 10 according to this embodiment.
As shown in FIG. 2, the surgical microscope 10 includes, in addition to the objective lens 11, the eyepiece lens 12, the image processing device 13, and the monitor 14 described above, a light source 51, an observation optical system 52, a front image capturing unit 53, a presentation unit 54, an interface unit 55, and a speaker 56. Note that the monitor 14 and the presentation unit 54 correspond to a display device.
The light source 51 emits illumination light under the control of the control unit 13A included in the image processing device 13 and illuminates the patient's eye.
The observation optical system 52 is composed of optical elements such as the objective lens 11, a half mirror 52a, and lenses (not shown). The observation optical system 52 guides the light reflected from the patient's eye (observation light) to the eyepiece lens 12 and the front image capturing unit 53.
Specifically, the light reflected from the patient's eye enters the half mirror 52a as observation light via the objective lens 11, a lens (not shown), and the like. Approximately half of the observation light incident on the half mirror 52a passes straight through the half mirror 52a and enters the eyepiece lens 12 via the transmissive presentation unit 54. The remaining half of the observation light incident on the half mirror 52a is reflected by the half mirror 52a and enters the front image capturing unit 53.
The front image capturing unit 53 is composed of, for example, a video camera. The front image capturing unit 53 receives the observation light incident from the observation optical system 52 and photoelectrically converts it, thereby capturing an image of the patient's eye observed from the front, that is, a front image of the patient's eye captured substantially along the eye axis direction. The front image capturing unit 53 captures the front image under the control of the image processing device 13 and supplies the obtained front image to the image processing device 13.
The eyepiece lens 12 converges the observation light incident from the observation optical system 52 via the presentation unit 54 and forms an optical image of the patient's eye. The optical image of the patient's eye is thereby observed by the operator looking into the eyepiece lens 12.
The presentation unit 54 is composed of a transmissive display device or the like and is arranged between the eyepiece lens 12 and the observation optical system 52. The presentation unit 54 transmits the observation light incident from the observation optical system 52 so that it enters the eyepiece lens 12, and also presents (displays) various images (for example, a front image, a stereo image, etc.) and various information supplied from the image processing device 13 as necessary. The various images, information, and the like may be presented, for example, superimposed on the optical image of the patient's eye, or may be presented in the periphery of the optical image so as not to interfere with it.
The image processing device 13 has a control unit 13A that controls the operation of the surgical microscope 10 as a whole. For example, the control unit 13A changes the illumination conditions of the light source 51 and the zoom magnification of the observation optical system 52. The control unit 13A also controls image acquisition by the front image capturing unit 53 based on operation information of the operator or the like supplied from the interface unit 55.
The interface unit 55 is composed of, for example, a communication unit. The communication unit receives commands from an operation unit such as a touch panel superimposed on the monitor 14, a controller, or a remote controller (not shown), and communicates with external devices. The interface unit 55 supplies the image processing device 13 with information corresponding to operations by the operator or the like. The interface unit 55 also outputs, to external devices, device control information for controlling those external devices, supplied from the image processing device 13.
The monitor 14 displays various images, such as front images and stereo images, and various information under the control of the control unit 13A of the image processing device 13.
Under the control of the control unit 13A of the image processing device 13, the speaker 56 outputs, for example, a buzzer sound, a melody, or a message (voice) to notify the operator or the like of a dangerous situation when such a situation is detected during surgery. Note that the surgical microscope 10 may also be provided with a rotating light or an indicator lamp for notifying the operator or the like of a dangerous situation.
In the surgical microscope system 1 configured as described above, the image processing device 13 generates a display image based on various images such as a front image and a stereo image, and the generated display image is displayed on a display device such as the monitor 14 or the presentation unit 54, thereby allowing the operator to grasp the positional relationship between the surgical tool and the treatment target site of the eye.
<1-3. Schematic configuration of image processing device and example of image processing>
A schematic configuration of the image processing device 13 according to this embodiment and an example of image processing will be described with reference to FIG. 3. FIG. 3 is a diagram showing an example of the schematic configuration (configuration and processing flow) of the image processing device 13 according to this embodiment.
In the following, an example in which the treatment target site is the lens capsule will be described. Inexperienced operators in particular may not be able to tell the depth relationship between the surgical tool and the posterior capsule, and may therefore damage the posterior capsule with the tool. To resolve this difficulty in grasping the depth relationship, stereo image presentation processing based on a 3D model of the crystalline lens (which may include the lens capsule) measured preoperatively will be described.
As shown in FIG. 3, the image processing device 13 has a 3D model receiving unit (three-dimensional model receiving unit) 13a, a stereo image generation unit 13b, an image input unit 13c, and a display image generation unit 13d.
The 3D model receiving unit 13a receives, from an external device, a 3D model of the treatment target site measured preoperatively during preoperative planning. The 3D model may include only the treatment target site, or may include other sites for the size matching and posture matching described later. The 3D model may also include the corneal limbus for size matching based on the cornea in the operative field image, or may include the iris for posture matching using the iris pattern. The 3D model may further include a preoperative front image of the target eye whose positional relationship with the 3D model is known, for use in posture matching. For posture matching, for example, the blood vessels around the cornea in the preoperative front image can be used.
FIG. 4 is a diagram showing an example of a tomographic image for 3D model generation according to this embodiment. FIG. 5 is a diagram showing an example of a stereoscopic image for 3D model generation (a stereoscopic image viewed from a specific direction) according to this embodiment. One or both of tomographic images (a plurality of tomographic images) and a stereoscopic image are used to construct the 3D model. This 3D model is supplied to the stereo image generation unit 13b.
Note that in the examples of FIGS. 4 and 5, the 3D model is acquired by OCT (Optical Coherence Tomography), so the part of the lens 3D model behind the iris is missing. If, for example, an ultrasonic diagnostic apparatus is used, a 3D model of the crystalline lens without missing parts can be acquired. It is also possible to prepare a 3D reference model and fill in the missing portions from that reference model to obtain a 3D model of the crystalline lens without missing parts. For simplicity, the following description assumes that a 3D model of the crystalline lens without missing parts has been obtained.
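As an illustration of the completion step mentioned above, the following Python sketch merges an OCT-derived lens point cloud, which is shadowed behind the iris, with points taken from a reference model that is assumed to be already scaled and registered to the same coordinate frame. The function name, the use of a k-d tree, and the fill threshold are illustrative assumptions, not a method prescribed by the present disclosure.

```python
import numpy as np
from scipy.spatial import cKDTree

def complete_lens_model(oct_points: np.ndarray,
                        reference_points: np.ndarray,
                        fill_threshold_mm: float = 0.3) -> np.ndarray:
    """Fill the unmeasured (iris-shadowed) part of an OCT lens point cloud.

    oct_points:       (N, 3) lens surface points actually measured by OCT.
    reference_points: (M, 3) points of a reference lens model, assumed to be
                      already scaled and registered to the same eye frame.
    A reference point is kept only if no measured point lies within
    fill_threshold_mm of it, so the reference fills gaps without
    overriding the measurement.
    """
    tree = cKDTree(oct_points)
    dist, _ = tree.query(reference_points)
    return np.vstack([oct_points, reference_points[dist > fill_threshold_mm]])
```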
Here, OCT is a technique that irradiates the eye to be treated with light such as near-infrared light and reconstructs the waves reflected by each tissue of the eye to generate an image (a tomographic image, that is, an image of a cross section of the eye in the depth direction). Besides images captured by OCT, images (tomographic images) captured by a Scheimpflug camera may also be used for the 3D model.
Returning to FIG. 3, the stereo image generation unit 13b generates, based on the 3D model received by the 3D model receiving unit 13a, two images of the crystalline lens viewed from two specific viewpoints (corresponding to the left eye and the right eye), that is, a stereo image (stereogram) corresponding to the 3D model. The stereo image generation unit 13b supplies the stereo image of the treatment target site to the display image generation unit 13d.
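As a minimal sketch of how such a two-viewpoint rendering could be produced, the Python code below projects the contour points of the lens model with a simple pinhole camera after rotating them by plus and minus half of a stereo inward angle. The image size, focal length, viewing distance, and inward angle are illustrative assumptions.

```python
import numpy as np

def project_points(points: np.ndarray, yaw_deg: float,
                   focal: float = 1200.0, viewer_dist: float = 300.0,
                   image_size: tuple = (400, 400)) -> np.ndarray:
    """Rotate the lens contour points about the vertical (y) axis by yaw_deg
    and project them with a pinhole camera; returns integer pixel coords."""
    t = np.deg2rad(yaw_deg)
    rot = np.array([[np.cos(t), 0.0, np.sin(t)],
                    [0.0,       1.0, 0.0      ],
                    [-np.sin(t), 0.0, np.cos(t)]])
    p = points @ rot.T
    z = p[:, 2] + viewer_dist                     # place the model in front of the camera
    u = focal * p[:, 0] / z + image_size[1] / 2   # column
    v = focal * p[:, 1] / z + image_size[0] / 2   # row
    return np.stack([u, v], axis=1).astype(int)

def render_stereo_pair(contour_points: np.ndarray,
                       inward_angle_deg: float = 4.0) -> tuple:
    """Return (left, right) binary contour images of the lens as seen from
    two viewpoints separated by the stereo inward angle."""
    images = []
    for sign in (+1, -1):
        img = np.zeros((400, 400), dtype=np.uint8)
        uv = project_points(contour_points, sign * inward_angle_deg / 2)
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < 400) & (uv[:, 1] >= 0) & (uv[:, 1] < 400)
        img[uv[ok, 1], uv[ok, 0]] = 255           # plot each visible contour point
        images.append(img)
    return images[0], images[1]
```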
FIG. 6 is a diagram showing an example of a stereo image corresponding to the 3D model according to this embodiment. As shown in FIG. 6, the stereo image shows the crystalline lens (which may include the lens capsule), for example, by contour lines. Here, the crystalline lens corresponds to the crystalline lens alone or to the crystalline lens together with its lens capsule.
In the example of FIG. 6, the stereo image is a contour-line image in order to improve visibility, but it is not limited to this and may be any image that gives the operator a three-dimensional impression. In the example of FIG. 6 and in the examples of FIGS. 7 to 11 below, only one of the two images forming the stereo image is shown for simplicity of explanation, but in practice the stereo image is presented.
Returning to FIG. 3, the image input unit 13c receives an operative field image (front image) from the front image capturing unit 53 (see FIG. 2) and supplies the received operative field image (for example, the operative field image at the start of surgery or a real-time operative field image during surgery) to the display image generation unit 13d. The image input unit 13c also supplies the operative field image to the stereo image generation unit 13b as necessary.
The display image generation unit 13d generates a display image by adding the stereo image of the treatment target site supplied from the stereo image generation unit 13b to the operative field image supplied from the image input unit 13c. For example, the display image generation unit 13d superimposes the stereo image on the operative field image and composites them to generate a display image including the operative field image and the stereo image.
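A minimal Python sketch of this composition step is given below, assuming both images are 8-bit BGR arrays and the stereo rendering is smaller than the operative field image; placing the inset in a corner corresponds to display image example 1 below, and the alpha value is an illustrative assumption.

```python
import numpy as np

def compose_display_image(field_img: np.ndarray, stereo_img: np.ndarray,
                          corner: str = "top_right", alpha: float = 0.8) -> np.ndarray:
    """Blend a stereo rendering into one corner of the operative field image.
    Both images are HxWx3 uint8, and the inset is assumed to be smaller
    than the field image; alpha is the opacity of the inset."""
    out = field_img.copy()
    h, w = stereo_img.shape[:2]
    H, W = out.shape[:2]
    if corner == "top_right":
        ys, xs = slice(0, h), slice(W - w, W)
    else:  # "top_left"
        ys, xs = slice(0, h), slice(0, w)
    roi = out[ys, xs].astype(np.float32)
    out[ys, xs] = (alpha * stereo_img + (1.0 - alpha) * roi).astype(np.uint8)
    return out
```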
(Display image example 1)
FIG. 7 is a diagram showing example 1 of a display image according to this embodiment. In the example of FIG. 7, the stereo image is positioned at the upper right of the operative field image (in FIG. 7) and superimposed on the operative field image to generate the display image. This display image is displayed by the monitor 14, the presentation unit 54, or both.
Here, although the shape of the crystalline lens differs from person to person, the operator cannot grasp the depth of the transparent posterior lens capsule from the front operative field image received by the image input unit 13c alone. By viewing a stereo image showing the contour lines of the crystalline lens as described above, the operator can grasp its three-dimensional shape. This makes it easier for the operator to grasp the positional relationship between the surgical tool and the lens capsule.
In display image example 1 above, the size, three-dimensional posture, and position of the stereo image of the treatment target site presented in the display image are not matched to the operative field image, but all of them may be matched. In that case, the contour lines of the stereo image of the crystalline lens can be superimposed on the operative field image, so the positional relationship between the surgical tool and the posterior capsule becomes very easy to understand. Matching the size, posture, or position individually also improves the ease of grasping the positional relationship, although less than matching all of them. Among these, matching the three-dimensional posture makes the positional relationship particularly easy to grasp, but even merely matching which eye region (portion) comes to the upper side with respect to the eye axis helps the operator accurately grasp the shape of the crystalline lens, whose shape is anisotropic around the eye axis, and as a result improves the grasp of the positional relationship.
(Display image example 2)
FIG. 8 is a diagram showing example 2 of a display image according to this embodiment. In the example of FIG. 8, the stereo image is positioned at the upper right of the operative field image (in FIG. 8) and superimposed on the operative field image to generate the display image. The stereo image is generated so that its size and posture are the same as those of the treatment target site (crystalline lens) in the operative field image.
Regarding size, for example, the size of the stereo image of the treatment target site is matched to the size of the treatment target site in the operative field image. To match these sizes, for example, the display image generation unit 13d may adjust them so that the size of the cornea in the operative field image and the size of the cornea in the 3D model become equal.
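One way to picture this size matching is sketched below in Python (OpenCV and NumPy): the corneal limbus diameter is estimated in the operative field image with a rough Hough-circle search, and the stereo rendering is resized so its limbus has the same pixel diameter. The Hough parameters and the helper names are illustrative assumptions, not values given in the present disclosure.

```python
import cv2
import numpy as np

def limbus_diameter_px(gray_field: np.ndarray) -> float:
    """Estimate the corneal limbus diameter (in pixels) in a grayscale
    operative field image with a rough Hough-circle search."""
    blur = cv2.medianBlur(gray_field, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=gray_field.shape[0],
                               param1=120, param2=40,
                               minRadius=gray_field.shape[0] // 8,
                               maxRadius=gray_field.shape[0] // 2)
    if circles is None:
        raise RuntimeError("limbus not found")
    return 2.0 * float(circles[0, 0, 2])   # strongest detection: (x, y, r)

def scale_stereo_to_field(stereo_img: np.ndarray,
                          limbus_px_field: float,
                          limbus_px_stereo: float) -> np.ndarray:
    """Resize the stereo rendering so its limbus has the same pixel
    diameter as the limbus in the operative field image."""
    s = limbus_px_field / limbus_px_stereo
    h, w = stereo_img.shape[:2]
    return cv2.resize(stereo_img, (int(round(w * s)), int(round(h * s))),
                      interpolation=cv2.INTER_LINEAR)
```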
In this way, the size of the treatment target site matches that in the operative field image, so the operator can more easily estimate the positional relationship between the treatment target site and the surgical tool. For example, when grooving the lens nucleus, the depth of the crystalline lens is known, so the operator can more easily predict the depth relationship between the surgical tool and the posterior capsule.
Regarding posture, for example, the posture of the stereo image of the treatment target site is adjusted so that it is three-dimensionally aligned with the posture of the eye in the operative field image. As a reference for this alignment, the iris pattern or the blood vessel pattern in the corneal periphery in both the operative field image and the 3D model may be used. For example, the display image generation unit 13d may match the iris or blood vessel patterns in the operative field image and the 3D model to three-dimensionally align the posture of the stereo image of the treatment target site with the posture of the eye in the operative field image.
In this way, the posture of the treatment target site in the stereo image matches that in the operative field image, so the operator can more easily estimate the positional relationship between the treatment target site and the surgical tool.
The orientation around the eye axis (the rotational position about the eye axis) may also be adjusted, based on the relative relationship around the eye axis between the stereo image of the treatment target site and the operative field image, so that the stereo image of the treatment target site matches the operative field image. For example, the orientation around the eye axis may be adjusted so that the eye region (portion) appearing on the upper side of the display image in the stereo image of the treatment target site is the same as in the operative field image. As a reference for this alignment, the iris pattern or the blood vessel pattern in the corneal periphery in both the operative field image and the 3D model may be used. For example, the display image generation unit 13d may match the iris or blood vessel patterns in the operative field image and the 3D model to align the orientation of the stereo image of the treatment target site around the eye axis with the orientation of the eye in the operative field image around the eye axis. The orientation around the eye axis is defined, for example, as the angle of rotation around the eye axis with respect to a reference line orthogonal to the eye axis.
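As one concrete way such an axial rotation could be estimated from the iris pattern, the Python sketch below unwraps the iris annulus of each image into a polar strip and searches for the cyclic shift that maximizes their correlation. The annulus radii, the sampling resolution, and the correlation criterion are illustrative assumptions.

```python
import numpy as np

def unwrap_iris(gray: np.ndarray, center: tuple, r_in: int, r_out: int,
                n_theta: int = 360, n_r: int = 32) -> np.ndarray:
    """Sample the annular iris region into an (n_r, n_theta) polar strip."""
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_in, r_out, n_r)
    ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).astype(int)
    ys = np.clip(ys, 0, gray.shape[0] - 1)
    xs = np.clip(xs, 0, gray.shape[1] - 1)
    return gray[ys, xs].astype(np.float32)

def estimate_axial_rotation_deg(strip_field: np.ndarray,
                                strip_model: np.ndarray) -> float:
    """Return the cyclic shift (in degrees) that best aligns the two iris
    strips, found by maximizing their correlation over all rotations."""
    a = strip_field - strip_field.mean()
    b = strip_model - strip_model.mean()
    scores = [float((a * np.roll(b, k, axis=1)).sum()) for k in range(b.shape[1])]
    return 360.0 * int(np.argmax(scores)) / b.shape[1]
```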
In this way, matching the orientation around the eye axis between the stereo image of the treatment target site and the treatment target site in the operative field image makes it easier to infer the shape in the operative field, so the operator can more easily estimate the positional relationship between the surgical tool and the treatment target site. For example, the treatment target site does not necessarily have the same shape all around the eye axis, so matching the orientation around the eye axis is effective.
(Display image example 3)
FIG. 9 is a diagram showing example 3 of a display image according to this embodiment. In the example of FIG. 9, the stereo image is positioned on the treatment target site in the operative field image and superimposed on the operative field image to generate the display image. The stereo image is generated so that its size, posture, and position are the same as those of the treatment target site (crystalline lens) in the operative field image.
Regarding position, alignment is performed so that the position of the stereo image of the treatment target site matches the position of the treatment target site in the operative field image. As a reference for this alignment, the iris pattern or the blood vessel pattern in the corneal periphery in both the operative field image and the 3D model may be used. For example, the display image generation unit 13d may match the iris or blood vessel patterns in the operative field image and the 3D model to align the position of the stereo image of the treatment target site with the position of the eye in the operative field image.
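A minimal Python sketch of this positional alignment is given below, assuming the stereo rendering has already been matched in size and posture and that the limbus (or pupil) centers of both images have been detected beforehand; the overlay is translated so the two centers coincide and then alpha-blended. The center detection step itself is outside this sketch, and the alpha value is an illustrative assumption.

```python
import numpy as np

def paste_aligned(field_img: np.ndarray, stereo_img: np.ndarray,
                  center_field: tuple, center_stereo: tuple,
                  alpha: float = 0.6) -> np.ndarray:
    """Overlay a size- and posture-matched stereo rendering on the operative
    field image so that the two detected limbus centers coincide.
    Centers are (row, col) pixel coordinates in their respective images."""
    out = field_img.astype(np.float32)
    h, w = stereo_img.shape[:2]
    top = int(round(center_field[0] - center_stereo[0]))
    left = int(round(center_field[1] - center_stereo[1]))
    # Clip the paste region to the bounds of the field image.
    y0, x0 = max(top, 0), max(left, 0)
    y1, x1 = min(top + h, out.shape[0]), min(left + w, out.shape[1])
    patch = stereo_img[y0 - top:y1 - top, x0 - left:x1 - left].astype(np.float32)
    out[y0:y1, x0:x1] = (1.0 - alpha) * out[y0:y1, x0:x1] + alpha * patch
    return out.astype(np.uint8)
```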
In this way, the stereo image is superimposed on the treatment target site in the operative field, so the operator can more easily estimate the positional relationship between the surgical tool and the treatment target site. In particular, when the size and the three-dimensional posture are also matched between the stereo image and the treatment target site in the operative field, the operator can observe the actual positional relationship rather than having to estimate it.
(Display image example 4)
FIG. 10 is a diagram showing example 4 of a display image according to this embodiment. In the example of FIG. 10, part of the corresponding stereo image and part of the treatment target site in the operative field image are enlarged, positioned at the upper left of the operative field image (in FIG. 10), and superimposed on the operative field image of display image example 1 above to generate the display image.
Note that FIG. 10 shows, as an example, an enlarged presentation of the operative field image and the stereo image of the treatment target site in which only their sizes are matched. In the example of FIG. 10, two stereo images of the treatment target site are presented, but only the enlarged stereo image may be presented.
By presenting the treatment target site to the operator in an enlarged manner in this way, the operator can observe the treatment target site in detail, so the positional relationship between the surgical tool and the treatment target site becomes even easier to understand. If the size, three-dimensional posture, and position of the operative field image and the stereo image of the treatment target site are matched in the enlarged image, the positional relationship between the surgical tool and the treatment target site is particularly easy to understand, but clarity improves even when they are not matched. Furthermore, even if only one of the operative field image and the stereo image of the treatment target site is presented enlarged, clarity improves, although less than when both are presented, so the treatment target site may be enlarged by any of these approaches.
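The enlarged inset itself can be produced by a simple crop-and-zoom, sketched below in Python with OpenCV; the region size and zoom factor are illustrative assumptions.

```python
import cv2
import numpy as np

def enlarge_roi(img: np.ndarray, center: tuple, roi_size: int,
                zoom: float = 2.0) -> np.ndarray:
    """Crop a square region of side roi_size around center (row, col) and
    enlarge it by the given zoom factor for use as an inset."""
    cy, cx = center
    half = roi_size // 2
    y0, y1 = max(cy - half, 0), min(cy + half, img.shape[0])
    x0, x1 = max(cx - half, 0), min(cx + half, img.shape[1])
    roi = img[y0:y1, x0:x1]
    return cv2.resize(roi, None, fx=zoom, fy=zoom, interpolation=cv2.INTER_LINEAR)
```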
Note that when generating the stereo image of the treatment target site described in the above examples, an adjustment may be made so that the result is equivalent to imaging with an observation optical system similar to the one used to acquire the operative field image and with a similar stereo inward angle.
Furthermore, the separation distance between the surgical tool and the treatment target site may be calculated from the three-dimensional position of the surgical tool estimated from parallax information in the operative field image and from the shape information of the 3D model of the treatment target site whose size, three-dimensional posture, and position have been matched to the operative field image, and information on the calculated separation distance, or warning information for when the distance becomes short, may be presented. In this case, the information is presented based on measurement results rather than the operator's subjective impression, so the separation distance between the surgical tool and the treatment target site becomes easier to understand. Note that only the separation distance information may be presented, without presenting the stereo image of the treatment target site. There are various ways to present information on this separation distance.
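The following Python sketch illustrates one possible form of this computation under common simplifying assumptions: the tool tip has been detected in a rectified stereo pair of the operative field, its depth is recovered from the disparity with the standard pinhole relations, and the separation distance is taken as the minimum Euclidean distance to the aligned capsule points of the 3D model. The calibration parameters and function names are assumptions for illustration.

```python
import numpy as np

def triangulate_tool_tip(u_left: float, u_right: float, v: float,
                         focal_px: float, baseline_mm: float,
                         cx: float, cy: float) -> np.ndarray:
    """Recover the 3D tool-tip position (mm, camera frame) from its pixel
    coordinates in a rectified stereo pair of the operative field, using the
    standard relations Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f.
    The disparity d = u_left - u_right is assumed to be positive."""
    disparity = u_left - u_right
    z = focal_px * baseline_mm / disparity
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.array([x, y, z])

def distance_to_capsule(tool_tip_mm: np.ndarray,
                        capsule_points_mm: np.ndarray) -> float:
    """Minimum Euclidean distance from the tool tip to the posterior-capsule
    points of the 3D lens model aligned to the operative field."""
    return float(np.min(np.linalg.norm(capsule_points_mm - tool_tip_mm, axis=1)))
```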
(Display image example 5)
FIG. 11 is a diagram showing example 5 of a display image according to this embodiment. In the example of FIG. 11, the separation distance is presented by a bar-shaped indicator. This enables the operator to grasp the separation distance between the surgical tool and the treatment target site.
For example, the display image generation unit 13d matches the size, posture, and position of the crystalline lens in the stereo image to the size, posture, and position of the crystalline lens in the operative field image and calculates the separation distance between the crystalline lens in the stereo image and the surgical tool. The display image generation unit 13d then generates a bar-shaped indicator as separation distance information on the calculated separation distance and adds it to the display image.
Note that the display image generation unit 13d may change the display mode, such as the color near the tip of the surgical tool or the color of an image frame around the periphery of the image, according to the separation distance. For example, the color may be green while the surgical tool is far from the treatment target site and may change to orange or red as the surgical tool approaches the treatment target site. The speaker 56 may also present a sound whose frequency becomes higher as the surgical tool approaches the treatment target site.
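A minimal Python sketch of such distance-dependent presentation is given below, mapping the measured distance to a display color and to the fill level of a bar-shaped indicator drawn along the image edge; the thresholds, colors, and bar geometry are illustrative assumptions.

```python
import numpy as np

def distance_to_color(distance_mm: float,
                      warn_mm: float = 2.0, danger_mm: float = 0.5) -> tuple:
    """Map the tool-to-capsule distance to a BGR color: green when far,
    orange inside warn_mm, red inside danger_mm."""
    if distance_mm <= danger_mm:
        return (0, 0, 255)      # red
    if distance_mm <= warn_mm:
        return (0, 165, 255)    # orange
    return (0, 255, 0)          # green

def draw_distance_bar(img: np.ndarray, distance_mm: float,
                      full_scale_mm: float = 5.0) -> np.ndarray:
    """Draw a vertical bar indicator along the right edge of a BGR image,
    whose filled height and color track the measured separation distance."""
    out = img.copy()
    h, w = out.shape[:2]
    frac = float(np.clip(distance_mm / full_scale_mm, 0.0, 1.0))
    bar_h = int(frac * (h - 20))
    out[h - 10 - bar_h:h - 10, w - 30:w - 10] = distance_to_color(distance_mm)
    return out
```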
Alternatively, instead of presenting separation distance information, a warning image or a warning sound may be presented when the separation distance falls to or below a certain value. For example, the control unit 13A of the image processing device 13 determines whether the separation distance is equal to or less than a predetermined value and, when it determines that it is, performs control to instruct the display device to present a warning image and to instruct the speaker 56 to present a sound.
For example, under the control of the control unit 13A, the display image generation unit 13d matches the size, posture, and position of the crystalline lens in the stereo image to the size, posture, and position of the crystalline lens in the operative field image and calculates the separation distance between the crystalline lens in the stereo image and the surgical tool. Then, when the calculated separation distance becomes equal to or less than the predetermined value, the display image generation unit 13d generates warning information (for example, text, a symbol, or a figure) indicating that the separation distance is small and adds it to the display image.
Furthermore, when the separation distance falls to or below a certain value, a linked ultrasonic phacoemulsification and aspiration device may be controlled, for example to stop the ultrasonic vibration of the tip of the surgical tool, thereby preventing complications such as posterior capsule rupture before they occur.
For example, the control unit 13A of the image processing device 13 determines whether the separation distance is equal to or less than a predetermined value and, when it determines that it is, performs control to instruct the ultrasonic phacoemulsification and aspiration device to stop its operation, for example by stopping the ultrasonic vibration of the tip of the surgical tool.
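A minimal Python sketch of such a threshold check follows, combining the warning presentation and the request to stop the phaco handpiece in one decision step; the threshold value and the flag names are illustrative assumptions, and the actual device interface is outside the scope of this sketch.

```python
def safety_actions(distance_mm: float, threshold_mm: float = 0.5) -> dict:
    """Decide, from the measured tool-to-capsule distance, whether to show a
    warning, sound an alert, and request that the phaco handpiece stop its
    ultrasonic vibration. The flags would be consumed by the display path,
    the speaker path, and the device-control path, respectively."""
    too_close = distance_mm <= threshold_mm
    return {
        "show_warning": too_close,
        "play_alert_sound": too_close,
        "stop_ultrasound": too_close,
    }

# Example: with a 0.5 mm threshold, a measured distance of 0.4 mm triggers all three.
assert safety_actions(0.4) == {"show_warning": True,
                               "play_alert_sound": True,
                               "stop_ultrasound": True}
```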
The various display images described above are used, and it may be made possible for the operator, an assistant, or the like to select among them. Selection of the display image is realized by an input operation on the operation unit by the operator, the assistant, or the like. For example, the operator or assistant operates the operation unit to select a display mode in which the desired display image is displayed, and the display image generation unit 13d generates the display image based on the selected display mode. Similarly, for the various images, the size, position, and the like of each image may be made changeable by the operator, the assistant, or the like. The display image generation unit 13d changes the size, position, and the like of the images and generates the display image in accordance with input operations on the operation unit by the operator, the assistant, or the like.
<1-4. Action and effect>
As described above, according to this embodiment, the image input unit 13c receives the operative field image, the 3D model receiving unit 13a receives a three-dimensional model of part or all of the crystalline lens of the patient's eye, the stereo image generation unit 13b generates a stereo image of the crystalline lens from the three-dimensional model, and the display image generation unit 13d generates a display image including the operative field image and the stereo image based on the operative field image and the stereo image. This makes it possible to present a display image including the operative field image and the stereo image to the operator, so the positional relationship between the surgical tool and the treatment target site of the eye can be made easy for the operator to understand without providing the surgical microscope 10 with a device such as an optical coherence tomograph for tomographic imaging. Therefore, the positional relationship between the surgical tool and the treatment target site of the eye can be made easy for the operator to understand while simplifying the device configuration and keeping down the device size.
The display image generation unit 13d also matches the size of the crystalline lens in the stereo image to the size of the crystalline lens in the operative field image to generate the display image. This makes it possible to present a display image in which the size of the crystalline lens in the stereo image is the same as the size of the crystalline lens in the operative field image, so the positional relationship between the surgical tool and the treatment target site of the eye can be made even easier for the operator to understand.
The display image generation unit 13d also matches the orientation of the crystalline lens in the stereo image around the eye axis to the orientation of the crystalline lens in the operative field image around the eye axis to generate the display image. This makes it possible to present a display image in which the orientation of the crystalline lens in the stereo image around the eye axis is the same as that in the operative field image, so the positional relationship between the surgical tool and the treatment target site of the eye can be made even easier for the operator to understand.
The display image generation unit 13d also matches the posture of the crystalline lens in the stereo image to the posture of the crystalline lens in the operative field image to generate the display image. This makes it possible to present a display image in which the posture of the crystalline lens in the stereo image is the same as that in the operative field image, so the positional relationship between the surgical tool and the treatment target site of the eye can be made even easier for the operator to understand.
The display image generation unit 13d also superimposes the stereo image on the operative field image to generate the display image. This makes it possible to present a display image in which the crystalline lens in the stereo image and the crystalline lens in the operative field image are brought close to each other, so the positional relationship between the surgical tool and the treatment target site of the eye can be made even easier for the operator to understand.
The display image generation unit 13d also matches the position of the crystalline lens in the stereo image to the position of the crystalline lens in the operative field image to generate the display image. This makes it possible to present a display image in which the position of the crystalline lens in the stereo image is the same as that in the operative field image, so the positional relationship between the surgical tool and the treatment target site of the eye can be made even easier for the operator to understand.
The display image generation unit 13d also enlarges one or both of the crystalline lens in the stereo image and the crystalline lens in the operative field image to generate the display image. This makes it possible to present a display image in which one or both of the crystalline lens in the stereo image and the crystalline lens in the operative field image are enlarged, so the positional relationship between the surgical tool and the treatment target site of the eye can be made even easier for the operator to understand.
The display image generation unit 13d also matches the size, posture, and position of the crystalline lens in the stereo image to the size, posture, and position of the crystalline lens in the operative field image, calculates the separation distance between the crystalline lens in the stereo image and the surgical tool, and generates separation distance information on the calculated separation distance and adds it to the display image. This makes it possible to present a display image in which the size, posture, and position of the crystalline lens in the stereo image are the same as those in the operative field image and which includes the separation distance information, so the positional relationship between the surgical tool and the treatment target site of the eye can be made even easier for the operator to understand.
The display image generation unit 13d also matches the size, posture, and position of the crystalline lens in the stereo image to the size, posture, and position of the crystalline lens in the operative field image, calculates the separation distance between the crystalline lens in the stereo image and the surgical tool, and generates warning information according to the calculated separation distance and adds it to the display image. This makes it possible to present warning information based on measurement results rather than the operator's subjective impression.
<2. Example of schematic configuration of computer>
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
FIG. 12 is a diagram showing an example of a schematic configuration of a computer 500 that executes the series of processes described above according to a program.
As shown in FIG. 12, the computer 500 has a CPU (Central Processing Unit) 510, a ROM (Read Only Memory) 520, and a RAM (Random Access Memory) 530.
The CPU 510, the ROM 520, and the RAM 530 are interconnected by a bus 540. An input/output interface 550 is further connected to the bus 540. An input unit 560, an output unit 570, a recording unit 580, a communication unit 590, and a drive 600 are connected to the input/output interface 550.
The input unit 560 is composed of a keyboard, a mouse, a microphone, an imaging element, and the like. The output unit 570 is composed of a display, a speaker, and the like. The recording unit 580 is composed of a hard disk, a nonvolatile memory, and the like. The communication unit 590 is composed of a network interface and the like. The drive 600 drives a removable recording medium 610 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer 500 configured as described above, the CPU 510 loads, for example, a program recorded in the recording unit 580 into the RAM 530 via the input/output interface 550 and the bus 540 and executes it, whereby the series of processes described above is performed.
The program executed by the computer 500, that is, by the CPU 510, can be provided by being recorded on the removable recording medium 610 as a package medium or the like, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
In the computer 500, the program can be installed in the recording unit 580 via the input/output interface 550 by loading the removable recording medium 610 into the drive 600. The program can also be received by the communication unit 590 via a wired or wireless transmission medium and installed in the recording unit 580. In addition, the program can be installed in advance in the ROM 520 or the recording unit 580.
Note that the program executed by the computer 500 may be a program in which the processes are performed in chronological order according to the order described in this specification, or a program in which the processes are performed in parallel or at necessary timing, such as when a call is made.
In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
The embodiments of the present technology are not limited to the embodiment described above, and various modifications are possible without departing from the gist of the present technology.
For example, the present technology can take a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
Each step described in the above processing flow can be executed by one device or shared and executed by a plurality of devices.
Furthermore, when one step includes a plurality of processes, the plurality of processes included in that one step can be executed by one device or shared and executed by a plurality of devices.
The effects described in this specification are merely examples and are not limited, and there may be effects other than those described in this specification.
<3. Supplementary note>
Note that the present technology can also take the following configurations.
(1)
An image processing device comprising:
an image input unit that receives an operative field image of a patient's eye;
a three-dimensional model receiving unit that receives a three-dimensional model of part or all of the crystalline lens of the eye;
a stereo image generation unit that generates a stereo image of the crystalline lens from the three-dimensional model; and
a display image generation unit that generates a display image including the operative field image and the stereo image based on the operative field image and the stereo image.
(2)
The image processing device according to (1) above, wherein the display image generation unit aligns the size of the crystalline lens in the stereo image with the size of the crystalline lens in the operative field image to generate the display image.
(3)
The image processing device according to (1) or (2) above, wherein the display image generation unit aligns the orientation of the crystalline lens in the stereo image around the eye axis with the orientation of the crystalline lens in the operative field image around the eye axis to generate the display image.
(4)
The image processing device according to any one of (1) to (3) above, wherein the display image generation unit aligns the posture of the crystalline lens in the stereo image with the posture of the crystalline lens in the operative field image to generate the display image.
(5)
The image processing device according to any one of (1) to (4) above, wherein the display image generation unit superimposes the stereo image on the operative field image to generate the display image.
(6)
The image processing device according to (5) above, wherein the display image generation unit aligns the position of the crystalline lens in the stereo image with the position of the crystalline lens in the operative field image to generate the display image.
(7)
The image processing device according to any one of (1) to (6) above, wherein the display image generation unit enlarges one or both of the crystalline lens in the stereo image and the crystalline lens in the operative field image to generate the display image.
(8)
The image processing device according to any one of (1) to (7) above, wherein the display image generation unit aligns the size, posture, and position of the crystalline lens in the stereo image with the size, posture, and position of the crystalline lens in the operative field image, calculates the separation distance between the crystalline lens in the stereo image and a surgical tool, generates separation distance information on the calculated separation distance, and adds it to the display image.
(9)
The image processing device according to any one of (1) to (8) above, wherein the display image generation unit aligns the size, posture, and position of the crystalline lens in the stereo image with the size, posture, and position of the crystalline lens in the operative field image, calculates the separation distance between the crystalline lens in the stereo image and a surgical tool, generates warning information according to the calculated separation distance, and adds it to the display image.
(10)
An image processing method comprising, by an image processing device:
receiving an operative field image of a patient's eye;
receiving a three-dimensional model of part or all of the crystalline lens of the eye;
generating a stereo image of the crystalline lens from the three-dimensional model; and
generating a display image including the operative field image and the stereo image based on the operative field image and the stereo image.
(11)
A surgical microscope system comprising:
a surgical microscope that obtains an operative field image of a patient's eye;
an image processing device that generates a display image; and
a display device that displays the display image,
wherein the image processing device includes:
an image input unit that receives the operative field image;
a three-dimensional model receiving unit that receives a three-dimensional model of part or all of the crystalline lens of the eye;
a stereo image generation unit that generates a stereo image of the crystalline lens from the three-dimensional model; and
a display image generation unit that generates the display image including the operative field image and the stereo image based on the operative field image and the stereo image.
(12)
An image processing method using the image processing device according to any one of (1) to (9) above.
(13)
A surgical microscope system comprising the image processing device according to any one of (1) to (9) above.
1   Surgical microscope system
10  Surgical microscope
11  Objective lens
12  Eyepiece lens
13  Image processing device
13a 3D model receiving unit (three-dimensional model receiving unit)
13b Stereo image generation unit
13c Image input unit
13d Display image generation unit
14  Monitor
20  Patient bed
51  Light source
52  Observation optical system
52a Half mirror
53  Front image capturing unit
54  Presentation unit
55  Interface unit
56  Speaker
500 Computer
510 CPU
520 ROM
530 RAM
540 Bus
550 Input/output interface
560 Input unit
570 Output unit
580 Recording unit
590 Communication unit
600 Drive
610 Removable recording medium

Claims (11)

  1.  患者の眼に対する術野画像を受領する画像入力部と、
     前記眼の水晶体の一部又は全部の三次元モデルを受領する三次元モデル受領部と、
     前記三次元モデルから前記水晶体のステレオ画像を生成するステレオ画像生成部と、
     前記術野画像及び前記ステレオ画像に基づいて、前記術野画像及び前記ステレオ画像を含む表示画像を生成する表示画像生成部と、
    を備える画像処理装置。
    an image input for receiving an operative field image for the patient's eye;
    a three-dimensional model receiving unit that receives a three-dimensional model of part or all of the lens of the eye;
    a stereo image generator that generates a stereo image of the crystalline lens from the three-dimensional model;
    a display image generation unit that generates a display image including the surgical field image and the stereo image based on the surgical field image and the stereo image;
    An image processing device comprising:
  2.  前記表示画像生成部は、
     前記ステレオ画像の前記水晶体のサイズを前記術野画像の前記水晶体のサイズに揃え、前記表示画像を生成する、
     請求項1に記載の画像処理装置。
    The display image generation unit is
    aligning the size of the lens of the stereo image with the size of the lens of the surgical field image to generate the display image;
    The image processing apparatus according to claim 1.
  3.  前記表示画像生成部は、
     前記ステレオ画像の前記水晶体の眼軸周りの向きを前記術野画像の前記水晶体の眼軸周りの向きに揃え、前記表示画像を生成する、
     請求項1に記載の画像処理装置。
    The display image generation unit is
    generating the display image by aligning the orientation of the lens of the stereo image around the eye axis with the orientation of the lens of the surgical field image around the eye axis;
    The image processing apparatus according to claim 1.
  4.  前記表示画像生成部は、
     前記ステレオ画像の前記水晶体の姿勢を前記術野画像の前記水晶体の姿勢に揃え、前記表示画像を生成する、
     請求項1に記載の画像処理装置。
    The display image generation unit is
    generating the display image by aligning the orientation of the lens in the stereo image with the orientation of the lens in the surgical field image;
    The image processing apparatus according to claim 1.
  5.  前記表示画像生成部は、
     前記ステレオ画像を前記術野画像に重ねて前記表示画像を生成する、
     請求項1に記載の画像処理装置。
    The display image generation unit is
    generating the display image by superimposing the stereo image on the operative field image;
    The image processing apparatus according to claim 1.
  6.  前記表示画像生成部は、
     前記ステレオ画像の前記水晶体の位置を前記術野画像の前記水晶体の位置に揃え、前記表示画像を生成する、
     請求項5に記載の画像処理装置。
    The display image generation unit is
    generating the display image by aligning the position of the lens in the stereo image with the position of the lens in the surgical field image;
    The image processing apparatus according to claim 5.
  7.  前記表示画像生成部は、
     前記ステレオ画像の前記水晶体及び前記術野画像の前記水晶体のどちら一方又は両方を拡大し、前記表示画像を生成する、
     請求項1に記載の画像処理装置。
    The display image generation unit is
    magnifying one or both of the lens of the stereo image and the lens of the surgical field image to generate the display image;
    The image processing apparatus according to claim 1.
  8.  前記表示画像生成部は、
     前記ステレオ画像の前記水晶体のサイズ、姿勢及び位置を前記術野画像の前記水晶体のサイズ、姿勢及び位置に揃え、前記ステレオ画像の前記水晶体と術具との離間距離を算出し、算出した前記離間距離に関する離間距離情報を生成して前記表示画像に加える、
     請求項1に記載の画像処理装置。
    The display image generation unit is
    Aligning the size, orientation, and position of the lens in the stereo image with the size, orientation, and position of the lens in the surgical field image, calculating the separation distance between the lens in the stereo image and the surgical instrument, and calculating the calculated separation generating separation distance information about the distance and adding it to the displayed image;
    The image processing apparatus according to claim 1.
  9.  The image processing apparatus according to claim 1, wherein
      the display image generation unit aligns the size, pose, and position of the lens in the stereo image with the size, pose, and position of the lens in the surgical field image, calculates a separation distance between the lens in the stereo image and a surgical tool, and generates warning information according to the calculated separation distance and adds it to the display image.
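Once the three-dimensional lens model has been registered to the surgical field image, the separation-distance and warning generation of claims 8 and 9 could reduce to a nearest-point distance plus thresholding, as sketched below. The threshold values and function names are placeholders, not values or names disclosed in the application.

```python
# Illustrative sketch only: nearest-point separation between the registered lens model
# and the tool tip, mapped to a warning level. Threshold values are placeholders.
import numpy as np

def lens_tool_separation_mm(lens_points_mm: np.ndarray, tool_tip_mm: np.ndarray) -> float:
    """Smallest Euclidean distance (mm) from the tool tip to the lens surface points."""
    return float(np.min(np.linalg.norm(lens_points_mm - tool_tip_mm, axis=1)))

def warning_level(distance_mm: float,
                  caution_mm: float = 1.0, danger_mm: float = 0.3) -> str:
    """Map the separation distance to a display warning level."""
    if distance_mm <= danger_mm:
        return "DANGER"
    if distance_mm <= caution_mm:
        return "CAUTION"
    return "OK"
```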
  10.  An image processing method comprising, by an image processing device:
      receiving a surgical field image of a patient's eye;
      receiving a three-dimensional model of part or all of the lens of the eye;
      generating a stereo image of the lens from the three-dimensional model; and
      generating, based on the surgical field image and the stereo image, a display image including the surgical field image and the stereo image.
  11.  A surgical microscope system comprising:
      a surgical microscope that obtains a surgical field image of a patient's eye;
      an image processing device that generates a display image; and
      a display device that displays the display image,
      wherein the image processing device includes:
      an image input unit that receives the surgical field image;
      a three-dimensional model reception unit that receives a three-dimensional model of part or all of the lens of the eye;
      a stereo image generator that generates a stereo image of the lens from the three-dimensional model; and
      a display image generation unit that generates, based on the surgical field image and the stereo image, the display image including the surgical field image and the stereo image.
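Read together, claims 10 and 11 describe a per-frame pipeline: receive the surgical field image and the lens model, render a stereo image of the lens, and compose the display image. A schematic sketch follows; `renderer` and `composer` are hypothetical stand-ins for the stereo image generator and the display image generation unit, not disclosed implementations.

```python
# Illustrative sketch only: per-frame flow corresponding to the method of claim 10.
# `renderer` and `composer` are hypothetical stand-ins for the stereo image generator
# and the display image generation unit; no disclosed implementation is implied.
def process_frame(field_img, lens_model_3d, renderer, composer):
    """Render a stereo view of the lens model and combine it with the field image."""
    stereo_img = renderer.render_stereo(lens_model_3d)        # stereo image generation
    return composer.compose_display(field_img, stereo_img)    # display image generation
```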
PCT/JP2022/000855 2021-01-29 2022-01-13 Image processing device, image processing method, and surgical microscope system WO2022163362A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-012781 2021-01-29
JP2021012781A JP2022116559A (en) 2021-01-29 2021-01-29 Image processing device, image processing method, and surgical microscope system

Publications (1)

Publication Number Publication Date
WO2022163362A1 true WO2022163362A1 (en) 2022-08-04

Family

ID=82653335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/000855 WO2022163362A1 (en) 2021-01-29 2022-01-13 Image processing device, image processing method, and surgical microscope system

Country Status (2)

Country Link
JP (1) JP2022116559A (en)
WO (1) WO2022163362A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011030689A (en) * 2009-07-31 2011-02-17 Nidek Co Ltd Fundus photographing system and method for processing three-dimensional fundus image
JP2012506272A (en) * 2008-10-22 2012-03-15 SensoMotoric Instruments Gesellschaft für Innovative Sensorik mbH Image processing method and apparatus for computer-aided eye surgery
JP2015130911A (en) * 2014-01-09 2015-07-23 Panasonic Healthcare Holdings Co., Ltd. Surgical operation support device and surgical operation support program
JP2016073409A (en) * 2014-10-03 2016-05-12 Sony Corporation Information processing apparatus, information processing method, and operation microscope apparatus
WO2018207466A1 (en) * 2017-05-09 2018-11-15 Sony Corporation Image processing device, image processing method, and image processing program
US20190099226A1 (en) * 2017-10-04 2019-04-04 Novartis Ag Surgical suite integration and optimization

Also Published As

Publication number Publication date
JP2022116559A (en) 2022-08-10

Similar Documents

Publication Publication Date Title
KR101451970B1 (en) An ophthalmic surgical apparatus and a method for controlling the same
JP5572161B2 (en) Ophthalmic surgery system and method of operating an ophthalmic surgery system
JP2012152469A (en) Ophthalmic surgical microscope
CN109963535A (en) Integrated form ophthalmic surgical system
US20130088414A1 (en) Surgical heads-up display that is adjustable in a three-dimensional field of view
CN106999298A (en) Image processing apparatus, image processing method and surgical operation microscope
US20120303007A1 (en) System and Method for Using Multiple Detectors
US9138138B2 (en) Ophthalmic apparatus and recording medium having ophthalmic program stored therein
JP6819223B2 (en) Ophthalmic information processing equipment, ophthalmic information processing program, and ophthalmic surgery system
JP2008295973A (en) Ophthalmologic measurement apparatus, ophthalmologic measurement program, and method for determining power of intraocular implant
WO2022163362A1 (en) Image processing device, image processing method, and surgical microscope system
JP2018051208A (en) Ophthalmologic information processing apparatus, ophthalmologic information processing program, and ophthalmologic information processing system
KR102169674B1 (en) Mixed reality fundus camera
JP7026988B1 (en) Surgical support device
JP7369148B2 (en) Vibration image for registration verification
WO2022163190A1 (en) Image processing device, image processing method, and surgical microscope system
WO2022163188A1 (en) Image processing device, image processing method, and surgical microscope system
EP4238484A1 (en) Ophthalmological observation device, method for controlling same, program, and storage medium
US20240164850A1 (en) Ophthalmic visualization using spectrum-independent imagers, edge detection, and visible-ir image merging
JP7377331B2 (en) ophthalmology equipment
WO2022163383A1 (en) Image processing device, image processing method, and surgical microscope system
Gulkas et al. Intraoperative Optical Coherence Tomography
WO2022091210A1 (en) Surgery assisting device
Miclos et al. A data acquisition, processing and storage system for an ophthalmic instrument: Fotobioftal-1
Miclo et al. A data acquisition, processing and storage system for an ophthalmic stereomicroscope

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22745587
    Country of ref document: EP
    Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 22745587
    Country of ref document: EP
    Kind code of ref document: A1