WO2017170488A1 - Optical axis position measuring system, optical axis position measuring method, optical axis position measuring program, and optical axis position measuring device


Info

Publication number: WO2017170488A1
Authority: WO (WIPO, PCT)
Application number: PCT/JP2017/012542
Other languages: French (fr), Japanese (ja)
Inventors: 清二 山本, 武志 三浦, 悦一 林本
Original assignee: 国立大学法人浜松医科大学 (National University Corporation Hamamatsu University School of Medicine)
Priority applications: JP2016-073581; JP2017-053847 (published as JP2017185212A)

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computerised tomographs
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B23/00 Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
    • G02B23/24 Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes

Abstract

The objective of the present invention is to accurately obtain, using an optical axis position measuring device that is lightweight, has a simple construction, and is excellent in operability and convenience, the three-dimensional relative positional relationship between the position of the optical axis of an elongated instrument such as a rigid endoscope and a position and attitude detecting marker. An optical axis position measuring device 80 includes a target image that can be captured by a rigid endoscope 11 and optical axis position measuring markers 91 that can be measured in three dimensions. A calculating means stores in advance the three-dimensional relative positional relationship between the target image and the optical axis position measuring markers 91; processes three-dimensional data obtained by three-dimensional shape measurement to acquire the three-dimensional relative positional relationship between a distal end portion 14 of the rigid endoscope 11 and a marker sphere 12 and the three-dimensional relative positional relationship between the marker sphere 12 and the optical axis position measuring markers 91; and acquires, from the target image captured by the rigid endoscope 11, the point on the target image corresponding to the center position of the captured image. The calculating means then obtains the three-dimensional relative positional relationship between the optical axis of the rigid endoscope 11 and the marker sphere 12 on the basis of the stored data and the acquired data.

Description

Optical axis position measuring system, optical axis position measuring method, optical axis position measuring program, and optical axis position measuring apparatus

The present invention relates to an optical axis position measuring system for measuring the three-dimensional relative positional relationship between the actual optical axis of an elongated instrument, such as an endoscope, having an optical axis and a marker for detecting the position and orientation of the elongated instrument, and also relates to an optical axis position measuring method, an optical axis position measuring program, and an optical axis position measuring apparatus usable in the system.

Conventionally, surgical navigation (surgery support information display) is known in which, when a surgical instrument such as an endoscope is inserted into the body of a patient, the exact position of the distal end of the instrument is displayed on an image obtained before surgery by CT (Computed Tomography) or MRI (Magnetic Resonance Imaging), thereby supporting the surgeon. For example, Patent Document 1, a surgery support system by the present inventors, describes a technique for aligning the patient's three-dimensional surface shape measured by a three-dimensional shape measuring apparatus with pre-imaged three-dimensional tomographic data. It also describes a technique for calculating the position and posture of a surgical instrument by measuring, with the three-dimensional shape measuring apparatus that measures the three-dimensional surface shape of the patient, the position/posture detection markers (the spheres 12 in FIG. 1) attached to the instrument. However, each of these methods only displays the position of the distal end of a surgical instrument or pointer; it does not display which part of the preoperative CT or MRI image corresponds to the part imaged by the endoscope.

If the site being imaged by the endoscope (the surgical field displayed on the endoscope monitor) could be confirmed on the preoperative CT image or the like, the operator could, for example, directly confirm the site to be operated on with the endoscope held in the left hand and, while recognizing which part of the preoperative CT image is being observed, freely change surgical instruments with the right hand and continue the surgical operation.

Patent Documents 2 and 3 are known as conventional techniques in which the part imaged by such an endoscope is displayed on an image. There is also WO2008/093517 as prior art by the present inventors.

Patent Document 2 describes a technique for displaying the optical axis direction of a rigid endoscope in use on a three-dimensional tomographic image in a surgical navigation apparatus.

Patent Document 3 describes a technique in which the place observed with an endoscope is determined using an endoscope having distance measuring means (triangulation by spot light irradiation, an ultrasonic sensor, etc.) for measuring the distance from the distal end of the insertion portion inserted into the patient's body to the surgical site within the body, and is displayed on preoperative CT/MRI images.

In Patent Documents 2 and 3 above, a marker such as a light emitting element attached to the endoscope and a position sensor for detecting the marker are used to detect the position and orientation of the endoscope. To make the coordinate system of the three-dimensional tomographic data coincide with the coordinate system specifying the position of the patient, it is necessary to attach some marker to the patient or to separately provide a device for measuring the patient's shape, which causes problems such as inconvenience to the patient and system complexity.

On the other hand, WO2008/093517, a prior art by the present inventors, uses a three-dimensional shape measuring device that measures the patient's three-dimensional surface shape to detect the position and orientation of the rigid endoscope, so it neither inconveniences the patient nor complicates the system.

However, in WO2008/093517, as in Patent Documents 2 and 3, the optical axis of the endoscope is assumed to be at its nominal value, and calibration of the optical axis is not considered. For example, for a direct-view endoscope, the optical axis information is displayed on the image on the assumption that the optical axis lies on an extension of the central axis of the endoscope barrel, that is, that the angle formed by the endoscope optical axis and the central axis of the endoscope is 0 degrees.

In order to accurately display on the image the part imaged by the endoscope in surgical navigation, information on the actual optical axis from the distal end of the endoscope out to a distance is necessary. Because an endoscope usually views objects relatively close to the lens, calibration of the optical axis or lens position in the immediate vicinity of the endoscope tip has been considered, but calibration of the optical axis from the tip out to a distance has never been considered. For example, Patent Document 4 describes an apparatus that calibrates the lens position, and the optical axis and field of view in the vicinity of the tip, of an endoscope having an elongated shaft and a distal end lens, but there is no description or suggestion of a method for calibrating the optical axis from the distal end of the endoscope out to a distance.

In general, because endoscopes often view parts relatively close to the lens, the deviation of the actual optical axis from its nominal value rarely has a significant effect. However, when the optical axis is displayed as a straight line on the image for navigation of the movement of the endoscope, the difference between the actual optical axis and the optical axis direction displayed on the navigation screen becomes significant. In developing the surgery support system, the present inventors found that the deviation of the actual optical axis from the nominal value is an amount that cannot be ignored. As a result of investigating the optical axis position for many endoscopes, the present inventors found that an endoscope with a viewing angle of 120 degrees can show a deviation of up to about 6 degrees from the nominal value (5% of the viewing angle). This means that the error in the optical axis position can be on the order of several millimeters at positions far from the endoscope tip. Since surgery is a precise operation, even an error of several millimeters may adversely affect it.
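
As a rough illustration of the magnitudes involved, the lateral offset of the true optical axis from the nominal axis at a given working distance follows from simple trigonometry; the 6-degree figure is the maximum deviation reported above, and the working distances below are assumed for illustration only:

```python
import math

# Lateral displacement of the true optical axis from the nominal axis,
# assuming the deviation is a pure angular offset at the endoscope tip.
def lateral_error_mm(distance_mm, deviation_deg):
    return distance_mm * math.tan(math.radians(deviation_deg))

for d in (20, 50, 100):  # assumed working distances in mm
    print(f"{d} mm from the tip: {lateral_error_mm(d, 6.0):.1f} mm offset")
# 20 mm -> ~2.1 mm, 50 mm -> ~5.3 mm, 100 mm -> ~10.5 mm
```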

Under such circumstances, the present inventors have previously proposed, as a method and apparatus for measuring the actual optical axis of a rigid endoscope, a three-dimensional relative relationship measuring method and apparatus for measuring the three-dimensional relative relationship between the optical axis from the distal end of the long axis portion of an object to be measured and a first labeling portion (see Patent Document 5: Japanese Patent No. 5560424).

In this prior art method, a three-dimensional relative relationship measuring device comprising a fixing means, a first calibration object, a second calibration object, and a moving means is used. A rigid endoscope or the like, which is the object to be measured, is fixed to the fixing means. The long axis portion of the object to be measured is provided with a first labeling portion whose position and three-dimensional shape can be measured by a three-dimensional shape measuring apparatus that includes imaging means such as a three-dimensional surface shape scanner. The first calibration object has a target that the optical axis from the tip of the long axis portion of the object to be measured strikes, and a second labeling portion whose position and three-dimensional shape can be measured by the three-dimensional shape measuring apparatus; the three-dimensional relative relationship between the center coordinates of the target and the second labeling portion is known, or can be measured by the three-dimensional shape measuring apparatus. The second calibration object has a third labeling portion whose position and three-dimensional shape can be measured by the three-dimensional shape measuring apparatus. In the state where the object to be measured is fixed by the fixing means, the three-dimensional relative relationship between the third labeling portion and the coordinates of the tip of the long axis portion of the object to be measured, or coordinates lying on the optical axis from that tip, is known or can be measured by the three-dimensional shape measuring apparatus.

When performing measurement using this apparatus, first, the first calibration object is moved by the moving means so that the optical axis from the tip of the long axis portion strikes the target. Next, the three-dimensional relative positions of the first to third labeling portions are measured with the three-dimensional shape measuring apparatus, whereby the three-dimensional relative relationship between the optical axis of the object to be measured and the first labeling portion is obtained. As a result, multiple coordinates, or multiple coordinates and vectors, which are the means for defining the position and orientation of an object, and the optical axis position far from the tip of the long axis portion can be accurately detected in the same coordinate system. Therefore, in a surgical operation support system that performs surgical navigation while displaying the part imaged by the rigid endoscope on an image, the imaged part displayed on the image (the intersection of the optical axis of the rigid endoscope and the body lumen) can be determined with high accuracy, and high-precision surgical navigation can be performed.

Patent Document 1: JP 2007-209531 A
Patent Document 2: JP 2001-293006 A
Patent Document 3: JP 2001-204738 A
Patent Document 4: JP 2003-528688 A (published Japanese translation of a PCT application)
Patent Document 5: Japanese Patent No. 5560424

However, the conventional technique of Patent Document 5 has several problems.

As described above, the three-dimensional relative relationship measuring apparatus of this prior art has a complicated structure including the fixing means, the first calibration object, the second calibration object, and the moving means, and its overall dimensions are quite large and heavy. Therefore, it is not easy for an operator to carry around, for example, and it can hardly be said to be excellent in portability and operability.

In addition, since the measurable range of a three-dimensional shape measuring means such as a three-dimensional surface shape scanner is limited, if the overall dimensions of the three-dimensional relative relationship measuring device are too large, the first to third labeling portions cannot all fit within the measurable range, and the three-dimensional relative relationship cannot be measured. In that case, position adjustment must be performed by moving either the measuring means or the three-dimensional relative relationship measuring device so that all of the first to third labeling portions fall within the measurable range; in either case, the complicated work of moving heavy equipment is forced on the user.

Further, with this three-dimensional relative relationship measuring apparatus, it is necessary to adjust the position by moving the first calibration object with the moving means so that the optical axis from the tip of the long axis portion strikes the center of the target. This operation is very complicated and time-consuming, and has been one of the causes of reduced operability. Moreover, the moving means of the apparatus has a mechanism for moving the first calibration object along two directions orthogonal to the optical axis direction, or for rotating it along an arc centered on the second calibration object, and its movable (sliding) parts use a lubricant, so it is not well suited to sterilization. Of course, sterilizing and using it is not necessarily impossible, but careful sterilization is required in consideration of volatilization and scattering of the lubricant, so it is difficult to say that the convenience is high.

The present invention has been made in view of the above problems. Its object is to provide an optical axis position measuring system, an optical axis position measuring method, and an optical axis position measuring program that can accurately obtain, using an optical axis position measuring apparatus that is light, simple in structure, and excellent in operability and convenience because no means for moving a calibration object is required, the three-dimensional relative positional relationship between the position of the optical axis starting from the distal end of an elongated instrument such as an endoscope and the means for defining the position and orientation of the elongated instrument. Another object of the present invention is to provide an optical axis position measuring device suitable for use in the optical axis position measuring system. When the object having the optical axis starting from the tip of the elongated instrument is a rigid endoscope, the optical axis position can be accurately measured out to a distance with good operability and convenience, enabling accurate surgical navigation. Note that what is expressed as a three-dimensional relative positional relationship in this application is the same as the three-dimensional relative relationship in the prior art.

As means (means 1) for solving the above problem, there is an optical axis position measuring system for measuring, in an elongated instrument that has imaging means for imaging an object and is provided with a position/orientation detection marker for detecting the position and orientation of the instrument, the position of the actual optical axis, which is the actual optical axis of the imaging means starting from the distal end portion. The system comprises: an optical axis position measuring device having a support mechanism that supports the elongated instrument at an arbitrary position within a predetermined range, a target image that can be imaged by the imaging means of the elongated instrument while the instrument is supported by the support mechanism, and an optical axis position measuring marker; a three-dimensional shape measuring apparatus that measures the three-dimensional surface shape of the optical axis position measuring device and the elongated instrument in a state where the instrument is supported on the device; and arithmetic means that stores a first three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the target image and the optical axis position measuring marker, and performs arithmetic processing using the three-dimensional data obtained by measurement by the three-dimensional shape measuring apparatus. The arithmetic means processes the three-dimensional data obtained by the measurement to acquire a second three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the position/orientation detection marker and the tip of the elongated instrument; performs image processing on the target image captured by the elongated instrument at a position where its tip is separated from the target image, to acquire a point on the target image corresponding to the center position of the captured image; processes the three-dimensional data obtained by the measurement at the support position of the elongated instrument when that point is acquired, to obtain a third three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the position/orientation detection marker and the optical axis position measuring marker; and, on the basis of the first to third three-dimensional relative positional relationships and the point on the target image corresponding to the center position of the captured image, performs an operation for obtaining the three-dimensional relative positional relationship between the position of the actual optical axis and the position/orientation detection marker.

According to the invention described in means 1 above, the arithmetic means stores in advance the three-dimensional relative positional relationship between the target image and the optical axis position measuring marker, and, by processing the three-dimensional data obtained by measurement with the three-dimensional shape measuring apparatus while the elongated instrument is supported on the optical axis position measuring device, can acquire the three-dimensional relative positional relationship between the position/orientation detection marker and the distal end of the elongated instrument, and the three-dimensional relative positional relationship between the position/orientation detection marker and the optical axis position measuring marker. By performing image processing on the target image captured by the elongated instrument, the arithmetic means can also acquire the point on the target image corresponding to the center position of the captured image. Then, by performing arithmetic processing using these three-dimensional relative positional relationships and the point on the target image corresponding to the center position of the captured image, the three-dimensional relative positional relationship between the actual optical axis of the elongated instrument and the position/orientation detection marker can be acquired. The elongated instrument can be supported by the optical axis position measuring device at an arbitrary position within a predetermined range, and the operator can, by his or her own operation, set the position of the elongated instrument with respect to the target image, which corresponds to the calibration object of the prior art, and perform measurement with the three-dimensional shape measuring apparatus and imaging with the elongated instrument. Therefore, the means for moving the calibration object and the second calibration object having a labeling portion, which were required in the prior art, can be omitted, and the three-dimensional relative positional relationship between the actual optical axis and the position/orientation detection marker can be measured using an optical axis position measuring apparatus that is light, simple in structure, and excellent in operability and convenience. Even when such an optical axis position measuring device is used, the three-dimensional relative positional relationship between the optical axis position and the position/orientation detection marker in an elongated instrument such as an endoscope having an optical axis starting from the distal end of the long axis portion can be measured accurately. Note that the actual optical axis in an elongated instrument having imaging means refers to the line connecting the points on an object that correspond to the center position of the captured image when the imaging means images the object, obtained as the distance between the object and the elongated instrument is changed. This definition also applies to the optical axis of the rigid endoscope described in the background art.
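
As an illustration of the final calculation, the following is a minimal sketch under the assumption that each relative positional relationship is represented as a homogeneous transform between coordinate frames; the variable names are illustrative and not taken from the patent:

```python
import numpy as np

# Hedged sketch: each relative positional relationship is modeled as a 4x4
# homogeneous transform between coordinate frames.
# T_marker_from_axmarker : optical-axis-measuring-marker frame -> position/orientation-marker frame (3rd relationship)
# T_axmarker_from_target : target-image frame -> optical-axis-measuring-marker frame (1st relationship, stored in advance)
# tip_in_marker          : tip coordinates in the position/orientation-marker frame (2nd relationship)
# p_target               : point on the target image corresponding to the captured-image centre (from image processing)

def to_homog(p):
    return np.append(np.asarray(p, float), 1.0)

def actual_axis_in_marker_frame(T_marker_from_axmarker,
                                T_axmarker_from_target,
                                tip_in_marker, p_target):
    # Bring the target point into the position/orientation-marker frame.
    p_marker = (T_marker_from_axmarker @ T_axmarker_from_target @ to_homog(p_target))[:3]
    tip = np.asarray(tip_in_marker, float)
    direction = p_marker - tip
    direction /= np.linalg.norm(direction)
    # The actual optical axis: the line through `tip` with direction `direction`,
    # both expressed relative to the position/orientation detection marker.
    return tip, direction
```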

In the optical axis position measuring apparatus of the optical axis position measuring system, it is preferable that the support mechanism comprises a base member and a guide portion provided on the base member that supports the elongated instrument in a state movable along its longitudinal direction; that the target image is drawn on the imaging surface of a target member provided in an inclined state with respect to the main surface of the base member, so that it can be imaged by the imaging means of the elongated instrument when the instrument is supported by the guide portion; and that the optical axis position measuring marker has a three-dimensional shape from which the position and orientation of the optical axis position measuring apparatus can be detected from the three-dimensional data obtained by measurement by the three-dimensional shape measuring apparatus, and is provided on at least one of the base member and the target member.

In an optical axis position measuring apparatus having such a configuration, in addition to the guide portion being provided on the base member, the target member having the target image is provided in an inclined state with respect to the main surface of the base member. Therefore, by operating the elongated instrument while it is supported on the guide portion, the operator can move it linearly and simply along its longitudinal direction without wobble, and can support it at an arbitrary position on the guide portion. Further, even if the actual optical axis is at an angle with respect to the longitudinal direction of the elongated instrument, the position of the image center point, which is the point of interest of the elongated instrument, can be shifted gradually. Therefore, the device can be applied both when the actual optical axis is parallel to the longitudinal direction of the elongated instrument and when it is at various angles, making the optical axis position measuring device highly versatile. For example, when the elongated instrument is a rigid endoscope, the device is applicable to measurement of a direct-view endoscope whose actual optical axis is at an angle of approximately 0° to the longitudinal direction, an oblique-viewing endoscope of 30° or 70°, a side-viewing endoscope of 90°, and the like.
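
Because the actual optical axis is defined as the line connecting the imaging centre points obtained at different distances, one plausible way to combine centre points sampled at several support positions along the guide is a least-squares line fit; the fitting method below is an assumption made for illustration, not a procedure stated in the patent:

```python
import numpy as np

# Minimal sketch: least-squares 3D line through imaging-centre points sampled
# at several support positions along the guide, all expressed in one frame
# (e.g. the position/orientation-marker frame).
def fit_axis(points):
    pts = np.asarray(points, float)            # shape (N, 3)
    centroid = pts.mean(axis=0)
    # Principal direction of the point cloud = best-fit line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0] / np.linalg.norm(vt[0])
    return centroid, direction                 # point on the line, unit direction
```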

The target image is preferably a lattice point pattern, a lattice line pattern, or a checker pattern, and the image processing performed by the arithmetic means preferably obtains the point on the target image corresponding to the center position of the captured image after performing distortion correction processing, center line acquisition processing, and corner acquisition processing.

When such target images are used, distortion correction processing of the captured image of the target image can be performed relatively easily, and the subsequent center line acquisition processing and corner acquisition processing can be performed with high accuracy. Therefore, the processing for acquiring the point on the target image corresponding to the center position of the captured image can be performed with high accuracy. As a result, the three-dimensional relative positional relationship between the actual optical axis, which is the actual optical axis of the elongated instrument, and the position/orientation detection marker can be measured more accurately. Among the target images listed above, a checker pattern is particularly preferable.
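
As an illustration only, and not the patent's own pipeline of distortion correction, centre-line acquisition, and corner acquisition, a common way to map the centre of a captured checker-pattern image onto target-plane coordinates is to detect the checker corners and estimate a homography; the OpenCV calls, pattern size, and square size below are assumptions:

```python
import cv2
import numpy as np

def image_center_on_target(img, pattern_size=(7, 7), square_mm=5.0):
    """Map the captured-image centre to checker-pattern (target) coordinates.
    pattern_size and square_mm are illustrative values, not from the patent."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        raise RuntimeError("checker pattern not detected")
    # Nominal corner positions on the target plane (millimetres).
    cols, rows = pattern_size
    obj = np.array([[x * square_mm, y * square_mm]
                    for y in range(rows) for x in range(cols)], np.float32)
    H, _ = cv2.findHomography(corners.reshape(-1, 2), obj)
    h, w = gray.shape
    center = np.array([[[w / 2.0, h / 2.0]]], np.float32)
    return cv2.perspectiveTransform(center, H)[0, 0]   # (x, y) on the target plane
```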

The elongated instrument whose actual optical axis position is measured by the optical axis position measuring system may be a rigid endoscope. In that case, the three-dimensional relative positional relationship between the position of the actual optical axis and the position/orientation detection marker acquired by the system may be used in a surgical operation support system that measures the patient's three-dimensional surface shape and the three-dimensional shape of the position/orientation detection marker provided on the rigid endoscope, performs arithmetic processing using the acquired three-dimensional data and the patient's three-dimensional tomographic data acquired in advance, and displays the calculation result, so as to add and display the position of the actual optical axis and the position of the distal end portion of the rigid endoscope on the image based on the patient's three-dimensional tomographic data.

As another means (means 2) for solving the above problem, there is an optical axis position measuring method for measuring, in an elongated instrument that has imaging means for imaging an object and is provided with a position/orientation detection marker for detecting the position and orientation of the instrument, the position of the actual optical axis, which is the actual optical axis of the imaging means starting from the tip. The method comprises: a preparation step of preparing an optical axis position measuring device having a support mechanism that supports the elongated instrument at an arbitrary position within a predetermined range, a target image that can be imaged by the imaging means of the elongated instrument while the instrument is supported by the support mechanism, and an optical axis position measuring marker; a step of obtaining a first three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the target image and the optical axis position measuring marker; a step of measuring the three-dimensional surface shape of the optical axis position measuring device and the elongated instrument in a state where the instrument is supported on the device and processing the obtained three-dimensional data to obtain a second three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the position/orientation detection marker and the tip of the elongated instrument; a step of acquiring, in a state where the elongated instrument is supported by the optical axis position measuring device, a point on the target image corresponding to the center position of the captured image, obtained by performing image processing on the target image captured by the elongated instrument at a position where its tip is separated from the target image; a step of measuring the three-dimensional surface shape of the optical axis position measuring device and the elongated instrument at the support position of the elongated instrument in the preceding step and processing the obtained three-dimensional data to obtain a third three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the position/orientation detection marker and the optical axis position measuring marker; and a step of acquiring, on the basis of the first to third three-dimensional relative positional relationships and the point on the target image corresponding to the center position of the captured image, the three-dimensional relative positional relationship between the position of the actual optical axis and the position/orientation detection marker.

As another means (means 3) for solving the above problem, there is an optical axis position measuring program to be executed by arithmetic means provided in an optical axis position measuring system for measuring, in an elongated instrument that has imaging means for imaging an object and is provided with a position/orientation detection marker for detecting the position and orientation of the instrument, the position of the actual optical axis, which is the actual optical axis of the imaging means starting from the tip. The program stores in advance a first three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the target image and the optical axis position measuring marker in an optical axis position measuring device having a support mechanism that supports the elongated instrument at an arbitrary position within a predetermined range, a target image that can be imaged by the imaging means of the elongated instrument while the instrument is supported by the support mechanism, and an optical axis position measuring marker. The program causes the arithmetic means to perform: a calculation that processes the three-dimensional data obtained when the three-dimensional surface shape of the optical axis position measuring device and the elongated instrument is measured with the instrument supported on the device, to obtain a second three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the position/orientation detection marker and the distal end of the elongated instrument; a calculation that performs image processing on the target image captured by the elongated instrument at a position where its tip is separated from the target image while the instrument is supported by the optical axis position measuring device, to obtain a point on the target image corresponding to the center position of the captured image; a calculation that processes the three-dimensional data obtained when the three-dimensional surface shape of the optical axis position measuring device and the elongated instrument is measured at the support position of the elongated instrument when that point is obtained, to obtain a third three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the position/orientation detection marker and the optical axis position measuring marker; and a calculation that obtains, on the basis of the first to third three-dimensional relative positional relationships and the point on the target image corresponding to the center position of the captured image, the three-dimensional relative positional relationship between the position of the actual optical axis and the position/orientation detection marker.

As another means (means 4) for solving the above problem, there is an optical axis position measuring device used when measuring, in an elongated instrument that has imaging means for imaging an object and is provided with a position/orientation detection marker for detecting the position and orientation of the instrument, the position of the actual optical axis, which is the actual optical axis of the imaging means starting from the tip of the instrument. The device comprises: a support mechanism comprising a base member and a guide portion provided on the base member for supporting the elongated instrument in a state movable along its longitudinal direction; a target member, provided in an inclined state with respect to the main surface of the base member, on whose imaging surface is drawn a target image that can be imaged by the imaging means of the elongated instrument when the instrument is supported on the guide portion; and an optical axis position measuring marker that has a three-dimensional shape from which the position and orientation of the optical axis position measuring device can be detected from the three-dimensional data acquired by measurement by the three-dimensional shape measuring apparatus, that is provided on at least one of the base member and the target member, and whose three-dimensional relative positional relationship with the target image is known.

According to the invention described in means 4 above, the elongated instrument can be supported by the optical axis position measuring device at an arbitrary position on the guide portion, and the operator can, by his or her own operation, set the position of the elongated instrument with respect to the target image, which corresponds to the calibration object of the prior art, and perform measurement with the three-dimensional shape measuring apparatus and imaging with the elongated instrument. Therefore, the means for moving the calibration object and the second calibration object having a labeling portion, which were required in the prior art, can be omitted, and an optical axis position measuring device that is light, simple in structure, and excellent in operability and convenience can be obtained. In addition, since the guide portion is provided on the base member and the target member having the target image is provided in an inclined state with respect to the main surface of the base member, the operator can, by operating the elongated instrument while it is supported on the guide portion, move it linearly and simply along its longitudinal direction without wobble and support it at an arbitrary position on the guide portion. Furthermore, even if the actual optical axis is at an angle with respect to the longitudinal direction of the elongated instrument, the center position of the captured image, which is the point of interest of the elongated instrument, can be shifted gradually. Therefore, the device can be applied whether the actual optical axis is parallel to the longitudinal direction of the elongated instrument or at various angles, giving it high versatility. For example, when the elongated instrument is a rigid endoscope, the device is applicable to measurement of a direct-view endoscope whose actual optical axis is at an angle of approximately 0° to the longitudinal direction, an oblique-viewing endoscope of 30° or 70°, a side-viewing endoscope of 90°, and the like. Based on the above, it is possible to provide an optical axis position measuring device suitable for use in the above optical axis position measuring system, capable of accurately measuring the three-dimensional relative positional relationship between the position of the actual optical axis in an elongated instrument such as a rigid endoscope having an optical axis starting from the tip of the instrument and the position/orientation detection marker that defines the position and orientation of the instrument.

As described in detail above, according to the inventions described in claims 1 to 6, since the means for moving the calibration object and the second calibration object having a labeling portion are unnecessary, it is possible to provide an optical axis position measurement system, an optical axis position measurement method, and an optical axis position measurement program that can accurately measure, using an optical axis position measuring device that is light, simple in structure, and excellent in operability and convenience, the three-dimensional relative positional relationship between the optical axis position in an elongated instrument such as an endoscope having an optical axis starting from its distal end and the position/orientation detection marker that defines the position and orientation of the instrument. According to the invention described in claim 7, an optical axis position measuring device suitable for use in the optical axis position measuring system and the optical axis position measuring method can be provided. When the object having the optical axis starting from the tip of the elongated instrument is a rigid endoscope, the optical axis position can be accurately measured out to a distance with good operability and convenience, and accurate surgical navigation can be performed.

FIG. 1 is a schematic diagram showing the configuration of the surgery support system according to an embodiment of the present invention. FIGS. 2 and 3 are diagrams for explaining the optical axis offset of a rigid endoscope (direct-view endoscope). FIG. 4 is a flowchart showing the processing in the surgery support system according to the embodiment. FIG. 5 shows a display example of the optical axis and intersection of the rigid endoscope. FIG. 6 is a photograph showing the overall structure of the optical axis position measuring apparatus of the embodiment. FIG. 7 shows (a) a front view, (b) a rear view, (c) a side view, and (d) a top view of the optical axis position measuring apparatus of the embodiment. FIG. 8 is a perspective view showing the optical axis position measuring apparatus of the embodiment in use. FIG. 9 shows (a) the endoscope tip in contact with the target image during use, (b) the tip spaced apart from the target image, and (c) the tip spaced further from the target image. FIGS. 10 to 12 are captured images of the rigid endoscope for explaining the image processing performed in the embodiment. FIG. 13 is a diagram visually showing the calculation for obtaining the three-dimensional relative positional relationship performed in the embodiment.

The present invention relates to an optical axis position measuring system and an optical axis position measuring method for measuring, in an elongated instrument having imaging means such as a rigid endoscope, the position of the actual optical axis, which is the actual optical axis of the imaging means starting from the distal end portion. Before explaining this system and method, in order to explain the necessity of acquiring the position of the actual optical axis of an elongated instrument with high accuracy, a surgical operation support system to which the position of the actual optical axis obtained by the present invention is applied will be described with reference to the drawings.

FIG. 1 is a configuration diagram schematically showing an embodiment of a surgical operation support system 1 to which the position of the actual optical axis acquired by the present invention is applied. The surgery support system 1 is a device that provides the surgeon 75 or the like with information about the image captured by the rigid endoscope 11 during surgery on the patient 60. Surgery in which the surgery support system 1 is used is surgery in which imaging is performed by the rigid endoscope 11, such as endoscopic sinus surgery in otolaryngology.

As shown in FIG. 1, the surgery support system 1 includes a rigid endoscope 11 as an elongated instrument, marker spheres 12, a three-dimensional shape measuring device 20, a CT device 30, a PC (Personal Computer) 40, and a display device 50. Further, if the optical axis position measuring device 80 described later is arranged within the measurement range of the three-dimensional shape measuring device 20, an optical axis position measuring system according to the present invention for measuring the three-dimensional relative positional relationship between the actual optical axis of the rigid endoscope 11 and the marker spheres 12 (position/orientation detection markers) can also be provided.

The rigid endoscope 11 is a device that is operated by the operator 75 and is inserted into the patient 60 (target object) to image the inside. The rigid endoscope 11 has an elongated shape so that it can be inserted into the living body of the patient 60, and a mechanism for imaging the inside of the patient 60 is provided at its distal end. The mechanism is, for example, a lens positioned so as to face the imaging target and an imaging element such as a CCD image sensor (Charge Coupled Device Image Sensor) provided at the imaging position of the lens. A CMOS image sensor (Complementary Metal Oxide Semiconductor Image Sensor) may be used in place of the CCD image sensor. The imaging direction A of the rigid endoscope 11 is determined by how the rigid endoscope 11 is positioned; usually, the optical axis direction of the lens is the imaging direction A of the rigid endoscope 11. Information on the image captured by the rigid endoscope 11 is output by cable to the PC 40, which is electrically connected to the rigid endoscope 11. Note that the rigid endoscope 11 does not need to have a special configuration, and a conventionally used rigid endoscope can be employed.

The marker spheres 12 are objects that can define three or more fixed points and are fixedly provided in a predetermined relative positional relationship with respect to the imaging direction A (actual optical axis) of the rigid endoscope 11. The marker spheres 12 are scanned by the three-dimensional shape measuring device 20; the three-dimensional coordinates of a plurality of points on their surfaces are obtained from the scan data, and the sphere center coordinates are obtained from those coordinates. Specifically, the marker spheres 12 are spherical members of mutually different sizes fixed to the rigid endoscope 11 via rod-shaped members 13. The different sizes are used because, when the three-dimensional coordinates of a plurality of points (hereinafter, three-dimensional data) are calculated from the scan data, the three-dimensional data of each sphere are extracted, and the sphere center coordinates are calculated, the diameter of each sphere is also calculated, so that the individual sphere centers can be distinguished and detected.
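
As an illustration of how the sphere centre coordinates and diameters might be recovered from the scanned surface points, the following is a minimal linear least-squares sphere fit; the concrete fitting method used in the system is not specified in this passage, so this is an assumed implementation:

```python
import numpy as np

# Hedged sketch: linear least-squares sphere fit. Given 3D surface points
# scanned from one marker sphere, recover its centre and radius; the radius
# is what allows spheres of different sizes to be told apart.
def fit_sphere(points):
    p = np.asarray(points, float)                      # shape (N, 3)
    A = np.column_stack([2.0 * p, np.ones(len(p))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius
```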

The marker spheres 12 are provided on the rigid endoscope 11 at positions that are not inserted into the patient 60, rearward of the portion that is inserted. Furthermore, so that the positional relationship between the marker spheres 12 and the imaging direction A (actual optical axis) from the distal end portion of the rigid endoscope 11 remains constant, the rigid endoscope 11 is formed of a hard material, so that it cannot bend, from the portion inserted into the patient 60 to the portion where the marker spheres 12 are provided.

Note that the object provided on the rigid endoscope 11 that can define three or more fixed points need only be in a fixed relative positional relationship with respect to the imaging direction A (actual optical axis) from the distal end portion of the rigid endoscope 11, and need only allow three or more fixed-point coordinates to be distinguished and obtained from the data obtained by scanning with the three-dimensional shape measuring apparatus 20. Therefore, it does not necessarily have to be spherical like the marker spheres 12 of FIG. 1, and may have a shape such as a rectangular parallelepiped, a cylinder, or a cone. Moreover, it is only necessary to be able to detect the position and orientation of the rigid endoscope 11; instead of three or more fixed-point coordinates, a shape from which two or more fixed-point coordinates and one or more vectors, or one or more fixed-point coordinates and two or more vectors, can be detected may also be used.

The three-dimensional shape measuring device 20 is a device that three-dimensionally scans the surface of the patient 60 and the marker spheres 12 when the rigid endoscope 11 is inserted into the patient 60. As shown in FIG. 1, in the case where the rigid endoscope 11 is inserted through the nostril of the patient 60 and the head of the patient 60 is to be measured, the three-dimensional shape measuring apparatus 20 is provided at a position where the shapes of the face of the patient 60 and the marker spheres 12 can be measured. The three-dimensional shape measuring apparatus 20 is electrically connected to the PC 40 and transmits to the PC 40 the data, obtained by scanning, from which the three-dimensional data are calculated.

The data obtained by scanning with the three-dimensional shape measuring apparatus 20 are used in the PC 40 to calculate the three-dimensional coordinates (three-dimensional data) of a plurality of points on the scanned surface and the position and orientation information. As the three-dimensional shape measuring apparatus 20, for example, an apparatus based on the phase shift method described in Japanese Patent Laid-Open No. 2003-254732 can be used. This apparatus performs a three-dimensional scan by projecting a lattice pattern with white light, similar to natural sunlight, emitted from a xenon lamp.
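
As a rough illustration of the phase-shift principle mentioned above, the following is a generic four-step formulation (a quarter-period shift between captures); it is an assumed, textbook form and not the internal processing of the cited apparatus:

```python
import numpy as np

# Generic four-step phase-shift demodulation: four images of the projected
# grating, each shifted by a quarter period, give the wrapped phase per pixel.
# Depth is then obtained from the phase via the scanner's calibration (not shown).
def wrapped_phase(i1, i2, i3, i4):
    return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)
```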

In addition, if Fscan manufactured by Pulstec Industrial Co., Ltd., a three-dimensional shape measuring apparatus based on the phase shift method, is used, shape measurement can be performed from a distance of 90 ± 10 cm with a measurement time of less than one second. The resolution is 0.1 to 0.6 mm, and the apparatus also functions as a normal digital camera. That is, a high-resolution image having three-dimensional data can be acquired in less than one second. Moreover, since the light used is visible light and no laser is used, the three-dimensional shape of the human body can be measured safely.

The CT apparatus 30 acquires three-dimensional tomographic data of the patient 60 into which the rigid endoscope 11 is inserted. The three-dimensional tomographic data of the patient 60 obtained by the CT apparatus 30 are taken to be data in a first coordinate system.

The CT apparatus 30 scans an object using radiation or the like and creates computer-processed images (CT images) of its internal structure as sections cut at equal intervals (for example, 1 mm). These are assembled into three-dimensional tomographic data, which is information indicating the internal three-dimensional shape; an existing CT apparatus can be used. The CT apparatus 30 is electrically connected to the PC 40 and transmits the acquired three-dimensional tomographic data of the patient 60 to the PC 40. Note that the CT apparatus 30 does not need to be installed at the same location as the three-dimensional shape measuring apparatus 20; normally, the scanning by the three-dimensional shape measuring apparatus 20 and the acquisition of three-dimensional tomographic data by the CT apparatus 30 are performed separately. For example, the method described in Japanese Patent Application Laid-Open No. 2005-278992 can be used to create three-dimensional tomographic data from CT images.

Note that the surgery support system 1 only needs to be able to acquire information indicating the three-dimensional shape including the inside of the patient 60. Therefore, the means for acquiring the patient's internal shape (three-dimensional tomographic data acquisition means) is not necessarily limited to the CT apparatus 30; for example, an MRI apparatus or an ultrasonic diagnostic apparatus may be used.

The PC 40 is a device that receives the data obtained by scanning with the three-dimensional shape measuring apparatus 20 and the three-dimensional tomographic data of the patient 60 acquired by the CT apparatus 30, and performs information processing on these pieces of information. Specifically, the PC 40 includes hardware such as a CPU (Central Processing Unit) and a memory, and the following functions of the PC 40 are realized by the operation of this hardware. As shown in FIG. 1, the PC 40 includes, as functional components, a patient shape acquisition unit 41, a captured image acquisition unit 42, a surface shape calculation unit 43, a coordinate axis matching unit 44, an endoscope vector calculation unit 45, an intersection calculation unit 46, and an output unit 47.

The patient shape acquisition unit 41 is a means for receiving the three-dimensional tomographic data of the patient 60 transmitted from the CT apparatus 30. The patient shape acquisition unit 41 outputs the received three-dimensional tomographic data of the patient 60 to the coordinate axis matching unit 44, the intersection calculation unit 46, and the like as necessary. As shown in FIG. 1, the surgery support system 1 does not necessarily need to include the CT apparatus 30 itself as patient shape acquisition means; it is sufficient that it can receive the three-dimensional tomographic data of the patient 60 (for example, imaged in advance).

The captured image acquisition unit 42 is means for receiving data obtained by scanning by the three-dimensional shape measuring apparatus 20. The captured image acquisition unit 42 outputs the received data to the surface shape calculation unit 43, the endoscope vector calculation unit 45, and the like.

The surface shape calculation unit 43 is a surface shape calculation means that calculates a plurality of coordinate data representing the three-dimensional shape of the surface of the patient 60 from the data obtained by scanning with the three-dimensional shape measuring apparatus 20. In this application of the surgery support system 1, the surface of the patient 60 is the face of the patient 60. The plurality of coordinate data representing the three-dimensional shape acquired by the surface shape calculation unit 43 are calculated, for example, as coordinate data in a coordinate system set in the three-dimensional shape measuring apparatus 20. This coordinate system differs from the first coordinate system described above and is referred to as the second coordinate system. That is, the plurality of coordinate data representing the three-dimensional shape of the patient 60 calculated from the scan data of the three-dimensional shape measuring apparatus 20 are data in the second coordinate system. The surface shape calculation unit 43 outputs the data indicating the calculated three-dimensional shape of the surface of the patient 60 to the coordinate axis matching unit 44.

The coordinate axis matching unit 44 is a coordinate axis matching means that performs coordinate conversion on either or both of the three-dimensional tomographic data of the patient 60 acquired by the patient shape acquisition unit 41 and the data indicating the three-dimensional shape of the surface of the patient 60 calculated by the surface shape calculation unit 43, so that the data in the first coordinate system and the data in the second coordinate system coincide with each other. That is, the coordinate axis matching unit 44 is a means for enabling the three-dimensional tomographic data from the CT apparatus 30 and the data indicating the three-dimensional shape calculated from the scan data of the three-dimensional shape measuring apparatus 20 to be processed as data in the same coordinate system.

Specifically, the coordinate axis matching unit 44 calculates coordinate conversion coefficients for matching the coordinate axes by associating the positions of the face of the patient 60, which is common to both the three-dimensional tomographic data obtained by the CT apparatus 30 and the data indicating the three-dimensional shape calculated from the scan data of the three-dimensional shape measuring apparatus 20. The process of associating the positions of the common face is performed, for example, by a pattern matching method; as a result, coordinate conversion coefficients are calculated for converting data in one of the first and second coordinate systems into data in the other. The coordinate axis matching unit 44 outputs the calculated coordinate conversion coefficients and the like to the endoscope vector calculation unit 45, the intersection calculation unit 46, the output unit 47, and the like as necessary. After the coordinate axes have been matched, the above coordinate conversion coefficients and the like are applied to the data indicating the three-dimensional shape in the endoscope vector calculation unit 45, the intersection calculation unit 46, the output unit 47, and so on, so that the three-dimensional tomographic data from the CT apparatus 30 and the data indicating the three-dimensional shape calculated from the scan data of the three-dimensional shape measuring apparatus 20 are processed in the same coordinate system. Hereinafter, this coordinate system is referred to as the coordinate system F.
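
As an illustration of what such coordinate conversion coefficients amount to, the following sketch computes a rigid rotation and translation from already-paired corresponding points in the two coordinate systems (for example, face points associated by the pattern-matching step above). The SVD-based alignment shown here is a standard technique substituted for illustration, not the patent's specific procedure:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~= dst_i."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```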

The endoscope vector calculation unit 45 is endoscope vector calculation means that calculates the center coordinates of the marker sphere 12 from the plurality of coordinate data of the marker sphere 12 scanned by the three-dimensional shape measuring apparatus 20, performs coordinate conversion using the coordinate conversion coefficient calculated by the coordinate axis matching unit 44 so that the data are in the coordinate system F, and calculates the imaging direction vector A of the rigid endoscope 11 in the coordinate system F from the coordinate-converted center coordinates of the marker sphere 12 and the previously stored positional relationship between the center of the marker sphere 12 and the imaging direction from the distal end of the rigid endoscope 11. Note that the imaging direction vector A mentioned here includes the position that is the starting point of the vector. That is, the imaging direction vector A of the rigid endoscope 11 indicates from which point and in which direction the imaging is performed. The endoscope vector calculation unit 45 stores in advance information indicating the positional relationship between the center of the marker sphere 12 and the imaging direction vector A of the rigid endoscope 11. Specifically, the information indicating the positional relationship is, for example, the coordinates of the tip of the rigid endoscope 11, the coordinates of the target point at which the optical axis (imaging direction vector A) of the rigid endoscope 11 arrives, and the center coordinates of the marker sphere 12, all in the same coordinate system. The endoscope vector calculation unit 45 outputs the data of the imaging direction vector A in the calculated coordinate system F to the intersection calculation unit 46.

Here, the optical axis of the rigid endoscope 11 (the straight line indicating the imaging center in the imaging direction) should match the nominal value published by the manufacturer, but in practice it deviates from the nominal value. For example, in the case of an endoscope with a viewing angle of 120 degrees, an error in the optical axis direction of up to about 6 degrees (5% of the viewing angle) may occur. Therefore, if the imaging direction vector A is determined on the assumption that the optical axis is exactly at the nominal value, there will be a discrepancy between the field of view of the actual endoscope and the information to be navigated. For this reason, the previously stored positional relationship between the center of the marker sphere 12 and the imaging direction vector A of the rigid endoscope 11 must be obtained by measuring the actual optical axis position (imaging direction vector A) together with the center coordinates of the marker sphere 12 in the same coordinate system and storing the result.

An example of optical axis misalignment when the rigid endoscope 11 is a direct-view endoscope will be described with reference to FIGS. 2 and 3. In the case of a direct-view endoscope, the optical axis (the straight line indicating the imaging center in the imaging direction) should coincide with the lens barrel center line, but in reality a slight deviation occurs due to various factors. FIG. 2 shows an example in which an angular shift occurs at the tip of the lens barrel due to factors such as the objective lens: the optical axis is shifted by θ degrees with respect to the lens barrel center line (see FIG. 2A), and the direction of the shift is φ degrees from the zenith direction (see FIG. 2B). The example in FIG. 2 is based on the premise that the optical axis shift originates from the intersection between the front end surface of the lens barrel and the lens barrel center line. However, the deviation does not always originate from this point; as shown in FIG. 3B, the optical axis (the straight line indicating the imaging center in the imaging direction) sometimes does not pass through the intersection of the front end surface of the lens barrel and the lens barrel center line. The measurement of the optical axis position is performed using the optical axis position measuring system using the three-dimensional shape measuring apparatus 20 described later. In FIGS. 2 and 3, the case of the direct-view endoscope has been described, but the same optical axis shift may of course also occur in perspective endoscopes and side-view endoscopes.
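For reference, if the lens barrel center line is taken as the z axis and the zenith direction of FIG. 2B as the y axis (these axis choices are an assumption made here for illustration, not stated in the patent), the direction of an optical axis deviated by θ degrees toward a direction φ degrees from the zenith can be written as follows:

import numpy as np

def deviated_axis(theta_deg, phi_deg):
    """Unit direction of an optical axis tilted theta degrees off the barrel
    axis (+z), with the tilt lying phi degrees from the zenith direction (+y)
    measured around the barrel axis."""
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    return np.array([np.sin(th) * np.sin(ph),   # sideways component
                     np.sin(th) * np.cos(ph),   # zenith component
                     np.cos(th)])               # along the barrel axis

For θ = 0 this reduces to the barrel axis itself; for the roughly 6-degree worst case mentioned above, the direction differs from the barrel axis by about 0.1 rad.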

The intersection calculation unit 46 is intersection calculation means that calculates the intersection between the imaging direction vector A of the rigid endoscope 11 in the coordinate system F calculated by the endoscope vector calculation unit 45 and the surfaces constituting the interior of the patient 60 indicated by the information acquired by the patient shape acquisition unit 41. This intersection is the point (center point) that the rigid endoscope 11 is imaging, in the information indicating the three-dimensional shape obtained by the CT apparatus 30. Specifically, the intersection calculation unit 46 converts the surfaces constituting the inside of the patient 60 into three-dimensional solid data, and calculates the intersection coordinates between each surface constituting the three-dimensional solid data and the imaging direction vector A of the rigid endoscope 11. The calculation of the intersection coordinates will be described later in more detail. The intersection calculation unit 46 outputs the calculated intersection coordinate data to the output unit 47.

The output unit 47 is output means that superimposes the intersection coordinate data calculated by the intersection calculation unit 46 on the CT image data, which is the information indicating the surfaces constituting the inside of the patient 60 acquired by the patient shape acquisition unit 41, and outputs the result to the display device 50. The output unit 47 may also output the endoscope image data captured by the rigid endoscope 11 and input to the PC 40 to the display device 50 together.

The display device 50 displays information input from the PC 40. By referring to the display device 50, the surgeon can know which part of the patient 60 is imaged by the rigid endoscope 11.

Subsequently, the operation of the surgery support system 1 will be described with reference to the flowchart of FIG. This operation is, for example, the operation when treatment or the like is performed by inserting the rigid endoscope 11 during surgery on the patient 60. In this description, the processing before surgery and the processing at the time of surgery will be described separately.

First, before the surgery, CT scan imaging is performed on the patient 60 using the CT apparatus 30 (see step S10). This CT scan imaging is performed on the part of the patient 60 into which the rigid endoscope 11 is to be inserted. Thereby, information indicating the three-dimensional shape of the face, which is the surface of the patient 60, and of the surfaces constituting the inside of the patient 60 into which the rigid endoscope 11 is inserted is acquired. The information indicating the three-dimensional shape inside the patient 60 (three-dimensional tomographic data) acquired by the CT scan imaging is transmitted to the PC 40. In the PC 40, the information is acquired by the patient shape acquisition unit 41 and stored in the PC 40 (see step S20). The above is the processing before surgery, performed, for example, on the day before surgery.

Next, the processing at the time of surgery will be described. First, the patient 60 enters the operating room and, as shown in FIG. 1, is placed on his or her back on the operating table 70 so that the rigid endoscope 11 can be inserted through a nostril. After the patient 60 is placed and before the rigid endoscope 11 is inserted, the placed patient 60 is scanned by the three-dimensional shape measuring apparatus 20 (see step S30). The data obtained by the scanning is transmitted from the three-dimensional shape measuring apparatus 20 to the PC 40 and received by the captured image acquisition unit 42 in the PC 40. The received data is output from the captured image acquisition unit 42 to the surface shape calculation unit 43.

The surface shape calculation unit 43 calculates data indicating the three-dimensional shape of the face, which is the surface of the patient 60, from the received data (see step S40). The calculated data indicating the three-dimensional shape of the face of the patient 60 is output from the surface shape calculation unit 43 to the coordinate axis matching unit 44. At the same timing, the data indicating the three-dimensional shape inside the patient 60 obtained by the CT apparatus 30 (three-dimensional tomographic data) stored in the PC 40 is output from the patient shape acquisition unit 41 to the coordinate axis matching unit 44.

At this time, the coordinate axes of the coordinate system of the data indicating the three-dimensional shape of the patient 60 obtained by the CT apparatus 30 and of the coordinate system of the data indicating the three-dimensional shape of the face, which is the surface of the patient 60, obtained from the data of the three-dimensional shape measuring apparatus 20 do not match. The data from the CT apparatus 30 and the data from the three-dimensional shape measuring apparatus 20 each take values in their own individual coordinate systems.

Here, the coordinate axis matching unit 44 calculates a coordinate conversion coefficient by matching the shape of the face in the two data sets, and the coordinate system in which the data are coordinate-converted so that the coordinate axes of the two data sets coincide is the coordinate system F (see step S50). The matching of the face shape is performed by the pattern matching method as described above. The parts used for matching are set in advance, such as the whole face or a characteristic part such as the nose or cheek. The coordinate conversion coefficients for converting the data into the coordinate system F are output from the coordinate axis matching unit 44 to the endoscope vector calculation unit 45, the intersection calculation unit 46, and the output unit 47, respectively, and the data are coordinate-converted. Thereafter, information processing on the three-dimensional shapes is performed on the basis of data in the coordinate system F. The above is the processing up to the start of surgery.

Further, before the start of the operation, the actual optical axis position of the rigid endoscope 11 is measured and stored using an optical axis position measuring device 80 described later.

Subsequently, the operation is started, and the rigid endoscope 11 is inserted into the patient 60 by the operator 75. At this time, the head of the patient 60 is not moved greatly after the processing of steps S30 to S55 has been performed. This is to prevent the deviation from increasing between the three-dimensional data of the patient surface already converted into the coordinate system F and the three-dimensional data of the patient surface obtained during surgery. Note that a minute shift causes no problem if it is corrected by correcting the coordinate conversion coefficient for converting the data into the coordinate system F using the three-dimensional data of the patient surface obtained during the operation. After the rigid endoscope 11 is inserted into the patient 60, the patient 60 and the marker sphere 12 are scanned by the three-dimensional shape measuring apparatus 20 (see step S60). The scanned data is transmitted from the three-dimensional shape measuring apparatus 20 to the PC 40 and received by the captured image acquisition unit 42 in the PC 40. The received data is output from the captured image acquisition unit 42 to the endoscope vector calculation unit 45.

Subsequently, the endoscope vector calculation unit 45 calculates the three-dimensional coordinates of the center of the marker sphere 12 (step S70). Subsequently, the imaging direction vector A of the rigid endoscope 11 is calculated as data in the coordinate system F from the three-dimensional coordinates of the center of the marker sphere 12 calculated by the endoscope vector calculation unit 45 and the previously stored information indicating the positional relationship between the center coordinates of the marker sphere 12 and the imaging direction vector A of the rigid endoscope 11 (see step S80). At this time, the imaging direction vector A of the rigid endoscope 11 in the coordinate system F is calculated using the information, stored in advance for each endoscope, indicating the positional relationship between the center coordinates of the marker sphere 12 and the actual optical axis position (imaging direction vector A).

The information of the calculated imaging direction vector A is output from the endoscope vector calculation unit 45 to the intersection calculation unit 46. At the same timing, information indicating the three-dimensional shape of the patient 60 by the CT apparatus 30 stored in the PC 40 is output from the patient shape acquisition unit 41 to the intersection calculation unit 46. Subsequently, the intersection calculation unit 46 calculates the intersection between the imaging direction vector A of the rigid endoscope 11 and the surface constituting the inside of the patient 60 (see step S90).

Intersection calculation is performed as follows. First, the intersection calculation unit 46 converts information indicating the three-dimensional shape of the patient 60 by the CT apparatus 30 into three-dimensional solid data. The conversion into the three-dimensional solid data may be performed before the operation and stored in the PC 40 in advance. When the three-dimensional solid data is, for example, polygon data, the inside of the patient 60 is constituted by a large number of triangular faces, and when the three-dimensional solid data is voxel data, the inside of the patient 60 is constituted by a large number of hexahedrons. Next, the intersection of such a triangular or hexahedral surface and the imaging direction vector A of the rigid endoscope 11 is calculated. For the calculation of this intersection, for example, the method disclosed in Japanese Patent No. 5561458 can be used.
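The method of Japanese Patent No. 5561458 is not reproduced here. For polygon (triangle) data, a standard ray-triangle test such as Möller–Trumbore illustrates the kind of computation involved; the smallest positive hit over all triangles gives the displayed intersection (the names below are illustrative):

import numpy as np

def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: intersection of the ray origin + s*direction
    (s >= 0) with the triangle (v0, v1, v2), or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    t = origin - v0
    u = (t @ p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    s = (e2 @ q) * inv
    if s < 0.0:                       # intersection behind the ray origin
        return None
    return origin + s * direction

Here origin and direction correspond to the starting point and direction of the imaging direction vector A in the coordinate system F.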

The information on the calculated intersection coordinates is output from the intersection calculation unit 46 to the output unit 47. At this timing, the CT image, which is the information indicating the surfaces constituting the inside of the patient 60, is output from the patient shape acquisition unit 41 to the output unit 47. The output unit 47 superimposes the intersection information on the CT image, which is the information indicating the surfaces constituting the inside of the patient 60, at the location corresponding to the intersection coordinates, and inputs the result to the display device 50. The input image is displayed by the display device 50 (see step S100). For example, as shown in FIG. 5, the intersection information is displayed on the three-dimensional tomographic data and on the endoscope captured image together with the optical axis of the rigid endoscope 11. By referring to the displayed image, the surgeon 75 can know which part of the patient 60 the rigid endoscope 11 is imaging.

In addition, it is desirable that the image captured by the rigid endoscope 11 also be received by the PC 40 and output from the output unit 47 to the display device 50, so that it is displayed together with the indication of where the position imaged by the rigid endoscope 11 is located.

The processing from steps S60 to S100 described above is repeated at equal intervals, such as every second. The switching in the PC 40 between the processing of steps S30 to S50 and the processing of steps S60 to S100 may be arranged, for example, so that once the coordinate axis matching processing of step S50 has been performed, the processing automatically proceeds to step S60 and the subsequent steps. Alternatively, the processing may be switched by an operation of the surgeon 75 or the like.

As described above, in the surgery support system 1 to which the actual optical axis position acquired according to the present invention is applied, it is possible to display which part of the patient 60 corresponds to the portion imaged by the rigid endoscope 11, using the information indicating the three-dimensional shape of the surface of the patient 60 and of the surfaces constituting the inside of the patient 60 obtained by the CT apparatus 30 (three-dimensional tomographic data) and the data obtained by scanning the patient 60 from the outside with the three-dimensional shape measuring apparatus 20. Therefore, according to the surgery support system 1, the above display can be performed without newly using a special endoscope. Further, according to the surgery support system 1, the above display can be performed accurately because it is not affected by a liquid such as cerebrospinal fluid in the body of the patient 60. Therefore, if the surgery support system 1 is used, surgery can be performed safely and accurately.

Further, it is not necessary to mark the patient 60 at the time of CT scan imaging (step S10), and it is convenient because CT scan imaging can be performed as usual. Further, it is not necessary to fix the patient 60 with pins or the like for alignment or the like. Even if the patient 60 is moved during the operation, the alignment can be easily performed. Each component of the surgery support system 1 is relatively inexpensive and can be realized at low cost.

The above is a description of the surgery support system 1 to which the actual optical axis position of the rigid endoscope 11 acquired by the optical axis position measuring system according to the present invention is applied. As described above, if the actual optical axis position of the rigid endoscope 11 is not accurately obtained, the site of the patient 60 imaged by the rigid endoscope 11 cannot be accurately displayed, and surgery cannot be performed safely and accurately. The present invention is an optical axis position measuring system and an optical axis position measuring method capable of accurately acquiring the actual optical axis position of the rigid endoscope 11 using the optical axis position measuring device 80, which is lightweight, has a simple structure, and is excellent in operability and convenience.

The optical axis position measuring system includes a three-dimensional shape measuring apparatus, the optical axis position measuring device 80, and a PC (Personal Computer) in which a program for performing the arithmetic processing described later is installed. As shown in FIG. 1, the three-dimensional shape measuring apparatus 20 and the PC 40 of the surgery support system 1 can be used as they are for the three-dimensional shape measuring apparatus and the PC. However, as long as three-dimensional shape measurement can be performed with high accuracy, a three-dimensional shape measuring apparatus different from the three-dimensional shape measuring apparatus 20 of the surgery support system 1 can be used, and as long as a program for performing the arithmetic processing described later is installed, a PC different from the PC 40 of the surgery support system 1 can also be used. The optical axis position measuring device 80 in the optical axis position measuring system according to the embodiment of the present invention and the optical axis position measuring method using it will be described below with reference to FIGS. In this description, it is assumed that the three-dimensional shape measuring apparatus 20 and the PC 40 are the same as those of the surgery support system 1.

The optical axis position measuring device 80 supports the rigid endoscope 11 in order to calibrate the actual optical axis position of the rigid endoscope 11, which is an elongated instrument, that is, in order to measure the three-dimensional relative positional relationship between the actual optical axis position and the marker sphere 12, which is the position and orientation detection marker. Then, with the rigid endoscope 11 supported, the three-dimensional shape is measured by the three-dimensional shape measuring apparatus 20, and the data input from the three-dimensional shape measuring apparatus 20 to the PC 40 is processed by the PC 40, whereby three-dimensional data (a group of coordinate value data consisting of x, y, z) representing the three-dimensional shape of the optical axis position measuring device 80 and the rigid endoscope 11 is acquired.

First, the structure of the optical axis position measuring device 80 will be described. As shown in FIGS. 6 to 9, the optical axis position measuring device 80 includes a base member 81 made of a metal plate such as stainless steel. A pair of arm portions 84 are formed on both sides of the front end of the base member 81, and a lightening portion 83 is provided between the arm portions 84 for weight reduction and the like. A V-shaped groove-shaped guide portion 95 is provided at the center of the main surface 82, which is the surface located on the upper side of the base member 81, so as to extend in the front-rear direction of the base member 81. The guide portion 95 is a portion for supporting the lens barrel 15 of the rigid endoscope 11 so that it is movable along its longitudinal direction. When the lens barrel 15 of the rigid endoscope 11 is supported by the guide portion 95, the central axis of the lens barrel 15 is parallel to the main surface 82 and also lies parallel to a plane perpendicular to both the main surface 82 and an imaged surface 86 described later.

The lower end side of the rectangular target member 85 is supported and fixed to the tips of the pair of arm portions 84 of the base member 81. The target member 85 is made of a metal plate material such as stainless steel colored black overall and is provided in a state inclined with respect to the main surface 82 of the base member 81; the inclination angle θ is set within a range of, for example, 30° to 70° (60° in the present embodiment) (see FIG. 7C). The surface of the target member 85 that faces obliquely downward is the imaged surface 86, and a predetermined target image 88 that can be imaged by the rigid endoscope 11 is drawn at the center of the imaged surface 86. In the present embodiment, the target image 88 is, as shown in FIG. 6, a checker pattern in which square black and white checker portions are arranged in two columns. Each portion of the checker pattern is provided with a character (number), and the pattern extends in a slender straight band from the lower end side to the upper end side of the target image 88. Note that the center line of the target image 88 lies on the extension line of the tip side of the guide portion 95 described above. Further, the attachment of the target member 85 to the base member 81 is adjusted so that, when the lens barrel 15 of the rigid endoscope 11 having the fixed cross-sectional diameter is supported by the guide portion 95, the center line of the lens barrel 15 intersects a corner of the black and white checker pattern on the center line of the target image 88. This adjustment is performed at the time of manufacturing the optical axis position measuring device 80 by supporting, in the guide portion 95, an elongated cylindrical body that has the same cross-sectional diameter and whose tip is sharpened at the position of its central axis, and bringing the tip into contact with the imaged surface 86.

At a plurality of positions spaced apart on the main surface 82 of the base member 81, optical axis position measuring markers 91 are provided whose fixed-point coordinates can be obtained by processing the three-dimensional data obtained by three-dimensional shape measurement with the three-dimensional shape measuring apparatus 20 and whose three-dimensional relative positional relationship with the target image 88 is known. In the present embodiment, three white marker spheres are provided as the optical axis position measuring markers 91. Specifically, when the surface 87 of the target member 85 facing obliquely upward is viewed from the front, the base member 81 has portions projecting on both the left and right sides of the target member 85, and the three marker spheres are placed on these portions. Therefore, when the three-dimensional shape measuring apparatus 20 scans the optical axis position measuring device 80 from that direction, the optical axis position measuring markers 91 are not easily hidden by the target member 85, the three-dimensional shapes of the optical axis position measuring markers 91 and of the marker sphere 12, which is the position and orientation detection marker of the rigid endoscope 11, can be measured, and it becomes possible to acquire the three-dimensional relative positional relationship between the optical axis position measuring markers 91 and the marker sphere 12.

Further, on the upper surfaces of the pair of arm portions 84 of the base member 81, side support pieces 96 made of a metal material such as stainless steel and supporting the target member 85 are provided. The side support pieces 96 are formed with a plurality of slits extending in the vertical direction at equal intervals, and the presence of these slits allows the movement distance of the lens barrel 15 of the rigid endoscope 11 to be grasped to some extent.

The optical axis position measuring device 80 is configured to have a size and weight that can be easily carried by the operator 75 with one hand.

Next, a method for measuring the three-dimensional relative positional relationship between the optical axis position of the rigid endoscope 11 and the marker sphere 12 using the optical axis position measuring device 80 will be described. In short, as shown in FIG. 8, the three-dimensional data obtained by measuring the three-dimensional shape of the optical axis position measuring device 80 with the rigid endoscope 11 supported on it is processed to obtain, in the same coordinate system, a value representing the optical axis position of the rigid endoscope 11 and the fixed-point coordinates (center coordinates) of the marker sphere 12. The value representing the optical axis position is, for example, the coordinate T on the optical axis at the distal end portion 14 of the lens barrel 15 of the rigid endoscope 11 and the vector component V parallel to the optical axis, which together constitute the imaging direction vector A described above. This method consists of the following five procedures.

(Procedure 1)
The three-dimensional relative positional relationship between the target image 88 of the optical axis position measuring device 80 and the optical axis position measuring marker 91 is acquired.
(Procedure 2)
Acquire the three-dimensional relative positional relationship between the distal end portion 14 of the rigid endoscope 11 and the marker sphere 12 when the lens barrel 15 of the rigid endoscope 11 is set in the V-shaped groove-shaped guide portion 95 of the optical axis position measuring device 80 so that the distal end portion 14 abuts on the target member 85 (the state shown in FIG. 9A).
(Procedure 3)
Acquire a point on the target image 88 corresponding to the center position of the captured image of the rigid endoscope 11 when the lens barrel 15 of the rigid endoscope 11 is supported by the V-shaped groove-shaped guide portion 95 of the optical axis position measuring device 80 and the lens barrel 15 has been moved by an appropriate distance (the states shown in FIGS. 9B and 9C).
(Procedure 4)
Acquire the three-dimensional relative positional relationship between the optical axis position measuring markers 91 and the marker sphere 12 at the time when the point on the target image 88 corresponding to the center position of the captured image of the rigid endoscope 11 is acquired in procedure 3.
(Procedure 5)
From the three-dimensional relative positional relationships obtained in procedures 1 to 4 and the point on the target image 88 corresponding to the center position of the captured image of the rigid endoscope 11, acquire the three-dimensional relative positional relationship between the optical axis position of the rigid endoscope 11 and the marker sphere 12.

First, the method for acquiring the three-dimensional relative positional relationship between the target image 88 and the optical axis position measuring markers 91 in procedure 1 will be described. Specifically, acquiring the three-dimensional relative positional relationship between the target image 88 and the optical axis position measuring markers 91 means acquiring, in the same coordinate system, the coordinates of each position in the target image 88 and the fixed-point coordinates (center coordinates) of the optical axis position measuring markers 91. However, since the positions in the target image 88 are infinite in number, it is practically impossible to obtain all of them in the same coordinate system. Therefore, the center coordinates of the optical axis position measuring markers 91 are acquired, X and Y axes are set in the imaged surface 86 so that each position in the target image 88 can be represented by x and y coordinate values, and a coordinate conversion coefficient is obtained for converting those x and y coordinate values into coordinate values of the coordinate system representing the center coordinates of the optical axis position measuring markers 91. This amounts to acquiring the three-dimensional relative positional relationship between the target image 88 and the optical axis position measuring markers 91.

Various methods are conceivable for setting the X and Y axes in the imaged surface 86. In this embodiment, the point at which the central axis of the lens barrel 15 of the rigid endoscope 11 having the fixed cross-sectional diameter intersects the imaged surface 86 when the lens barrel 15 is supported by the guide portion 95 is taken as the coordinate origin O, the center line of the target image 88, along which the boundary line of the black and white checker portions runs continuously from the upper end to the lower end, is taken as the Y axis, and the direction perpendicular to it is taken as the X axis. Hereinafter, the plane coordinate system defined by the X and Y coordinate axes set in the imaged surface 86 to represent the plane coordinates of each position of the target image 88 is referred to as the coordinate system S. The coordinate system into which the coordinate values of the coordinate system S are converted, that is, the coordinate system representing the center coordinates of the optical axis position measuring markers 91, is called the coordinate system D, and the coordinate conversion coefficient used for this coordinate conversion is called the coordinate conversion coefficient Fsd. Hereinafter, a method of acquiring the center coordinates of the optical axis position measuring markers 91 in the coordinate system D and acquiring the coordinate conversion coefficient Fsd will be described.

The guide portion 95 of the optical axis position measuring device 80 is made to support the lens barrel 15 of the rigid endoscope 11 having the predetermined cross-sectional diameter, or a long cylindrical body having the predetermined cross-sectional diameter, and the optical axis position measuring device 80 is installed at an appropriate position with respect to the three-dimensional shape measuring apparatus 20 so that the three-dimensional shapes of the lens barrel 15 (or cylindrical body), the optical axis position measuring markers 91, the imaged surface 86, and the main surface 82 of the base member 81 can be measured. Next, measurement by the three-dimensional shape measuring apparatus 20 is executed, and the three-dimensional data of the optical axis position measuring device 80 is acquired. The coordinate system of the three-dimensional shape measuring apparatus 20 at this time is the coordinate system D, and the acquired three-dimensional data is data in the coordinate system D. Using the acquired three-dimensional data, the following calculation processes (1) to (7) are executed by the PC 40.

(1) The three-dimensional data of each optical axis position measuring marker 91 is extracted from all the three-dimensional data, and the center coordinates of each optical axis position measuring marker 91 are calculated using the extracted three-dimensional data. The diameter of the optical axis position measuring markers 91 is stored in the PC 40 that performs the arithmetic processing, and the three-dimensional data is extracted based on the fact that the markers are spheres and on the numerical value of the diameter. Then, the extracted three-dimensional data is fitted to the sphere equation (x-a)^2 + (y-b)^2 + (z-c)^2 = d^2, and the center coordinates (a, b, c) are obtained by the least squares method. This method of extracting only the three-dimensional data of the optical axis position measuring markers 91 from all the three-dimensional data is a known technique, and details are described in Japanese Patent No. 3952467. The acquired center coordinates of the optical axis position measuring markers 91 are coordinates in the coordinate system D. The center coordinates are calculated and stored, and the lengths of the sides and the angles of the triangle formed by connecting the center coordinates are also calculated and stored. This is done so that each of the optical axis position measuring markers 91 can be identified from the positional relationship of the three markers, and the same is always done thereafter whenever the center coordinates of the optical axis position measuring markers 91 are acquired in another coordinate system. This makes it possible to associate the center coordinates of the three optical axis position measuring markers 91 between different coordinate systems and to calculate the coordinate conversion coefficient.
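The least-squares sphere fit itself is not formulated in the patent text. A minimal sketch, assuming the marker points have already been segmented from the scan (the linearisation used and the names below are illustrative):

import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to (N, 3) points.  Expanding
    (x-a)^2 + (y-b)^2 + (z-c)^2 = d^2 gives the linear system
    2ax + 2by + 2cz + k = x^2 + y^2 + z^2 with k = d^2 - a^2 - b^2 - c^2."""
    A = np.c_[2.0 * points, np.ones(len(points))]
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

The fitted radius can be checked against the stored marker diameter as a plausibility test on the extracted data.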

(2) The three-dimensional data of the main surface 82 of the base member 81 is extracted from all the three-dimensional data, and the normal vector of the main surface 82 of the base member 81 is calculated. This is done by performing arithmetic processing in the following order. First, from the remaining data obtained by removing the three-dimensional data extracted in (1) from all the three-dimensional data, part of the three-dimensional data in the vicinity of the center coordinates of one optical axis position measuring marker 91 is extracted, substituted into the plane equation a·x + b·y + c·z + 1 = 0, and the coefficients a, b, and c are calculated by the least squares method. At this time, if the angle between the vector components connecting the center coordinates calculated in (1) and the vector (a, b, c) is substantially perpendicular, three-dimensional data whose distance from the plane given by a·x + b·y + c·z + 1 = 0 is within a predetermined minute range is extracted. If there is a predetermined number or more of the three-dimensional data extracted at this time, all the three-dimensional data extracted first are determined to belong to the main surface 82, and all the extracted three-dimensional data are substituted into the plane equation a·x + b·y + c·z + 1 = 0 to calculate the coefficients a, b, and c by the least squares method. If the vector components connecting the center coordinates calculated in (1) and the vector (a, b, c) are not substantially perpendicular, or if the extracted three-dimensional data is less than the predetermined number, it is determined that the three-dimensional data extracted first includes data other than the main surface 82, and data in the vicinity of the initially extracted three-dimensional data is extracted and the same processing is performed. This process is repeated until the coefficients a, b, and c are calculated. The calculated a, b, and c are the components (a, b, c) of the normal vector of the main surface 82. Note that there are two candidate normal vectors, (a, b, c) and (-a, -b, -c), and the normal vector pointing upward from the main surface 82 is selected. For this purpose, the angle between a vector directed from the origin coordinate of the three-dimensional shape measuring apparatus 20 to any one of the optical axis position measuring markers 91 and each of the normal vectors (a, b, c) and (-a, -b, -c) is calculated, and the normal vector whose angle is less than 90 degrees is selected.
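The plane fit of (2) can likewise be written as one linear least-squares solve. A minimal sketch, assuming a candidate set of main-surface points has already been extracted (names are illustrative; the a·x + b·y + c·z + 1 = 0 form assumes the plane does not pass through the scanner origin):

import numpy as np

def fit_plane(points):
    """Least-squares fit of a*x + b*y + c*z + 1 = 0 to (N, 3) points.
    Returns the coefficient vector (a, b, c), which is also an (unnormalised)
    normal vector of the plane."""
    rhs = -np.ones(len(points))
    coef, *_ = np.linalg.lstsq(points, rhs, rcond=None)
    return coef

def orient_normal(normal, reference):
    """Pick the sign of the normal that makes an angle of less than 90 degrees
    with a reference direction, e.g. the vector from the scanner origin to one
    of the optical axis position measuring markers."""
    return normal if normal @ reference > 0 else -normal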

(3) The three-dimensional data of the imaged surface 86 is extracted from all the three-dimensional data, and the normal vector of the imaged surface 86 is calculated. This is done by performing arithmetic processing in the following order. First, from the remaining data excluding the three-dimensional data extracted in (1) and (2) from all the three-dimensional data, part of the three-dimensional data at positions more than a predetermined distance away from the vicinity of the center coordinates of the optical axis position measuring markers 91 is extracted, substituted into the plane equation e·x + f·y + g·z + 1 = 0, and the coefficients e, f, and g are calculated by the least squares method. At this time, if the angle formed by the normal vector (a, b, c) calculated in (2) and the vector (e, f, g) is the angle θ formed by the main surface 82 of the base member 81 and the imaged surface 86 or (180° - θ) (in this embodiment, approximately 60° or approximately 120°), three-dimensional data whose distance from the plane given by e·x + f·y + g·z + 1 = 0 is within a predetermined minute range is extracted from all the three-dimensional data. If there is a predetermined number or more of the extracted three-dimensional data at this time, it is determined that all of the first extracted three-dimensional data belong to the imaged surface 86, and all the extracted three-dimensional data are substituted into the plane equation e·x + f·y + g·z + 1 = 0 to calculate the coefficients e, f, and g by the least squares method. If the normal vector (a, b, c) calculated in (2) and the vector (e, f, g) do not form the predetermined angle, or if the extracted three-dimensional data is less than the predetermined number, it is determined that the three-dimensional data extracted first includes data other than the imaged surface 86, and data in the vicinity of the three-dimensional data extracted first is extracted and the same processing is performed. This process is repeated until the coefficients e, f, and g are calculated. The calculated e, f, and g are the components (e, f, g) of the normal vector of the imaged surface 86. There are two candidate normal vectors, (e, f, g) and (-e, -f, -g), and the normal vector directed from the imaged surface 86 toward the main surface 82 of the base member 81 is selected. To do this, the angle between the normal vector (a, b, c) calculated in (2) and each of the normal vectors (e, f, g) and (-e, -f, -g) is calculated, and the normal vector whose angle exceeds 90° is selected. In this embodiment, since the angle is either approximately 60° or approximately 120°, the direction giving approximately 120° is selected.

(4) The cross product of the normal vector (a, b, c) and the normal vector (e, f, g) is calculated, and the unit vector αd of this cross-product vector is calculated. This gives, in the coordinate system D, the unit vector αd of the normal vector of a plane perpendicular to both the main surface 82 of the base member 81 and the imaged surface 86. This unit vector αd is parallel to the X axis set on the imaged surface 86.
(5) The cross product of the unit vector αd calculated in (4) and the normal vector (e, f, g) calculated in (3) is calculated, and the unit vector βd of this cross-product vector is calculated. This gives the unit vector βd of a vector parallel to the center line of the target image 88 on the imaged surface 86. This unit vector βd is parallel to the Y axis set on the imaged surface 86.
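Steps (4) and (5) amount to two cross products followed by normalisation. A minimal numpy sketch (the function and argument names are illustrative; the sign of each result depends on which normal directions were selected in (2) and (3)):

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def axis_unit_vectors(normal_main, normal_target):
    """normal_main: normal vector (a, b, c) of the main surface 82;
    normal_target: normal vector (e, f, g) of the imaged surface 86.
    Returns (alpha_d, beta_d), unit vectors parallel to the X and Y axes of
    the coordinate system S, expressed in the coordinate system D."""
    n_main = np.asarray(normal_main, dtype=float)
    n_target = np.asarray(normal_target, dtype=float)
    alpha_d = unit(np.cross(n_main, n_target))
    beta_d = unit(np.cross(alpha_d, n_target))
    return alpha_d, beta_d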

(6) The coordinate value Od of the point at which the central axis of the lens barrel 15 of the rigid endoscope 11 having the fixed cross-sectional diameter r0, or of the cylindrical body having the fixed cross-sectional diameter r0, intersects the imaged surface 86 is calculated. This is done by performing arithmetic processing in the following order. First, a plane equation is calculated that has the unit vector αd as its normal vector and that contains the midpoint of the center coordinates of the two optical axis position measuring markers 91 located on the front right and front left sides toward the imaged surface 86. Next, from all the three-dimensional data, the three-dimensional data of the optical axis position measuring markers 91, the main surface 82, and the imaged surface 86 extracted in (1) to (3) are excluded, and from the remaining data, three-dimensional data within a set distance from the calculated plane is extracted. As a result, data including the three-dimensional data of the lens barrel 15 of the rigid endoscope 11, or of the cylindrical body having the fixed cross-sectional diameter, is extracted. Next, the cross product of the unit vector αd and the normal vector of the main surface 82 is calculated; with this vector as the normal vector, a plane containing the midpoint of the center coordinates of the optical axis position measuring markers 91 on the front right and front left sides toward the imaged surface 86, and a plurality of plane equations obtained by moving this plane by predetermined distances in the direction of the normal vector, are calculated. Next, for each of these planes, three-dimensional data whose distance from the plane is within a minute range is extracted from the previously extracted three-dimensional data. Thereby, three-dimensional data corresponding to the cut surfaces obtained when the lens barrel 15 of the rigid endoscope 11, or the cylindrical body having the fixed cross-sectional diameter, is sliced at a plurality of locations along the central axis direction is extracted. Next, the three-dimensional data extracted for each plane is fitted to the sphere equation (x-a)^2 + (y-b)^2 + (z-c)^2 = r0^2, and the center coordinates (a, b, c) are obtained by the least squares method. Since the plurality of center coordinates thus obtained can be regarded as coordinates on the central axis of the lens barrel 15 or of the cylindrical body, the equation of a straight line passing through these coordinates is calculated by the least squares method. Next, by solving the simultaneous equations consisting of the calculated straight line equation and the plane equation of the imaged surface 86 calculated in (3), the coordinate value Od of the point at which the central axis of the lens barrel 15 or cylindrical body having the fixed cross-sectional diameter r0 intersects the imaged surface 86 is acquired.
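The last two steps of (6), fitting a straight line through the slice centers and intersecting it with the plane of the imaged surface 86, are not written out in the patent. A minimal sketch (the SVD-based line fit is one common least-squares formulation; names are illustrative):

import numpy as np

def fit_line(points):
    """Least-squares 3D line through (N, 3) points: returns a point on the
    line (the centroid) and a unit direction (the principal axis)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def line_plane_intersection(p0, d, plane_coef):
    """Intersection of the line p0 + s*d with the plane a*x + b*y + c*z + 1 = 0,
    where plane_coef = (a, b, c)."""
    a = np.asarray(plane_coef, dtype=float)
    s = -(a @ p0 + 1.0) / (a @ d)
    return p0 + s * d

Passing the fitted slice centers to fit_line and the (e, f, g) coefficients from (3) to line_plane_intersection yields a value corresponding to Od.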

(7) Using the unit vector αd, the unit vector βd, and the coordinate value Od of the point at which the central axis of the lens barrel 15 or cylindrical body having the fixed cross-sectional diameter r0 intersects the imaged surface 86, obtained in (4) to (6), the coordinate conversion coefficient Fsd for converting coordinate values in the coordinate system S into coordinate values in the coordinate system D is calculated. The unit vector αd, the unit vector βd, and the coordinate value Od obtained in (4) to (6) are values in the coordinate system D. In the coordinate system S, the unit vector α is the unit vector (1, 0, 0) parallel to the X axis, the unit vector β is the unit vector (0, 1, 0) parallel to the Y axis, and the coordinate value O is the coordinate origin (0, 0, 0). Since the coordinate values of two vector components and one point are thus obtained in both the coordinate system D and the coordinate system S, the coordinate conversion coefficient Fsd can be calculated by substituting these values into the coordinate conversion equations and solving the simultaneous equations. This calculation is a known technique, and details are described in, for example, Japanese Patent No. 4291178.
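The simultaneous-equation solution of Japanese Patent No. 4291178 is not reproduced here. As a minimal sketch, if αd and βd are treated as orthonormal, an equivalent transform can be assembled directly as a homogeneous matrix whose columns are the S axes expressed in D (names are illustrative):

import numpy as np

def build_Fsd(alpha_d, beta_d, origin_d):
    """4x4 homogeneous transform taking coordinate-system-S values to
    coordinate-system-D values: the X axis maps to alpha_d, the Y axis to
    beta_d, the Z axis to their cross product, and (0, 0, 0) to origin_d."""
    x = np.asarray(alpha_d, dtype=float)
    y = np.asarray(beta_d, dtype=float)
    z = np.cross(x, y)                        # completes a right-handed frame
    F = np.eye(4)
    F[:3, 0], F[:3, 1], F[:3, 2] = x, y, z
    F[:3, 3] = np.asarray(origin_d, dtype=float)
    return F

def apply_transform(F, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    return (F @ np.r_[np.asarray(p, dtype=float), 1.0])[:3]

apply_transform(F, p) then maps a point given in the coordinate system S into the coordinate system D, which is the role the coordinate conversion coefficient Fsd plays in the text.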

Thereby, the center coordinates of the optical axis position measuring markers 91 in the coordinate system D and the coordinate conversion coefficient Fsd for converting coordinate values in the coordinate system S into coordinate values in the coordinate system D can be obtained. That is, the three-dimensional relative positional relationship between the target image 88 of the optical axis position measuring device 80 and the optical axis position measuring markers 91 can be acquired. In order to obtain the coordinates of a designated position in the captured image of the target image 88 as coordinate values in the coordinate system S, the coordinate values in the coordinate system S of the corners of the black and white checker pattern are stored in advance in the PC (Personal Computer) 40. These coordinate values are obtained by taking as the coordinate origin the corner of the black and white checker pattern at which the central axis of the lens barrel 15 or cylindrical body having the fixed cross-sectional diameter r0 described above intersects the imaged surface 86, taking the center line of the target image 88 as the Y axis and the axis perpendicular to the Y axis at the coordinate origin as the X axis, and accurately measuring the X-axis direction distance and the Y-axis direction distance from the coordinate origin for each corner of the black and white checker pattern. Further, if, when the target image 88 is drawn on the imaged surface 86, the length of each side of each black and white checker portion can be drawn with high accuracy, the set values can be used as the X-axis direction and Y-axis direction distances as they are.

Procedure 1 is preparation for calibrating the optical axis position of the rigid endoscope 11; once it has been performed and the necessary data has been stored in the PC (Personal Computer) 40, it does not need to be performed again. That is, once the preparatory work of procedure 1 is completed, the work from procedure 2 onward can be repeated, and the optical axis position can be calibrated for each rigid endoscope 11.

Next, the method in procedure 2 for acquiring the three-dimensional relative positional relationship between the distal end portion 14 of the rigid endoscope 11 and the marker sphere 12 when the lens barrel 15 of the rigid endoscope 11 is set in the V-shaped groove-shaped guide portion 95 of the optical axis position measuring device 80 so that the distal end portion 14 contacts the target member 85 (the state shown in FIG. 9A) will be described. Acquiring the three-dimensional relative positional relationship between the distal end portion 14 of the rigid endoscope 11 and the marker sphere 12 specifically means obtaining, in the same coordinate system, the coordinates of the distal end portion 14 of the rigid endoscope 11 and the center coordinates of the marker sphere 12. This common coordinate system is defined as the coordinate system P. As shown in FIG. 3, when the actual optical axis of the rigid endoscope 11 does not pass through the intersection of the surface of the distal end portion 14 and the central axis of the lens barrel 15, instead of the coordinates of the distal end portion 14 of the rigid endoscope 11, the equation of a plane that contains the coordinates of the distal end portion 14 and is perpendicular to the central axis of the lens barrel 15 of the rigid endoscope 11 is acquired.

The lens barrel 15 of the rigid endoscope 11 to be calibrated is supported by the guide portion 95 of the optical axis position measuring device 80 so that the distal end portion 14 abuts on the target member 85 (the state shown in FIG. 9A), and the optical axis position measuring device 80 is installed at an appropriate position with respect to the three-dimensional shape measuring apparatus 20 so that the three-dimensional shapes of the lens barrel 15, the optical axis position measuring markers 91, and the marker sphere 12 can be measured. If the lens barrel 15 has the fixed cross-sectional diameter r0, only the three-dimensional shapes of the optical axis position measuring markers 91 and the marker sphere 12 need to be measurable; therefore, as shown in FIG., the device may be installed at a position where the three-dimensional shape measurement is performed from the side of the surface 87 facing obliquely upward. Next, measurement by the three-dimensional shape measuring apparatus 20 is executed, and the three-dimensional data of the optical axis position measuring device 80 and the rigid endoscope 11 is acquired. The coordinate system of the three-dimensional shape measuring apparatus 20 at this time is the coordinate system P, and the acquired three-dimensional data is data in the coordinate system P. Using the acquired three-dimensional data, the following calculation processes (1) to (4) are executed.

(1) The three-dimensional data of each optical axis position measuring marker 91 is extracted from all the three-dimensional data, and the center coordinates of each optical axis position measuring marker 91 are calculated using the extracted three-dimensional data. This is the same as the calculation process performed in (1) of procedure 1.
(2) The three-dimensional data of each marker sphere 12 is extracted from all the three-dimensional data, and the center coordinates of each marker sphere 12 are calculated using the extracted three-dimensional data. This is also the same as the calculation process performed in (1) of procedure 1, except that the diameter information of the sphere used for the extraction differs from that in (1) of procedure 2.

(3) Using the center coordinates of the optical axis position measuring markers 91 obtained in (1) of procedure 1 and (1) of procedure 2, and the vector component in the central axis direction of the lens barrel 15 or cylindrical body having the fixed cross-sectional diameter obtained in the course of the calculation processing of (6) of procedure 1, the vector component in the central axis direction of the lens barrel 15 of the rigid endoscope 11 in the coordinate system P is calculated. This is done by performing arithmetic processing in the following order. First, by substituting the center coordinates of the optical axis position measuring markers 91 in the coordinate system P acquired in (1) of procedure 2 and the center coordinates of the optical axis position measuring markers 91 in the coordinate system D acquired in (1) of procedure 1 into the coordinate conversion equations and solving the simultaneous equations, the coordinate conversion coefficient Fdp for converting values in the coordinate system D into values in the coordinate system P is obtained. Next, since the vector component in the central axis direction of the lens barrel 15 having the fixed cross-sectional diameter r0 in the coordinate system D, obtained in the course of the calculation processing of (6) of procedure 1, is the same even if the cross-sectional diameter of the lens barrel 15 differs, this vector component is coordinate-converted by the coordinate conversion coefficient Fdp into a vector component in the coordinate system P. In this way, the vector component in the central axis direction of the lens barrel 15 of the rigid endoscope 11 is acquired.

(4) The coordinate Tp on the central axis at the distal end portion 14 is calculated using the data acquired in procedure 1 together with the coordinate conversion coefficient Fdp and the vector component in the central axis direction of the lens barrel 15 acquired in (3) of procedure 2, or using the data acquired in procedures 1 and 2 together with the three-dimensional data acquired in procedure 2. The processing method differs depending on whether the cross-sectional diameter of the lens barrel 15 of the rigid endoscope 11 is the fixed cross-sectional diameter r0, or whether the cross-sectional diameter is different or unknown. When the cross-sectional diameter of the lens barrel 15 of the rigid endoscope 11 is the fixed cross-sectional diameter r0, first, the coordinate Od in the coordinate system D of the origin of the coordinate system S on the imaged surface 86 acquired in (6) of procedure 1 is coordinate-converted by the coordinate conversion coefficient Fdp acquired in (3) of procedure 2 to obtain the coordinate Op in the coordinate system P. Next, the distance L from the coordinate Op to the coordinate Tp on the central axis at the distal end portion 14 is calculated by the formula L = r0 / tan θ. Here θ is the angle formed by the main surface 82 and the imaged surface 86; the value acquired in the course of the calculation processing of (3) of procedure 1 is used, or, if the optical axis position measuring device 80 has been manufactured with high accuracy according to the set value, the set value is used. Next, the unit vector γp of the vector in the central axis direction of the lens barrel 15 acquired in (3) of procedure 2 is calculated. If the vector γp points toward the imaged surface 86, the vector component L × γp is subtracted from the coordinate Op; if it points the other way, it is added. Thereby, the coordinate Tp on the central axis at the distal end portion 14 is obtained.
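The offset step at the end of (4) is a one-line calculation. A minimal sketch (parameter names are illustrative; r0 is the value used in the source formula L = r0 / tan θ, and toward_target encodes which way γp points):

import numpy as np

def tip_coordinate(op, gamma_p, r0, theta_deg, toward_target):
    """Coordinate Tp on the barrel central axis at the distal end portion 14.
    op: coordinate Op in the coordinate system P, gamma_p: unit vector along
    the barrel central axis, theta_deg: angle between the main surface 82 and
    the imaged surface 86, toward_target: True if gamma_p points toward the
    imaged surface 86."""
    L = r0 / np.tan(np.radians(theta_deg))
    return op - L * gamma_p if toward_target else op + L * gamma_p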

If the cross-sectional diameter r of the lens barrel 15 of the rigid endoscope 11 is not the fixed cross-sectional diameter r0, or if the cross-sectional diameter r is unknown, first, using the center coordinates of the optical axis position measuring markers 91 acquired in (1) of procedure 2 and the vector in the central axis direction of the lens barrel 15 acquired in (3) of procedure 2, the same arithmetic processing as in (6) of procedure 1 is performed, and the three-dimensional data corresponding to the cut surfaces obtained when the lens barrel 15 is sliced at a plurality of locations is extracted. Next, the unit vector αd acquired in (4) of procedure 1 is coordinate-converted by the coordinate conversion coefficient Fdp acquired in (3) of procedure 2 to obtain the unit vector αp in the coordinate system P, and the unit vector γp of the vector in the central axis direction of the lens barrel 15 acquired in (3) of procedure 2 is calculated. Next, a coordinate conversion coefficient Fpk is calculated that converts the unit vector αp, the unit vector γp, and an arbitrary piece of the three-dimensional data (coordinates) into the vector (1, 0, 0), the vector (0, 0, 1), and the coordinates (0, 0, 0), respectively, and the extracted three-dimensional data is coordinate-converted by the coordinate conversion coefficient Fpk. The converted three-dimensional data takes values in a coordinate system whose Z-axis direction is parallel to the central axis direction of the lens barrel 15 of the rigid endoscope 11; within each sliced portion of the lens barrel 15, the z coordinate values are substantially the same, so the data can be processed as plane coordinates using the x and y coordinate values. Next, for each circular cross section, the three-dimensional data is fitted to the circle equation (x-a)^2 + (y-b)^2 = r^2, and the center coordinates (a, b) and the radius value r are obtained by the least squares method. If the calculated average value rav of the radius values r matches the fixed cross-sectional diameter r0 within an allowable range, the processing described above for the case where the cross-sectional diameter of the lens barrel 15 is the fixed cross-sectional diameter r0 is performed.
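The circle fit can be linearised in the same way as the sphere fit in (1) of procedure 1. A minimal sketch for one sliced cross section (names are illustrative):

import numpy as np

def fit_circle(xy):
    """Least-squares circle fit to (N, 2) points in a plane.  Expanding
    (x-a)^2 + (y-b)^2 = r^2 gives 2ax + 2by + k = x^2 + y^2 with
    k = r^2 - a^2 - b^2."""
    xy = np.asarray(xy, dtype=float)
    A = np.c_[2.0 * xy, np.ones(len(xy))]
    b = (xy ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:2]
    radius = np.sqrt(sol[2] + center @ center)
    return center, radius

Averaging the radii returned for the individual slices gives a value corresponding to rav.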

If the average value rav does not match the fixed cross-sectional diameter r0 within the allowable range, the substantially identical z coordinate values within each sliced portion of the lens barrel 15 are averaged, the average z coordinate value z' is combined with the center coordinates (a, b) of the corresponding circle to form the three-dimensional coordinates (a, b, z'), and these coordinate values are coordinate-converted into coordinate values in the coordinate system P by the coordinate conversion coefficient Fkp, which is the inverse of the coordinate conversion coefficient Fpk. Next, since the obtained center coordinates can be regarded as coordinates on the central axis of the lens barrel 15 of the rigid endoscope 11 in the coordinate system P, the equation of a straight line passing through these coordinates is calculated by the least squares method. Next, the plane equation of the imaged surface 86 acquired in (3) of procedure 1 is coordinate-converted by the coordinate conversion coefficient Fdp acquired in (3) of procedure 2 to obtain the plane equation in the coordinate system P, and by solving the simultaneous equations consisting of the calculated straight line equation and the plane equation of the imaged surface 86, the coordinate value Cp of the point at which the central axis of the lens barrel 15 intersects the imaged surface 86 is obtained. Next, the distance L from the coordinate Cp to the coordinate on the central axis at the distal end portion 14 is calculated by the formula L = rav / tan θ. As described above, θ is the angle formed by the main surface 82 and the imaged surface 86. Next, taking into account the direction of the unit vector γp already acquired, the vector component L × γp is subtracted from, or added to, the coordinate Cp. Thereby, the coordinate Tp on the central axis at the distal end portion 14 is obtained.

Alternatively, instead of the coordinates of the distal end portion 14 of the rigid endoscope 11 themselves, a formula of a plane that contains the coordinates of the distal end portion 14 of the rigid endoscope 11 and is perpendicular to the central axis of the barrel 15 of the rigid endoscope 11 may be obtained. In this case, the vector in the central axis direction of the lens barrel 15 obtained in (3) of procedure 2 is used as the normal vector, and a plane formula containing the coordinate Tp on the central axis at the distal end portion 14 may be calculated.

Next, a method of acquiring, in procedure 3, a point on the target image 88 corresponding to the center position of the captured image of the rigid endoscope 11 in the state where the barrel 15 of the rigid endoscope 11 is supported by the V-shaped groove-shaped guide portion 95 of the optical axis position measuring device 80 and the lens barrel 15 has been moved by an appropriate distance (the state shown in FIG. 9B) will be described. When the imaging system of the rigid endoscope 11 is operated with the lens barrel 15 moved by an appropriate distance, a captured image of the target image 88 by the rigid endoscope 11 is obtained; FIG. 10A shows an image created directly from the imaged data. By performing distortion correction processing on the imaged data, the captured image becomes an appropriate image as shown in FIG. 10B. By processing the image data of FIG. 10B, the center line, which is the line where the black and white checker pattern switches in the target image 88, is detected. This center line corresponds to the Y axis of the coordinate system S. Next, the image data are processed to detect the image positions of the corners of the black and white checker pattern. Then, the coordinate values of the coordinate system S stored in advance are applied to these corner image positions. When applying these coordinate values, it is necessary to recognize what number each black and white checker pattern is. For this purpose, the operator may designate one of the black and white checker patterns on the captured image from the input device and input what number that pattern is; however, as the distal end portion 14 of the rigid endoscope 11 moves away from the imaged surface 86, the captured image of the target image 88 becomes smaller, making it difficult to recognize the numbers assigned to the black and white checker patterns. Therefore, while the distal end portion 14 of the rigid endoscope 11 is only slightly separated from the imaging target surface 86, the operator designates one of the black and white checker patterns on the captured image from the input device and inputs what number that pattern is. Then, as the distal end portion 14 of the rigid endoscope 11 is moved away from the imaged surface 86, the image data are processed and the designated black and white checker pattern is tracked, so that when the lens barrel 15 has been moved by an appropriate distance, it is possible to recognize where the designated black and white checker pattern is.
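A rough sketch of the distortion correction and corner detection described above, written against OpenCV for a generic chessboard-style target; the embodiment's numbered half-checker pattern with a center line would need its own detector, and the camera intrinsics, distortion coefficients, file name, and pattern size below are all assumptions:

```python
import cv2
import numpy as np

# Assumed intrinsics/distortion (would come from a prior calibration of the endoscope).
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

img = cv2.imread("captured_target.png")              # hypothetical captured frame
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (7, 7))   # pattern size is an assumption
if found:
    # Refine the detected corner positions to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
```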

If the rigid endoscope 11 is a direct-viewing endoscope, the designated black and white checker pattern remains in the captured image throughout the movement of the barrel 15 of the rigid endoscope 11; however, if the rigid endoscope 11 is a side-viewing endoscope, the designated black and white checker pattern may disappear from the captured image. Therefore, as shown in FIG. 12, tracking processing is performed that also includes the checker patterns above and below the designated black and white checker pattern. That is, when the designated black and white checker pattern moves above or below a predetermined position in the image, the tracking target is switched to the checker pattern one below or one above it, and the input pattern number is increased or decreased by one accordingly. As a result, even when the rigid endoscope 11 is a perspective endoscope or a side-viewing endoscope, it is possible to recognize what number the black and white checker pattern being tracked is when the lens barrel 15 has been moved by an appropriate distance.
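A minimal sketch of the pattern-number bookkeeping described above; the margin thresholds and the sign of the adjustment depend on the numbering convention of the target image and are assumed here for illustration:

```python
def update_tracked_pattern(tracked_number, center_y, image_height, margin_frac=0.15):
    """If the tracked checker pattern drifts past the upper or lower margin of the
    image, the caller switches the tracking window to the neighbouring pattern and
    this helper adjusts the stored pattern number by one (sign convention assumed)."""
    if center_y < margin_frac * image_height:            # near the top edge
        return tracked_number + 1                         # assumed: neighbour above
    if center_y > (1.0 - margin_frac) * image_height:     # near the bottom edge
        return tracked_number - 1                         # assumed: neighbour below
    return tracked_number
```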

After the coordinate values of the coordinate system S stored in advance have been applied to the image positions of the corners of the black and white checker pattern, the position of the center point of the captured image is obtained; that is, the x and y coordinate values of the center point of the captured image are calculated in the coordinate system S, which is the plane coordinate system defined on the imaged surface 86. This is possible because the center line of the target image 88 is the Y axis and the coordinate values of the corner positions of the black and white checker pattern are known, so the center point can be calculated from its position in the horizontal direction relative to the center line and in the vertical direction relative to the known corner positions. The coordinate value of the center point of the captured image thus acquired is the coordinate value Gs of the point where the optical axis of the rigid endoscope 11 intersects the imaged surface 86, expressed in the coordinate system S.
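One generic way to sketch the mapping from image pixels to the plane coordinate system S (the embodiment instead uses the center line and the known corner coordinates directly) is to estimate a homography between detected corner pixels and their known S coordinates; the pixel positions, S coordinates, and image size below are illustrative assumptions:

```python
import cv2
import numpy as np

# Detected corner pixel positions and their known coordinates in S (values assumed).
corners_px = np.array([[310, 120], [410, 118], [312, 220], [412, 219]], dtype=np.float32)
corners_S  = np.array([[0.0, 40.0], [10.0, 40.0], [0.0, 30.0], [10.0, 30.0]], dtype=np.float32)

H, _ = cv2.findHomography(corners_px, corners_S)

h_img, w_img = 480, 640                                   # assumed image size
center_px = np.array([[[w_img / 2.0, h_img / 2.0]]], dtype=np.float32)
Gs = cv2.perspectiveTransform(center_px, H)[0, 0]         # image center expressed in S
```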

Next, a method of acquiring, in procedure 4, the three-dimensional relative positional relationship between the optical axis position measuring markers 91 and the marker spheres 12 in the state where the point on the target image 88 corresponding to the center position of the captured image of the rigid endoscope 11 has been acquired in procedure 3 will be described. Acquiring the three-dimensional relative positional relationship between the optical axis position measurement markers 91 and the marker spheres 12 specifically means obtaining the center coordinates of the optical axis position measurement markers 91 and the center coordinates of the marker spheres 12 in the same coordinate system. This same coordinate system is defined as a coordinate system W.

In the state where the point on the target image 88 corresponding to the center position of the captured image of the rigid endoscope 11 has been acquired in procedure 3, measurement by the three-dimensional shape measuring apparatus 20 is executed, and three-dimensional data of the optical axis position measuring device 80 and the rigid endoscope 11 are acquired. The coordinate system of the three-dimensional shape measuring apparatus 20 at this time is the coordinate system W, and the acquired three-dimensional data are data in the coordinate system W. The following calculation processes (1) and (2) are executed using the acquired three-dimensional data.

(1) The three-dimensional data of each optical axis position measuring marker 91 are extracted from all the three-dimensional data, and the center coordinates of each optical axis position measuring marker 91 are calculated using the extracted three-dimensional data. This is the same as the arithmetic processing performed in (1) of procedure 1 and (1) of procedure 2.
(2) The three-dimensional data of each marker sphere 12 are extracted from all the three-dimensional data, and the center coordinates of each marker sphere 12 are calculated using the extracted three-dimensional data. This is the same as the arithmetic processing performed in (2) of procedure 2. A minimal sketch of such a sphere-center fit is given below.
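A minimal sketch of recovering a sphere center from extracted surface points by linear least squares; the exact extraction and fitting steps of the embodiment are not restated here, and the variable names are illustrative:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: expands (x-a)^2 + (y-b)^2 + (z-c)^2 = R^2
    into a linear system in (a, b, c, k), where k = R^2 - a^2 - b^2 - c^2."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    rhs = x**2 + y**2 + z**2
    (a, b, c, k), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(k + a**2 + b**2 + c**2)
    return np.array([a, b, c]), radius

# center, radius = fit_sphere(points_of_one_marker_sphere)   # points in coordinate system W
```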

Next, a method of obtaining, in procedure 5, the three-dimensional relative positional relationship between the position of the optical axis of the rigid endoscope 11 and the marker spheres 12 from the three-dimensional relative positional relationships obtained in procedures 1 to 4 and the point on the target image 88 corresponding to the center position of the captured image of the rigid endoscope 11 will be described. Obtaining the three-dimensional relative positional relationship between the position of the optical axis of the rigid endoscope 11 and the marker spheres 12 specifically means obtaining, in the same coordinate system, a value representing the position of the optical axis of the rigid endoscope 11 and the center coordinates of the marker spheres 12; the value representing the position of the optical axis of the rigid endoscope 11 consists of the components of a vector V parallel to the optical axis of the rigid endoscope 11 and the coordinate value of a point T on the optical axis at the distal end portion 14 of the rigid endoscope 11. For this purpose, the following calculation processes (1) to (3) are performed.

(1) The center coordinates of the marker spheres 12 in the coordinate system P obtained in (2) of procedure 2 and the center coordinates of the marker spheres 12 in the coordinate system W obtained in (2) of procedure 4 are substituted into the coordinate conversion equation, and the simultaneous equations are solved to obtain a coordinate conversion coefficient Fwp for converting values in the coordinate system W into values in the coordinate system P. Then, the center coordinates of the optical axis position measuring markers 91 in the coordinate system W obtained in (1) of procedure 4 are coordinate-transformed using the coordinate conversion coefficient Fwp to obtain coordinates based on the coordinate system P. (A generic numerical sketch of estimating such a coordinate conversion from corresponding points is given after (3) below.)

(2) The center coordinates of the optical axis position measurement markers 91 in the coordinate system P obtained in (1) of procedure 5 and the center coordinates of the optical axis position measurement markers 91 in the coordinate system D obtained in (1) of procedure 1 are substituted into the coordinate conversion equation, and the simultaneous equations are solved to obtain a coordinate conversion coefficient Fdp′ for converting values in the coordinate system D into values in the coordinate system P. The center coordinates of the optical axis position measuring markers 91 in the coordinate system P obtained in (1) of procedure 5 are coordinates in the coordinate system P under the assumption that the position of the marker spheres 12 does not change between procedure 2 and procedure 4; they therefore differ from the center coordinates of the optical axis position measuring markers 91 in the coordinate system P obtained in procedure 2, and accordingly the coordinate conversion coefficient Fdp′ differs from the coordinate conversion coefficient Fdp.

(3) Since the coordinate Gs of the center point of the captured image acquired in procedure 3 is a plane coordinate in the coordinate system S, the coordinate Gs is converted into a three-dimensional coordinate Gp in the coordinate system P by coordinate conversion. For this purpose, first, the coordinate Gs of the center point of the acquired captured image is treated as the three-dimensional coordinate (x, y, 0) and coordinate-transformed by the coordinate conversion coefficient Fsd acquired in (7) of procedure 1 to obtain the three-dimensional coordinate Gd in the coordinate system D. Next, coordinate transformation by the coordinate conversion coefficient Fdp′ acquired in (2) of procedure 5 yields the three-dimensional coordinate Gp in the coordinate system P. Thereby, the coordinates of the center point of the captured image, that is, the coordinates of the point where the optical axis of the rigid endoscope 11 intersects the imaged surface 86, are obtained as the coordinate value Gp in the coordinate system P. Further, since the coordinate Tp on the optical axis at the distal end portion 14 of the rigid endoscope 11 obtained in (4) of procedure 2 is a coordinate in the coordinate system P, subtracting the coordinate Tp from the coordinate Gp gives the components, in the coordinate system P, of a vector Vp parallel to the optical axis of the rigid endoscope 11. Thus, a value representing the position of the optical axis of the rigid endoscope 11 is obtained as a value in the coordinate system P. Further, since the center coordinates of the marker spheres 12 obtained in (2) of procedure 2 are coordinates in the coordinate system P, the value representing the position of the optical axis of the rigid endoscope 11 and the center coordinates of the marker spheres 12 have thereby been obtained in the same coordinate system.
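The coordinate conversion coefficients used above (Fwp, Fdp′ and the like) are obtained in the embodiment by substituting corresponding coordinates into the coordinate conversion equations and solving the simultaneous equations. A generic numerical sketch of the same idea, assuming corresponding point sets are available as NumPy arrays and not claiming to be the solver used in the embodiment, is the SVD-based least-squares rigid transformation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst_i ≈ R @ src_i + t,
    given corresponding 3D points src, dst of shape (N, 3) (Kabsch/SVD method)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Illustrative use in the role of Fwp: map marker-sphere centers measured in the
# coordinate system W onto their counterparts in P, then apply the same transform
# to the optical axis position measuring marker centers (all values assumed).
spheres_W = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0], [0.0, 40.0, 0.0]])
spheres_P = np.array([[5.0, 1.0, 2.0], [35.0, 1.0, 2.0], [5.0, 41.0, 2.0]])
R, t = rigid_transform(spheres_W, spheres_P)
markers91_W = np.array([[10.0, 5.0, -2.0], [20.0, 5.0, -2.0], [15.0, 15.0, -2.0]])
markers91_P = markers91_W @ R.T + t
```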

However, the calculation of the value representing the position of the optical axis of the rigid endoscope 11 by the arithmetic processing of procedure 5 described above assumes that, at the distal end portion 14 of the rigid endoscope 11, the coordinates of the point on the optical axis and the coordinates of the point on the central axis of the lens barrel 15 coincide. In practice, as shown in FIG. 3, the coordinates of the point on the optical axis and the coordinates of the point on the central axis of the lens barrel 15 often do not coincide at the distal end portion 14, and this is particularly likely in the case of a side-viewing endoscope. Therefore, for more accurate calibration of the position of the optical axis, as shown in FIG. 9C, the lens barrel 15 is moved by a further appropriate distance while the barrel 15 of the rigid endoscope 11 remains supported by the V-shaped groove-shaped guide portion 95 of the optical axis position measuring device 80, and procedures 3 to 5 are performed again. In the following description, the coordinates of the point on the optical axis at the distal end portion 14 are denoted Mp to distinguish them, and the coordinates of the point on the central axis of the lens barrel 15 are denoted Tp as before.

By performing procedures 3 to 5 again, the coordinates of the point where the optical axis of the rigid endoscope 11 intersects the imaged surface 86 are obtained once more as a coordinate value Gp in the coordinate system P. Letting the previously acquired coordinate be Gp1 and the newly acquired coordinate be Gp2, subtracting the coordinate Gp1 from the coordinate Gp2 gives a vector Vp parallel to the optical axis of the rigid endoscope 11, and the formula of the straight line passing through the coordinates Gp2 and Gp1 is also acquired. Since the formula of the plane that contains the coordinate Tp of the point on the central axis of the lens barrel 15 at the distal end portion 14 and is perpendicular to the central axis of the lens barrel 15 has been obtained in (4) of procedure 2, the coordinate Mp of the point on the optical axis at the distal end portion 14 can be acquired by solving the simultaneous equations consisting of this plane formula and the acquired straight line formula. Thereby, the value representing the position of the optical axis of the rigid endoscope 11 and the center coordinates of the marker spheres 12 can be obtained in the same coordinate system with higher accuracy.
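A minimal numerical sketch of obtaining Mp as the intersection of the line through the two gaze points with the plane through Tp perpendicular to the barrel axis; the coordinates below are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

Gp1 = np.array([2.0, 31.0, 10.0])    # first gaze point on the imaged surface (assumed)
Gp2 = np.array([2.5, 36.0, 14.0])    # second gaze point (assumed)
Tp = np.array([1.0, 0.0, -55.0])     # point on the barrel's central axis at the tip (assumed)
gamma_p = np.array([0.0, 0.0, 1.0])  # barrel-axis unit vector, i.e. the plane normal (assumed)

Vp = Gp2 - Gp1                                    # direction parallel to the real optical axis
# Plane through Tp with normal gamma_p: gamma_p . (x - Tp) = 0.
t = gamma_p @ (Tp - Gp1) / (gamma_p @ Vp)
Mp = Gp1 + t * Vp                                 # point of the real optical axis at the tip
```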

FIG. 13 visually illustrates how, by performing procedure 2 and then performing procedures 3 to 5 twice while changing the position of the lens barrel 15 of the rigid endoscope 11, the value representing the position of the optical axis of the rigid endoscope 11 and the center coordinates of the marker spheres 12 are acquired in the same coordinate system. The rigid endoscope 11 is regarded as not moving throughout the optical axis position calibration; instead, the optical axis position measuring device 80 is regarded as having moved between the state in which the distal end portion 14 of the rigid endoscope 11 is in contact with the imaged surface 86 and the two states in which the lens barrel 15 has been moved by appropriate distances. The plane containing the point on the central axis of the lens barrel 15 at the distal end portion 14 is acquired in the state where the distal end portion 14 is in contact with the imaged surface 86, the coordinates Gp1 and Gp2 of the points (gaze points) where the optical axis of the rigid endoscope 11 intersects the imaged surface 86 are acquired in the two moved states, and the coordinate Mp of the point on the optical axis at the distal end portion 14 is acquired from the straight line connecting these two coordinates.

The value representing the position of the optical axis of the rigid endoscope 11 and the center coordinates of the marker spheres 12 in the coordinate system P, obtained by the measurement operations and calculation processes of procedures 1 to 5 described above, are stored in the PC 40. Then, as described above, the imaging direction vector A of the rigid endoscope 11 and the imaging position of the rigid endoscope 11 during surgery are obtained by arithmetic processing and used for display on the display device 50, enabling high-precision surgical navigation.

According to the optical axis position measuring system and the optical axis position measuring method using the optical axis position measuring device 80 of the present embodiment, the following effects can be obtained.

(1) In the optical axis position measurement system and the optical axis position measurement method of the present embodiment, as shown in procedure 1, the three-dimensional relative positional relationship between the target image 88 and the optical axis position measurement markers 91 can be acquired in advance by processing the three-dimensional data obtained by measurement with the three-dimensional shape measuring apparatus 20. As shown in procedures 2 and 4, by processing the three-dimensional data obtained by measurement with the three-dimensional shape measuring apparatus 20 in a state where the rigid endoscope 11, which is an elongated instrument, is supported on the optical axis position measuring device 80, the three-dimensional relative positional relationship between the marker spheres 12, which are position and orientation detection markers, and the distal end portion 14 of the rigid endoscope 11, and the three-dimensional relative positional relationship between the marker spheres 12 and the optical axis position measuring markers 91, can be acquired. As shown in procedure 3, the point on the target image 88 corresponding to the center position of the captured image can be obtained by image processing of the target image 88 captured by the rigid endoscope 11. Then, as shown in procedure 5, the three-dimensional relative positional relationship between the real optical axis, which is the actual optical axis of the rigid endoscope 11, and the marker spheres 12 can be acquired by calculation processing using these three-dimensional relative positional relationships and the point on the target image 88 corresponding to the center position of the captured image. The rigid endoscope 11 can be supported by the optical axis position measuring device 80 at an arbitrary position within a predetermined range, and the operator 75 himself can set the position of the rigid endoscope 11 with respect to the target image 88, which corresponds to the calibration object of the prior art, and perform the measurement by the three-dimensional shape measuring apparatus 20 and the imaging by the rigid endoscope 11. Therefore, the means for moving the calibration object and the second calibration object having the marking part, which have conventionally been required, can be omitted, and the three-dimensional relative positional relationship between the actual optical axis of the rigid endoscope 11 and the marker spheres 12 can be measured using the optical axis position measuring device 80, which is light, has a simple structure, and is excellent in operability and convenience. Even when such an optical axis position measuring device 80 is used, the three-dimensional relative positional relationship between the position of the actual optical axis of an elongated instrument such as the rigid endoscope 11, whose optical axis starts from the distal end portion, and a position and orientation detection marker such as the marker spheres 12 can be measured accurately.

(2) Further, the optical axis position measuring device 80 in this optical axis position measuring system and optical axis position measuring method has a small, light, and simple structure because the calibration object moving means and the second calibration object having the marking part are omitted, so that the operator 75 can easily carry it alone, and portability and operability are improved. Therefore, for example, the optical axis position measuring device 80 can be disposed so as to reliably fall within the measurable range of the three-dimensional shape measuring apparatus 20, and the movement and position adjustment at that time can be performed without difficulty. In addition, since the prior position adjustment work for making the optical axis of the rigid endoscope 11 coincide with the center of the target becomes unnecessary, the complexity is eliminated, and operability and convenience are improved. Further, since no movable part is provided, sterilization can be performed without any problem.

(3) Further, in the optical axis position measuring device 80 in this optical axis position measuring system, in addition to the guide portion 95 being provided on the base member 81, the target member 85 having the target image 88 is provided in a state inclined with respect to the main surface 82 of the base member 81. Therefore, by operating the rigid endoscope 11 with its lens barrel 15 supported by the guide portion 95, the rigid endoscope 11 can be moved linearly in the central axis direction of the lens barrel 15 without shaking by the operator 75, and the lens barrel 15 of the rigid endoscope 11 can be supported at an arbitrary position of the guide portion 95. Further, even if the actual optical axis is at an angle with respect to the central axis direction of the barrel 15 of the rigid endoscope 11, the position of the image center point, which is the gaze point of the rigid endoscope 11, can be shifted gradually. Therefore, the optical axis position measuring device 80 can be used with high versatility both when the optical axis is parallel to the central axis direction of the lens barrel 15 of the rigid endoscope 11 and when it is at various angles. That is, it can be applied to measurement of a direct-viewing endoscope having an optical axis angle of approximately 0°, a 30° or 70° perspective endoscope, a 90° side-viewing endoscope, and the like (see FIG. 8).

(4) Further, in the optical axis position measuring device 80 in this optical axis position measuring system, the target image 88 is drawn on the imaging target surface 86, which faces obliquely downward, of the target member 85, so that the imaging by the rigid endoscope 11 is performed from the imaging target surface 86 side. On the other hand, if the barrel 15 of the rigid endoscope 11 has the predetermined cross-sectional diameter r0, the measurement by the three-dimensional shape measuring apparatus 20 can be performed from the side of the surface 87 that faces obliquely upward. For this reason, even if incident light is emitted during the three-dimensional shape measurement by the three-dimensional shape measuring apparatus 20, the target member 85 itself blocks the light, so that the target image 88 on the imaging target surface 86 does not shine due to reflection, and the captured image can be accurately recognized. In addition, when viewed from the side of the surface 87 facing obliquely upward, the plurality of optical axis position measurement markers 91 are not easily hidden by the target member 85, so that the three-dimensional data required for the three-dimensional shape measurement by the three-dimensional shape measuring apparatus 20 can be acquired reliably.

Note that each embodiment of the present invention may be modified as follows.

In the above embodiment, the elongated instrument whose actual optical axis position is measured is the rigid endoscope 11 provided with the marker spheres 12, and the purpose of the present invention is to obtain the position of the actual optical axis of the rigid endoscope 11 used in the surgery support system 1 with high accuracy. However, the present invention can be applied to any elongated instrument that has an optical axis of an imaging function starting from its distal end and is provided with a position and orientation detection marker. For example, the present invention can be applied to a system that accurately detects the imaging position from outside when observing the inside of an industrial product with an elongated instrument having an imaging function.

In the above embodiment, the markers for detecting the position and orientation of the elongated instrument are three or more marker spheres 12. However, any marker may be used as long as it has a three-dimensional shape that permits three-dimensional shape measurement and from which, by computing the three-dimensional data acquired by the three-dimensional shape measurement, three or more fixed points, two or more fixed points and one or more vectors, or two or more vectors and one or more fixed points can be detected. For example, the shape may be a rectangular parallelepiped, a cylinder, a cone, a pyramid, a polyhedron, or the like.

In the optical axis position measuring device 80 of the above embodiment, all three optical axis position measuring markers 91 are provided on the base member 81. However, provided that they can be measured from the side on which the three-dimensional shape measurement by the three-dimensional shape measuring apparatus 20 is performed, all of them may be provided on the target member 85, or some of the three may be provided on the base member 81 and the remainder on the target member 85.

In the optical axis position measuring device 80 of the above embodiment, the optical axis position measuring marker 91 consists of three spheres. However, any marker may be used as long as it has a three-dimensional shape that permits three-dimensional shape measurement and from which, by computing the three-dimensional data obtained by the three-dimensional shape measurement, three or more fixed points, two or more fixed points and one or more vectors, or two or more vectors and one or more fixed points can be detected. For example, the shape may be a rectangular parallelepiped, a cylinder, a cone, a pyramid, a polyhedron, or the like.

In the optical axis position measuring device 80 of the above embodiment, the inclination angle θ of the target member 85 with respect to the main surface 82 of the base member 81 is 60°, but it is not limited to this and may of course be, for example, 30° or 70°.

In the optical axis position measuring device 80 of the above embodiment, the guide portion 95 has a groove shape, but it is not limited to this and may be, for example, cylindrical. Further, a guide portion may be formed by providing a pair of convex portions on the main surface 82 of the base member 81.

In the optical axis position measuring device 80 of the above embodiment, the target image 88 on the imaging target surface 86 is a black and white checker pattern, numbered sequentially from the bottom, with a center line. However, any target image 88 may be used as long as the center of the captured image can be obtained from the image in terms of coordinate values of the coordinate system S, which is the plane coordinate system defined on the imaged surface 86. For example, the target image may be vertical and horizontal grid lines in which the color or reflectance of the lines is changed one by one, or a pattern in which minute circles (lattice points) whose color or reflectance is changed are arranged so that their centers coincide with the corners of the black and white checker pattern of the above embodiment.

In the above embodiment, the three-dimensional relative positional relationship between the target image 88 and the optical axis position measurement markers 91, that is, the center coordinates of the optical axis position measurement markers 91 in the coordinate system D and the coordinate conversion coefficient Fsd for converting coordinate values in the coordinate system S, which is the plane coordinate system defined on the imaged surface 86, into coordinate values in the coordinate system D, was acquired by processing the three-dimensional data obtained by performing the three-dimensional shape measurement in a state where the lens barrel 15, or a cylindrical body having a fixed cross-sectional diameter, is supported by the guide portion 95 of the base member 81 so that its central axis intersects the corner of the black and white checker pattern on the center line of the target image 88. However, any method may be used as long as the three-dimensional relative positional relationship between the target image 88 and the optical axis position measuring markers 91 can be acquired. For example, without adjusting the position at which the central axis of the lens barrel 15 intersects the imaged surface 86 when the lens barrel 15 or the cylindrical body having a fixed cross-sectional diameter is supported by the guide portion 95 of the base member 81, one of the corners of the black and white checker pattern of the target image 88 may be marked with a different reflectance; at the time of the three-dimensional shape measurement, reflected light intensity data are detected together with the three-dimensional data, the coordinate value in the coordinate system S and the coordinate value in the coordinate system D of that one point are obtained, and the coordinate conversion coefficient Fsd is calculated from the two vectors in the coordinate system S and the coordinate system D and the coordinate values of the one point, as in the above embodiment. Further, instead of obtaining the normal vectors of the plane of the base member 81 and of the imaged surface 86 from the three-dimensional data, three corners of the black and white checker pattern of the target image 88 may be marked with different reflectances, and the coordinate conversion coefficient Fsd may be calculated from the coordinate values of the three points in the coordinate system S and the coordinate system D. In addition, instead of marking one corner of the black and white checker pattern with a different reflectance, a sphere may be attached to the imaged surface 86 so that the perpendicular to the imaged surface 86 at the corner of the checker pattern passes through the center of the sphere, and the coordinate value in the coordinate system S and the coordinate value in the coordinate system D may be acquired from it.

In the above embodiment, the three-dimensional relative positional relationship between the distal end portion 14 of the rigid endoscope 11 and the marker spheres 12, that is, the coordinates of the distal end portion 14 of the rigid endoscope 11 in the coordinate system P, or a plane containing those coordinates, and the center coordinates of the marker spheres 12, was obtained by performing the three-dimensional shape measurement in a state where the distal end portion 14 of the rigid endoscope 11 is in contact with the imaged surface 86 and processing the acquired three-dimensional data. However, as long as the three-dimensional relative positional relationship between the distal end portion 14 of the rigid endoscope 11 and the marker spheres 12 can be acquired, the distal end portion 14 of the rigid endoscope 11 need not be brought into contact with the imaged surface 86. For example, the three-dimensional shape measurement may be performed, and the acquired three-dimensional data processed, in a state where the distal end portion 14 has been moved by a known distance in the central axis direction of the lens barrel 15 from the state in which it is in contact with the imaged surface 86. Since this known distance needs to be obtained with high accuracy, for example, the distal end portion 14 of the rigid endoscope 11 may be positioned so as to be aligned with the edge of the thinned portion 83 of the base member 81, and the distance from the edge of the thinned portion 83 to the imaged surface 86 in the central axis direction of the lens barrel 15 may be acquired with high accuracy. As a method of obtaining this distance with high accuracy, the three-dimensional shape measurement may be performed in a state where a block gauge is pressed against the side surface of the thinned portion 83 of the base member 81, the three-dimensional data of the main surface 82 of the base member 81, of the imaged surface 86, and of the surface of the block gauge pressed against the side surface may be extracted, the formulas of the respective planes calculated from these three-dimensional data, and the distance from the edge of the thinned portion 83 to the imaged surface 86 in the central axis direction of the lens barrel 15 calculated therefrom.

In the above embodiment, procedures 3 to 5 are performed only once when the point of the real optical axis and the point of the central axis of the lens barrel 15 coincide at the distal end portion 14 of the rigid endoscope 11, and are performed twice, changing the position of the lens barrel 15 (the position of the distal end portion 14), when they do not coincide or when it is unclear whether they coincide. However, procedures 3 to 5 may be performed three or more times. In this case, since three or more coordinates Gp at which the actual optical axis intersects the imaging target surface 86 are obtained, the components of the vector Vp parallel to the actual optical axis and the formula of the straight line representing the actual optical axis may be calculated by the least squares method. Also in this case, the coordinate Mp of the point of the actual optical axis at the distal end portion 14 can be calculated in the same manner as in the above embodiment.
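A minimal sketch of the least-squares line fit through three or more gaze points, with illustrative coordinates (not values from the embodiment):

```python
import numpy as np

# Three or more gaze-point coordinates Gp where the real optical axis meets surface 86.
Gp = np.array([[2.0, 31.0, 10.0],
               [2.5, 36.0, 14.0],
               [3.1, 41.2, 18.1]])

centroid = Gp.mean(axis=0)
_, _, Vt = np.linalg.svd(Gp - centroid)
Vp = Vt[0]                     # principal direction = unit vector parallel to the real axis
# The fitted line is x(t) = centroid + t * Vp; intersecting it with the plane through Tp
# perpendicular to the barrel axis (as in the two-point case) then yields Mp.
```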

DESCRIPTION OF SYMBOLS 1 ... Surgery support system, 11 ... Rigid endoscope as an elongated instrument, 12 ... Marker sphere as a position and orientation detection marker, 14 ... Distal end portion, 20 ... Three-dimensional shape measuring apparatus, 40 ... PC as a calculation means,
50 ... Display device, 60 ... Patient as a target object, 80 ... Optical axis position measuring device, 81 ... Base member, 82 ... Main surface (of the base member), 85 ... Target member, 86 ... Imaging target surface, 88 ... Target image, 91 ... Optical axis position measurement marker, 95 ... Guide portion

Claims (7)

  1. In an optical axis position measurement system that measures the position of a real optical axis, which is the actual optical axis of an imaging means starting from the distal end portion of an elongated instrument having the imaging means for imaging an object and provided with a position and orientation detection marker for detecting the position and orientation of the elongated instrument,
    A support mechanism that supports the elongated instrument at an arbitrary position within a predetermined range; and a target image that can be imaged by an imaging means of the elongated instrument in a state where the elongated instrument is supported by the support mechanism; An optical axis position measuring device having an optical axis position measuring marker,
    A three-dimensional shape measuring device for measuring the three-dimensional surface shape of the optical axis position measuring device and the elongated instrument in a state where the elongated instrument is supported on the optical axis position measuring device;
    an arithmetic means that stores a first three-dimensional relative positional relationship, which is the three-dimensional relative positional relationship between the target image and the optical axis position measuring marker, and performs arithmetic processing using the three-dimensional data obtained by the measurement by the three-dimensional shape measuring apparatus, wherein
    The computing means is
    By processing the three-dimensional data obtained by the measurement by the three-dimensional shape measuring apparatus, a second three-dimensional relationship that is a three-dimensional relative positional relationship between the position and orientation detection marker and the distal end portion of the elongated instrument is obtained. An operation to obtain a relative positional relationship;
    A point on the target image corresponding to the center position of the captured image is obtained by performing image processing on the target image captured by the elongated instrument at a position where the tip of the elongated instrument is separated from the target image. Operations to
    By processing the three-dimensional data obtained by the measurement by the three-dimensional shape measuring device while maintaining the support position of the elongated tool when acquiring the point on the target image corresponding to the center position of the captured image, An operation for obtaining a third three-dimensional relative positional relationship that is a three-dimensional relative positional relationship between the position and orientation detection marker and the optical axis position measuring marker;
    Based on the first to third three-dimensional relative positional relationships and the point on the target image corresponding to the center position of the captured image, the three-dimensional relationship between the position of the actual optical axis and the position and orientation detection marker An optical axis position measurement system that performs a calculation to obtain a relative positional relationship.
  2. In the optical axis position measuring device,
    The support mechanism includes a base member, and a guide portion that is provided on the base member and supports the elongated instrument in a movable state along the longitudinal direction thereof.
    The target image is drawn on an imaging target surface of a target member provided in an inclined state with respect to the main surface of the base member, and when the elongated instrument is supported by the guide unit, the elongated instrument is Can be imaged by imaging means,
    The optical axis position measurement system according to claim 1, wherein the optical axis position measurement marker has a three-dimensional shape capable of detecting the position and orientation of the optical axis position measuring device from the three-dimensional data acquired by the measurement by the three-dimensional shape measuring device, and is provided on at least one of the base member and the target member.
  3. The optical axis position measurement system according to claim 1, wherein the target image is a lattice point pattern, a lattice line pattern, or a checker pattern, and the image processing performed by the calculation means is processing of acquiring the point on the target image corresponding to the center position of the captured image after performing distortion correction processing, center line acquisition processing, and corner acquisition processing.
  4. The elongated instrument is a rigid endoscope, and the three-dimensional relative positional relationship between the position of the actual optical axis and the position and orientation detection marker obtained by the optical axis position measurement system is:
    The three-dimensional surface shape of the patient and the three-dimensional shape of the position and orientation detection marker provided on the rigid endoscope are measured, and the three-dimensional data acquired and the three-dimensional tomographic data of the patient acquired in advance are measured. In the surgical operation support system that performs arithmetic processing using and displays the calculation result, the position of the real optical axis and the position of the distal end portion of the rigid endoscope are displayed on the image based on the three-dimensional tomographic data of the patient. The optical axis position measuring system according to any one of claims 1 to 3, wherein the optical axis position measuring system is used for additional display.
  5. In an optical axis position measuring method for measuring the position of a real optical axis, which is the actual optical axis of an imaging means starting from the distal end portion of an elongated instrument having the imaging means for imaging an object and provided with a position and orientation detection marker for detecting the position and orientation of the elongated instrument, the method comprising:
    A support mechanism that supports the elongated instrument at an arbitrary position within a predetermined range; and a target image that can be imaged by an imaging means of the elongated instrument in a state where the elongated instrument is supported by the support mechanism; A preparation step of preparing an optical axis position measuring device having an optical axis position measuring marker;
    Obtaining a first three-dimensional relative positional relationship that is a three-dimensional relative positional relationship between the target image and the optical axis position measuring marker;
    By measuring the three-dimensional surface shape of the optical axis position measuring device and the elongated instrument while supporting the elongated instrument on the optical axis position measuring device, and processing the obtained three-dimensional data, Obtaining a second three-dimensional relative positional relationship that is a three-dimensional relative positional relationship between the position and orientation detection marker and the distal end of the elongated instrument;
    Image processing of the target image captured by the elongate instrument at a position where the tip of the elongate instrument is separated from the target image while the elongate instrument is supported by the optical axis position measuring device And obtaining a point on the target image corresponding to the center position of the captured image;
    The optical axis position measuring device and the three-dimensional surface shape of the elongated instrument are measured with the support position of the elongated instrument in the step of acquiring a point on the target image corresponding to the center position of the captured image. Obtaining a third three-dimensional relative positional relationship that is a three-dimensional relative positional relationship between the position and orientation detection marker and the optical axis position measuring marker by processing the obtained three-dimensional data; ,
    Based on the first to third three-dimensional relative positional relationships and the point on the target image corresponding to the center position of the captured image, the three-dimensional relationship between the position of the actual optical axis and the position and orientation detection marker And a step of obtaining a relative positional relationship.
  6. In an optical axis position measurement program executed by the calculation means provided in an optical axis position measurement system that measures the position of a real optical axis, which is the actual optical axis of an imaging means starting from the distal end portion of an elongated instrument having the imaging means for imaging an object and provided with a position and orientation detection marker for detecting the position and orientation of the elongated instrument,
    A support mechanism that supports the elongated instrument at an arbitrary position within a predetermined range; and a target image that can be imaged by an imaging means of the elongated instrument in a state where the elongated instrument is supported by the support mechanism; A first three-dimensional relative positional relationship that is a three-dimensional relative positional relationship between the target image and the optical axis position measuring marker in the optical axis position measuring device having the optical axis position measuring marker is stored in advance. And
    Processing the three-dimensional data obtained when measuring the three-dimensional surface shape of the optical axis position measuring device and the elongated instrument while the elongated instrument is supported on the optical axis position measuring device; Calculation to obtain a second three-dimensional relative positional relationship that is a three-dimensional relative positional relationship between the position and orientation detection marker and the distal end of the elongated instrument;
    Image processing of the target image captured by the elongate instrument at a position where the tip of the elongate instrument is separated from the target image while the elongate instrument is supported by the optical axis position measuring device And calculating a point on the target image corresponding to the center position of the captured image;
    The three-dimensional surface shapes of the optical axis position measuring device and the elongated instrument were measured while maintaining the position of the elongated instrument when a point on the target image corresponding to the center position of the captured image was acquired. Processing to obtain a third three-dimensional relative positional relationship that is a three-dimensional relative positional relationship between the position and orientation detection marker and the optical axis position measurement marker, ,
    Based on the first to third three-dimensional relative positional relationships and the point on the target image corresponding to the center position of the captured image, the three-dimensional relationship between the position of the actual optical axis and the position and orientation detection marker An optical axis position measurement program characterized by performing an operation for obtaining a relative positional relationship.
  7. In an optical axis position measuring device used when measuring the position of a real optical axis, which is the actual optical axis of an imaging means starting from the distal end portion of an elongated instrument having the imaging means for imaging an object and provided with a position and orientation detection marker for detecting the position and orientation of the elongated instrument, the device comprising:
    A support mechanism comprising a base member and a guide portion provided on the base member for supporting the elongated instrument in a movable state along its longitudinal direction;
    A target image which is provided in an inclined state with respect to the main surface of the base member and can be imaged by the imaging device of the elongated instrument when the elongated instrument is supported by the guide portion is drawn on the imaging target surface. A target member,
    It has a three-dimensional shape capable of detecting the position and orientation of the optical axis position measuring device from three-dimensional data acquired by being measured by a three-dimensional shape measuring device, and at least of the base member and the target member An optical axis position measuring apparatus, comprising: an optical axis position measuring marker that is provided on any of the three and has a known three-dimensional relative positional relationship with the target image.
PCT/JP2017/012542 2016-03-31 2017-03-28 Optical axis position measuring system, optical axis position measuring method, optical axis position measuring program, and optical axis position measuring device WO2017170488A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2016073581 2016-03-31
JP2016-073581 2016-03-31
JP2017053847A JP2017185212A (en) 2016-03-31 2017-03-20 Optical axis position measurement system, optical axis position measurement method, optical axis position measurement program, optical axis position measurement device
JP2017-053847 2017-03-20

Publications (1)

Publication Number Publication Date
WO2017170488A1 true WO2017170488A1 (en) 2017-10-05

Family

ID=59965756

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/012542 WO2017170488A1 (en) 2016-03-31 2017-03-28 Optical axis position measuring system, optical axis position measuring method, optical axis position measuring program, and optical axis position measuring device

Country Status (1)

Country Link
WO (1) WO2017170488A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11164839A (en) * 1997-09-26 1999-06-22 Picker Internatl Inc Microscope calibrator and method for calibration
JP2001293006A (en) * 2000-04-11 2001-10-23 Olympus Optical Co Ltd Surgical navigation apparatus
JP2003528688A (en) * 2000-03-30 2003-09-30 ザ ボード オブ トラスティーズ オブ ザ リーランド スタンフォード ジュニア ユニバーシティ Apparatus and method for calibrating an endoscope
JP2009254805A (en) * 2008-03-18 2009-11-05 Hamamatsu Univ School Of Medicine Surgery supporting system


Legal Events

Date Code Title Description
NENP Non-entry into the national phase in:

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17775025

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17775025

Country of ref document: EP

Kind code of ref document: A1