CN110379493B - Image navigation registration system and image navigation system - Google Patents


Info

Publication number
CN110379493B
Authority
CN
China
Prior art keywords
image
point set
space position
position point
registration
Prior art date
Legal status
Active
Application number
CN201910807816.8A
Other languages
Chinese (zh)
Other versions
CN110379493A (en)
Inventor
祁甫浪
杜汇雨
郭涛
邱本胜
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201910807816.8A
Publication of CN110379493A
Application granted
Publication of CN110379493B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10012: Stereo images

Abstract

The application discloses an image navigation registration system and an image navigation system. The image navigation registration system is implemented based on a reference body fitted with optical balls, a camera tracking handle, and a registration object fitted with optical balls and spherical structures. The reference body, the camera tracking handle and the spherical structures provide physical-space and image-space references for the registration method, so that registration of the physical space with the image space can be realized based on rigid transformation and the iterative closest point algorithm, thereby reducing the difficulty and complexity of the registration operation and effectively improving the success rate of image registration.

Description

Image navigation registration system and image navigation system
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image navigation registration system and an image navigation system.
Background
An image navigation system (image-guided system) provides image guidance to the doctor during a patient's surgical procedure by scanning medical images with medical imaging equipment. Through such a system, the doctor can obtain the internal structure and physiological information of the patient's body during treatment, and thus accurately judge the current treatment state (such as the position of a treatment instrument in the patient's body, the changing size of a focus ablation area, or whether puncture failure has caused massive internal hemorrhage). Compared with traditional surgical methods without image guidance, an image navigation system helps improve the success rate of the operation and reduce the postoperative return-visit rate.
One difficulty that must be addressed in practical applications of image navigation systems is how to fuse images from different sensors, different times or different spaces into the same coordinate system. For example, during surgery, intraoperative guidance and tracking can only be achieved by matching preoperatively scanned three-dimensional image data to the physical surgical coordinate system in which the patient lies. The accuracy of the registration process is therefore an important guarantee that the images displayed by the image navigation system accurately reflect the conditions inside the patient.
The essence of image registration is to compute the transformation matrix between the two coordinate systems. The methods commonly used at present are mainly iterative registration algorithms based on surface data and least-squares fitting registration algorithms based on marker points. However, since the surface data available during surgery are limited and easily deformed, registration based on marker points is more reliable and accurate, and is favored in clinical applications. Marker-point registration can be divided into registration based on implanted marker points and registration based on natural anatomical landmarks. Implanted marker points, as the name implies, must be fixed to the patient's body before surgery, which causes a degree of trauma; methods based on natural anatomical landmarks, on the other hand, make automatic identification of the marker points difficult to realize.
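The least-squares fitting approach mentioned above can be illustrated with a small self-contained sketch (hypothetical code, not the patent's implementation; shown in 2D for brevity, though the system works in 3D): given matched marker-point sets, the rigid transform between the two coordinate systems has a closed-form solution.

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rigid fit (rotation + translation) mapping src -> dst.

    src, dst: equal-length lists of matched (x, y) marker points.
    Returns (theta, (tx, ty)) such that R(theta) * src + t best fits dst.
    """
    n = len(src)
    # Centroids of both point sets.
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s_cross = s_dot = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy
        s_cross += ax * by - ay * bx   # sin component of the optimal rotation
        s_dot += ax * bx + ay * by     # cos component of the optimal rotation
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

def apply_rigid_2d(theta, t, p):
    """Apply the fitted rigid transform to a single point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

With exact correspondences the fit recovers the transform exactly; with noisy marker detections it minimizes the sum of squared residuals, which is why marker-based registration needs the point ordering established by the preset sequence of spherical structures.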
Therefore, how to reduce the difficulty and complexity of the registration operation and effectively improve the success rate of image registration is a technical problem that needs to be solved by those skilled in the art at present.
Disclosure of Invention
In order to solve the technical problems, the application provides an image navigation registration system and an image navigation system, so as to achieve the purposes of reducing the difficulty and complexity of registration operation and effectively improving the success rate of image registration.
In order to achieve the technical purpose, the embodiment of the application provides the following technical scheme:
an image navigation registration system implemented based on an image navigation system comprising a reference body, a camera tracking handle, and a registration object, each of which comprises a plurality of optical balls detectable by a navigation camera; the registration object further comprises a plurality of spherical structures arranged in a preset order, the spherical structures being made of a material detectable by the navigation camera; the image navigation registration system includes:
the data acquisition module is used for acquiring image data containing position information of the target to be detected and the registration object;
the reference body fixing module is used for fixing the reference body at a preset position so that the navigation camera can detect the registration object, the reference body and the target to be detected at the same time;
the physical space determining module is used for obtaining a physical space position point set according to the positions of the spherical structures;
the image acquisition module is used for inputting the image data containing the position information of the target to be detected and the registration object into the guidance operation software to acquire DICOM images and a reconstructed three-dimensional stereo image;
the image space determining module is used for utilizing the DICOM image and acquiring an image space position point set according to the position information of the registration object in the three-dimensional stereo image;
the conversion matrix determining module is used for calculating a conversion matrix between the image space position point set and the physical space position point set according to the image space position point set and the physical space position point set;
and the space registration module is used for registering the image space with the physical space according to the conversion matrix.
Optionally, the physical space determining module is specifically configured to track, by the navigation camera, the optical balls and the positional relationship between the optical balls and the spherical structures, obtain the complete point set of all spherical structures in physical space, and sort that point set according to the preset order to obtain the physical space position point set;
or
to obtain the position of the needle tip of the camera tracking handle, and point the needle tip at the spherical structures in turn according to the preset order, so as to obtain the physical space position point set.
Optionally, the image space determining module is specifically configured to determine, using the DICOM images, the image area in which each spherical structure of the registration object lies in the three-dimensional stereo image;
to determine the specific position of each spherical structure within that image area through a preset Hough transform algorithm, taking the positions of the sphere centers as the detected image point set;
and to determine the image space position point set according to how the detected image point set matches the spherical structures in the three-dimensional stereo image.
Optionally, in determining, using the DICOM images, the image area in which each spherical structure of the registration object lies in the three-dimensional stereo image, the image space determining module is specifically configured to judge whether the sharpness of the registration object displayed in the three-dimensional stereo image meets the requirement; if not, to move the coronal, axial and sagittal section images in the DICOM data to the maximum section of the outline of each spherical structure, take the intersection point of the three sections as the center of the spherical structure, and determine the image area in which the spherical structure lies from that center position;
and if so, to frame-select the image area in which the spherical structure lies directly in the three-dimensional stereo image.
Optionally, in the voting mode of the preset Hough transform algorithm, the minimum-radius and maximum-radius bounding areas of the spherical structures are determined from user input information;
the weight of the voting mode is determined by the image area size, the gray threshold and the number of spherical structures.
Optionally, in determining the image space position point set according to how the detected image point set matches the spherical structures in the three-dimensional stereo image, the image space determining module is specifically configured to take the detected image point set as the image space position point set when the detected image point set matches both the number and the positions of the spherical structures in the three-dimensional stereo image;
when the detected image point set matches the number of spherical structures in the three-dimensional stereo image but the positions of some image data points do not match, to eliminate the mismatched image data points from the detected image point set, and take the detected image point set with those points eliminated as the image space position point set;
when the detected image point set does not match the number of spherical structures in the three-dimensional stereo image, to judge whether the detected image point set contains image data points whose positions do not match the spherical structures; if so, to eliminate those image data points and take the remaining detected image point set as the image space position point set; and if not, to take the detected image point set as the image space position point set.
Optionally, in calculating the conversion matrix between the image space position point set and the physical space position point set, the conversion matrix determining module is specifically configured as follows:
when the detected image point set matches both the number and the positions of the spherical structures in the three-dimensional stereo image, the physical space position point set is sorted according to the preset order to obtain a corrected physical space position point set;
when the detected image point set matches the number of spherical structures in the three-dimensional stereo image but the positions of some image data points do not match, the physical data points corresponding to the preset image data points are removed from the physical space position point set, and the remaining physical points are sorted according to the preset order to obtain a corrected physical space position point set; the preset image data points are the image data points removed in the process of determining the image space position point set;
when the detected image point set does not match the number of spherical structures in the three-dimensional stereo image and contains no image data points whose positions mismatch the spherical structures, the physical data points that have no corresponding image data point in the image space position point set are removed from the physical space position point set, and the remaining physical points are sorted according to the preset order to obtain a corrected physical space position point set;
when the detected image point set does not match the number of spherical structures in the three-dimensional stereo image and does contain image data points whose positions mismatch the spherical structures, both the physical data points that have no corresponding image data point in the image space position point set and the physical data points that do not match the image data points in the image space position point set are removed from the physical space position point set, and the remaining physical points are sorted according to the preset order to obtain a corrected physical space position point set;
and the conversion matrix between the image space position point set and the physical space position point set is calculated from the corrected physical space position point set and the image space position point set.
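The correction of the physical space position point set described above can be sketched as follows (a hypothetical helper, not the patent's code; it assumes the physical points were recorded in the preset order and that the indices of the eliminated image data points are known):

```python
def correct_physical_points(physical_points, removed_indices):
    """Drop physical points whose image-space counterparts were eliminated.

    physical_points: list of (x, y, z) tuples recorded in the preset order.
    removed_indices: indices of image data points rejected during image-space
    detection (position mismatch, or no corresponding spherical structure).
    Returns the corrected physical space position point set; because the
    surviving points keep their original relative order, the preset order
    (and hence the point correspondence) is preserved.
    """
    removed = set(removed_indices)
    return [p for i, p in enumerate(physical_points) if i not in removed]
```

After this step the corrected physical set and the image set are equal-length, like-ordered point lists, which is exactly what a rigid least-squares fit or an iterative closest point refinement requires.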
An image navigation system for use in an image navigation registration process, the image navigation system comprising: a reference body, a camera tracking handle, and a registration object;
the reference body comprises a base and a plurality of optical balls arranged on the base;
the camera tracking handle comprises a camera tracking handle bracket and a plurality of optical balls arranged on the camera tracking handle bracket;
the registration object comprises a support structure, wherein the support structure comprises a first setting surface and a second setting surface which are perpendicular to each other, a plurality of optical balls are arranged on the first setting surface, and a plurality of spherical structures which are arranged according to a preset sequence are arranged on the second setting surface;
the optical sphere is detectable by a navigation camera and the spherical structure is made of a material detectable by the navigation camera.
According to the above technical solutions, the embodiments of the application provide an image navigation registration system and an image navigation system. The image navigation registration system is implemented based on a reference body fitted with optical balls, a camera tracking handle, and a registration object fitted with optical balls and spherical structures. The reference body, the camera tracking handle and the spherical structures provide physical-space and image-space references for the registration method, so that registration of the physical space with the image space can be realized based on rigid transformation and the iterative closest point algorithm, reducing the difficulty and complexity of the registration operation and effectively improving the success rate of image registration.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of an image navigation registration system according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a reference body according to an embodiment of the present application;
FIG. 3 is a schematic view of a camera tracking handle according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a registration article according to one embodiment of the present application;
FIG. 5 is a schematic illustration of an arrangement of spherical structures in a registration article according to one embodiment of the present application;
FIG. 6 is a schematic diagram of the transformation relationship of the reference body to the image space coordinate system O_I according to one embodiment of the present application;
fig. 7 is a flowchart of an image navigation registration method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application; all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the present disclosure.
The embodiment of the application provides an image navigation registration system, as shown in fig. 1, which is realized based on an image navigation system comprising a reference body, a camera tracking handle and a registration object. The reference body, the camera tracking handle and the registration object each comprise a plurality of optical balls detectable by a navigation camera, and the registration object further comprises a plurality of spherical structures arranged in a preset order, the spherical structures being made of a material detectable by the navigation camera. The image navigation registration system includes:
the data acquisition module is used for acquiring image data containing position information of the target to be detected and the registration object;
In the working process of the data acquisition module, the patient is first fixed on the scanning bed in a posture favorable to the later interventional operation, and the registration object is fixed beside the patient at a position that can be scanned by the medical imaging equipment without interfering with the operation. A high-definition anatomical image containing the region of the target to be detected (such as the patient's focus) is then obtained by magnetic resonance or CT scanning, together with image data of the position of the registration object. The patient's posture and position should be kept unchanged as far as possible during scanning.
After scanning is finished, the scanning bed is moved to a position suitable for the doctor to operate. Until image registration is completed, the relative positions of the patient and the registration object must remain unchanged; otherwise, a new scan is required for image registration.
The reference body fixing module is used for fixing the reference body at a preset position so that the navigation camera can detect the registration object, the reference body and the target to be detected at the same time;
The preset position is a position where the navigation camera can simultaneously detect the registration object, the reference body and the target to be detected without affecting the operation; for example, the reference body can be fixed on the magnetic resonance magnet or on the housing of a CT device. The reference body serves as the physical space reference.
the physical space determining module is used for obtaining a physical space position point set according to the positions of the spherical structures;
the image acquisition module is used for inputting the image data containing the position information of the target to be detected and the registration object into the guidance operation software to acquire DICOM images and a reconstructed three-dimensional stereo image;
wherein DICOM (Digital Imaging and Communications in Medicine) images refer to medical digital imaging and communication images.
The image space determining module is used for utilizing the DICOM image and acquiring an image space position point set according to the position information of the registration object in the three-dimensional stereo image;
the conversion matrix determining module is used for calculating a conversion matrix between the image space position point set and the physical space position point set according to the image space position point set and the physical space position point set;
and the space registration module is used for registering the image space with the physical space according to the conversion matrix.
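Once the conversion matrix is known, the space registration step amounts to mapping points between the two spaces with a homogeneous transform. A minimal stdlib sketch (hypothetical helper name, illustrative of the step rather than the patent's code; the patent obtains the matrix via rigid transformation and the iterative closest point algorithm):

```python
def apply_transform(matrix, point):
    """Map a 3D point through a 4x4 homogeneous conversion matrix.

    matrix: 4x4 nested list of the form [R | t; 0 0 0 1], taking
    image-space coordinates to physical-space coordinates (or the inverse).
    point: (x, y, z) tuple in the source space.
    Returns the transformed (x, y, z) tuple.
    """
    x, y, z = point
    v = (x, y, z, 1.0)  # homogeneous coordinates
    out = [sum(matrix[r][c] * v[c] for c in range(4)) for r in range(3)]
    return tuple(out)
```

With this mapping, a lesion position selected in the DICOM image space can be expressed in the physical coordinate frame tracked by the navigation camera, and vice versa.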
Referring to fig. 2-5, which show schematic structural views of a possible reference body, camera tracking handle and registration object. Fig. 2 is a schematic structural diagram of the reference body, which includes a base and a plurality of optical balls arranged on the base according to a certain rule. In fig. 2 the reference body includes 4 optical balls arranged in a cross shape; their center lines intersect at a point, and the positions of all sphere centers of the spherical structures relative to this point, as well as the positions of the 4 optical balls relative to it, are fixed and known. Reference numeral 11 in fig. 2 indicates the base, and 12, 13, 14 and 15 indicate the optical balls of the reference body.
Fig. 3 is a schematic structural view of the camera tracking handle, which includes a camera tracking handle support, a plurality of optical balls arranged on the support according to a certain rule, and structures such as bolts and screw holes in the support. Fig. 3 also shows a surgical instrument disposed on the support; the surgical instrument includes, but is not limited to, an ablation needle, a puncture needle or a biopsy needle, and the bolts and screw holes cooperate to fix the surgical instrument. Reference numeral 21 in fig. 3 denotes the surgical instrument, 28 the camera tracking handle support, 22, 23, 24 and 25 the optical balls of the camera tracking handle, 26 a screw hole, and 27 a bolt.
Fig. 4 shows a schematic structural view of a possible registration object, which includes a support structure with a first setting surface and a second setting surface perpendicular to each other; the first setting surface carries a plurality of optical balls, and the second setting surface carries a plurality of spherical structures arranged in a preset order. The registration object also includes structures such as screw holes and studs. Reference numeral 31 in fig. 4 indicates a screw hole, 32 a stud, 33, 34, 35 and 36 the optical balls of the registration object, 37 the support structure, 38 the second setting surface, and 381, 382, 383, 384, 385, 386 and 387 the spherical structures arranged on the second setting surface; optionally, the order from 381 through 382, 383, 384, 385 and 386 to 387 is the preset order. Fig. 5 shows the arrangement of the spherical structures of the registration object of fig. 4: the 7 spherical structures are arranged in 2 rows and 4 columns, with only one spherical structure in one of the middle columns (column 2 or 3), so that the starting point can be automatically identified from the distance differences between the spherical structures during image registration.
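As a toy illustration of how a starting point can be identified automatically from distance differences (hypothetical code and helper names, not the patent's implementation): in the 2-row layout of fig. 5, the sphere adjacent to the empty middle-column position has the fewest neighbors at the nominal grid spacing, which makes it uniquely identifiable and lets it anchor the preset order.

```python
import math

def find_isolated_sphere(centers, spacing, tol=0.1):
    """Identify the uniquely positioned sphere in the 2x4-minus-one layout.

    centers: list of (x, y) sphere centers, in any order.
    spacing: nominal grid spacing between adjacent spheres.
    tol: relative tolerance on the spacing when counting neighbors.
    Returns the center with the fewest grid-adjacent neighbors; in the
    fig. 5 layout this sphere is unique, so it can anchor the order.
    """
    def neighbors(p):
        # Count spheres lying approximately one grid spacing away.
        return sum(
            1 for q in centers
            if q is not p and abs(math.dist(p, q) - spacing) < tol * spacing
        )
    return min(centers, key=neighbors)
```

For example, with unit spacing and the sphere at grid position (2, 1) absent, the sphere at (3, 1) has only one grid-adjacent neighbor while every other sphere has at least two, so it is returned as the anchor.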
In this embodiment, the image navigation registration system is implemented based on structures such as a reference body configured with an optical ball, a camera tracking handle, and a registration object configured with an optical ball and a spherical structure, where the reference body, the camera tracking handle, and the spherical structure provide references of a physical space and an image space for the image navigation registration method, so that the image navigation registration method can realize registration of the physical space and the image space based on rigid transformation and an iterative closest point algorithm, thereby reducing difficulty and complexity of registration operation and effectively improving success rate of image registration.
The specific working process of each module in the embodiment of the present application is described below. The physical space determining module is specifically configured to track, by the navigation camera, the optical balls and the positional relationship between the optical balls and the spherical structures, obtain the complete point set of all spherical structures in physical space, and sort that point set according to the preset order to obtain the physical space position point set;
or
to obtain the position of the needle tip of the camera tracking handle, and point the needle tip at the spherical structures in turn according to the preset order, so as to obtain the physical space position point set.
When the optical balls of the registration object are not occluded, the navigation camera tracks the optical balls and their positional relationship to the spherical structures, obtains the complete point set of all spherical structures in physical space, and sorts it according to the preset order to obtain the physical space position point set. The preset order is the same as the arrangement order of the spherical structures in the registration object, and may be the arrangement order shown in fig. 5; when the number of spherical structures is not 7, the preset order may be another specific arrangement, as long as the spherical structure serving as the starting point can be determined from the distances between the spherical structures.
When the optical balls of the registration object are occluded for some reason, the position of the needle tip is obtained through the camera tracking handle, and the needle tip is pointed at the spherical structures in turn according to the preset order to acquire the physical space position point set. This process can be completed by a doctor, or by mechanical equipment preset to do so.
The image space determining module is specifically configured to determine, using the DICOM images, the image area in which each spherical structure of the registration object lies in the three-dimensional stereo image;
to determine the specific position of each spherical structure within that image area through a preset Hough transform algorithm, taking the positions of the sphere centers as the detected image point set;
and to determine the image space position point set according to how the detected image point set matches the spherical structures in the three-dimensional stereo image.
Determining, using the DICOM images, the image area in which each spherical structure of the registration object lies in the three-dimensional stereo image comprises:
judging whether the sharpness of the registration object displayed in the three-dimensional stereo image meets the requirement; if not, moving the coronal, axial and sagittal section images in the DICOM data to the maximum section of the outline of each spherical structure, taking the intersection point of the three sections as the center of the spherical structure, and determining the image area in which the spherical structure lies from that center position;
and if so, frame-selecting the image area in which the spherical structure lies directly in the three-dimensional stereo image.
Namely, when the definition of the registration object displayed in the three-dimensional stereo image meets the requirement and can be recognized by a doctor, the image area where the spherical structure is located can be directly selected in the three-dimensional stereo image in a frame mode (meanwhile, the image area containing the scanning information of the patient is prevented from being selected as far as possible). Because the material in the spherical structure can be well imaged under the medical image scanning equipment and is a high-volume signal, the background of the spherical structure is free of any imaging material and no signal, the spherical structure is distinguished from the background in the image, and corresponding parameters including the number of the spherical structures in the registration object, the radius, the gray threshold and the like are selected according to the situation.
When the registration object is not displayed clearly in the three-dimensional stereo image because the signal-to-noise ratio of the image is too low and the internal structure of the registration object is not clearly visible, a doctor can judge the approximate position of the center of each spherical structure from its outline in the image, then move the coronal, axial and sagittal section images in the DICOM image to the maximum cross-section of each spherical structure's outline, take the intersection point of the three sections as the center of the spherical structure, and determine the image area where the spherical structure is located from the position of that center, so that the system can still be used normally under special conditions, improving the stability and operability of the system.
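Assuming an axis-aligned DICOM volume, the voxel index picked out by the intersection of the three sections maps to a physical-space sphere center through the image origin and voxel spacing; the helper name and example values below are illustrative, not from the patent:

```python
import numpy as np

def center_from_slices(i_sagittal, j_coronal, k_axial, origin, spacing):
    """Hypothetical helper: the sagittal, coronal and axial planes placed on a
    sphere's largest cross-section intersect at one voxel index (i, j, k);
    map it to physical (patient) coordinates for the axis-aligned case."""
    index = np.array([i_sagittal, j_coronal, k_axial], dtype=float)
    return np.asarray(origin, dtype=float) + index * np.asarray(spacing, dtype=float)

# Example: voxel (120, 95, 40) in a 1 mm x 1 mm x 2 mm volume
center = center_from_slices(120, 95, 40,
                            origin=(-100.0, -120.0, 0.0),
                            spacing=(1.0, 1.0, 2.0))
```

For oblique acquisitions the full direction-cosine matrix from the DICOM image-plane attributes would be needed instead of a plain per-axis spacing.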
The minimum-radius bound and the maximum-radius bound of the spherical structure in the voting mode of the preset Hough transform algorithm are determined by user input;
the weight of the voting mode is determined by the image area size, the gray threshold and the number of spherical structures.
In the preset Hough transform algorithm, the traditional Hough transform algorithm is optimized and the voting mode is changed: a minimum-radius bound and a maximum-radius bound of the spherical structure can be defined and voting is carried out over that band, with the weight of the voting mode determined by the image area size, the gray threshold, the number of spherical structures, and so on. The input is an image; one output is an accumulator image showing the voting result over the image domain, which reflects the probability of each point being the center of a spherical structure. The other output is a radius image carrying the average radius of the spherical structure. Meanwhile, multithreading and hierarchical sampling are adopted to speed up detection. The preset Hough transform algorithm can set the size of the spherical structure to be detected as required, and parameters such as circularity and the gray threshold can be adjusted. When a spherical structure is partially occluded, geometric fitting can be carried out on the unoccluded boundary so that the partially occluded spherical structure is still detected, improving the stability of the system.
The process of using the specific position of the sphere center in a series of spherical structures obtained by using a preset Hough transformation algorithm as a detection image point set is an automatic detection process in a navigation registration algorithm.
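As a rough, simplified sketch of the voting idea described above (not the patent's optimized, multithreaded implementation), each surface voxel of a binary volume can vote for every candidate center lying within the user-supplied radius band; the accumulator peak then approximates a sphere center:

```python
import numpy as np

def detect_sphere_center(volume, r_min, r_max):
    """Hough-style voting sketch: surface voxels (foreground with a
    6-connected background neighbour) vote for all candidate centres at a
    distance in [r_min, r_max]; the accumulator-image peak is returned."""
    shape = np.array(volume.shape)
    surface = []
    for v in np.argwhere(volume):
        for d in np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                           [0, -1, 0], [0, 0, 1], [0, 0, -1]]):
            n = v + d
            if np.any(n < 0) or np.any(n >= shape) or not volume[tuple(n)]:
                surface.append(v)
                break
    # Candidate-centre offsets whose length lies in the radius band.
    span = int(np.ceil(r_max))
    g = np.mgrid[-span:span + 1, -span:span + 1, -span:span + 1]
    dist = np.sqrt((g ** 2).sum(axis=0))
    offsets = np.argwhere((dist >= r_min) & (dist <= r_max)) - span
    acc = np.zeros(volume.shape, dtype=np.int32)  # accumulator image
    for v in surface:
        c = v + offsets
        ok = np.all((c >= 0) & (c < shape), axis=1)
        np.add.at(acc, tuple(c[ok].T), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)
```

A production version would add the gray-threshold weighting, multithreading and hierarchical sampling the text describes, and return the radius image as well.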
Specifically, the image space determining module determines the image space position point set according to how the detected image point set matches the spherical structures in the three-dimensional stereo image: when the detected image point set matches the spherical structures in the three-dimensional stereo image in both number and position, the detected image point set is used as the image space position point set;
when the detected image point set is matched with the number of the spherical structures in the three-dimensional stereo image, but the positions of partial image data points are not matched, eliminating the image data points in the detected image point set, which are not matched with the positions of the spherical structures in the three-dimensional stereo image, and taking the detected image point set with the image data points eliminated as the image space position point set;
when the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image, judging whether the detected image point set has image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image or not, if so, eliminating the image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, and taking the detected image point set after eliminating the image data points as the image space position point set; and if not, taking the detected image point set as the image space position point set.
That the detected image point set does not match the number of spherical structures in the three-dimensional stereo image means the detected image point set is incomplete, containing fewer data points than there are spherical structures in the three-dimensional stereo image; that the detected image point set does not match the positions of the spherical structures in the three-dimensional stereo image means the positions of the spherical structures represented by some data points in the detected image point set differ from the positions of the spherical structures in the three-dimensional stereo image.
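All three matching cases above come down to the same operation on the detected points: eliminate those whose positions do not correspond to any spherical structure. A minimal sketch (function name and tolerance are illustrative, not from the patent):

```python
import numpy as np

def cull_mismatched_points(detected, expected, tol=3.0):
    """Keep a detected sphere centre only if it lies within `tol` of some
    spherical-structure position in the three-dimensional image; points
    that mismatch in position are eliminated, as in the cases above."""
    detected = np.asarray(detected, dtype=float)
    expected = np.asarray(expected, dtype=float)
    # Pairwise distances: detected (n, 3) vs expected (m, 3) -> (n, m)
    dists = np.linalg.norm(detected[:, None, :] - expected[None, :, :], axis=2)
    return detected[dists.min(axis=1) <= tol]
```

When the number already matches and no position mismatches, the function simply returns the detected set unchanged, matching the first case.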
Correspondingly, when the image space position point set is determined in different ways, a corresponding correction needs to be made to the physical space position point set. Specifically, the conversion matrix determination module calculates the conversion matrix between the image space position point set and the physical space position point set as follows:
when the detected image point set is matched with the number and the positions of the spherical structures in the three-dimensional stereo image, sequencing the physical space position point set according to the preset sequence to obtain a corrected physical space position point set;
when the detected image point set is matched with the number of spherical structures in the three-dimensional stereo image, but the positions of partial image data points are not matched, removing physical data points corresponding to preset image data points in the physical space position point set, and sequencing the physical space position point set after removing the physical data points according to the preset sequence to obtain a corrected physical space position point set; the preset image data points are image data points which are removed in the process of determining the image space position point set;
When the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image and the detected image point set does not have image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, removing physical data points which do not correspond to the image data points in the image space position point set in the physical space position point set, and sorting the physical space position point set after removing the physical data points according to the preset sequence to obtain a corrected physical space position point set;
when the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image and the detected image point set has image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, removing physical data points which are not corresponding to the image data points in the image space position point set and physical data points which are not matched with the image data points in the image space position point set in the physical space position point set, and sorting the physical space position point set after the physical data points are removed according to the preset sequence to obtain a corrected physical space position point set;
And calculating a conversion matrix between the image space position point set and the physical space position point set according to the corrected physical space position point set and the image space position point set.
In case both the reference body and the registration object can be successfully detected by the navigation camera, a transformation matrix T_Oc,R of the reference body relative to the navigation camera and a transformation matrix T_Oc,B of the registration object relative to the navigation camera can be obtained; at the same time, the corrected physical space position point set and the image space position point set can be used to calculate the transformation matrix T_B2,B1 between the image space position point set and the physical space position point set. Referring to FIG. 6, the transformation relation from the reference body to the image space coordinate system O_I is obtained by T_R,OI = T_Oc,R^-1 · T_Oc,B · T_B2,B1.
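The chained transform can be written down directly with 4x4 homogeneous matrices (a transcription of the formula above; the helper names are mine):

```python
import numpy as np

def translation(t):
    """Build a 4x4 homogeneous translation matrix (test helper)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def reference_to_image(T_oc_r, T_oc_b, T_b2_b1):
    """Compose T_R,OI = T_Oc,R^-1 . T_Oc,B . T_B2,B1 with 4x4
    homogeneous matrices, as in the formula referenced in FIG. 6."""
    return np.linalg.inv(T_oc_r) @ T_oc_b @ T_b2_b1
```

With pure translations the composition is easy to check by hand: inverting the first factor negates its offset, and the remaining offsets add.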
Finally, the image space and the physical space can be registered by acquiring the conversion matrix, after registration, the image space and the physical space are in one-to-one correspondence, and the operation of a doctor in the physical space can be reflected on the image space in real time, for example, an ablation needle is inserted into a patient body to a lesion part, and the needle inserting route and the part of the patient where the needle is finally positioned are displayed on the image in real time.
Accordingly, an embodiment of the present application provides an image navigation registration method, as shown in fig. 7, implemented based on an image navigation system including a reference body, a camera tracking handle, and a registration object, where the reference body, the camera tracking handle, and the registration object each include a plurality of optical balls that can be detected by a navigation camera, and the registration object further includes a plurality of spherical structures arranged in a preset order, and the spherical structures are made of a material that can be detected by the navigation camera; the image navigation registration method comprises the following steps:
S101: acquiring image data containing position information of a target to be detected and the registration object;
in step S101, the patient is first fixed on the scanning bed in a posture favorable for the subsequent interventional therapy operation, and the registration object is fixed beside the patient (any fixing position will do, as long as it can be scanned by the medical imaging device and does not affect the operation); a high-definition anatomical image of the region containing the target to be detected (for example, the patient's lesion) and image data containing the position information of the registration object are then obtained by magnetic resonance or CT scanning. The patient's posture and position should be kept unchanged as far as possible during scanning.
After scanning is finished, the scanning bed is moved to a position suitable for the doctor to operate; until image registration is completed, the relative position of the patient and the registration object must remain unchanged, otherwise a new scan is needed for image registration.
S102: fixing the reference body at a preset position so that the navigation camera can detect the registration object, the reference body and the target to be detected at the same time;
in step S102, the preset position refers to a position, for example on the magnet of a magnetic resonance scanner or the housing of a CT apparatus, from which the navigation camera can simultaneously detect the registration object, the reference body and the target to be measured without affecting the operation; the reference body serves as the physical space reference.
S103: obtaining a physical space position point set according to the position of the spherical structure;
s104: inputting the image data containing the position information of the target to be detected and the registration object into guiding operation software to obtain a DICOM image and a reconstructed three-dimensional image;
wherein DICOM (Digital Imaging and Communications in Medicine) images refer to medical digital imaging and communication images.
S105: acquiring an image space position point set according to the position information of the registration object in the three-dimensional stereo image by using the DICOM image;
s106: calculating a conversion matrix between the image space position point set and the physical space position point set according to the image space position point set and the physical space position point set;
s107: and registering the image space with the physical space according to the conversion matrix.
Referring to fig. 2-5, fig. 2-5 show schematic structural views of a possible reference body, camera tracking handle and registration object. Fig. 2 is a schematic structural diagram of the reference body: the reference body comprises a base and a plurality of optical balls arranged on the base according to a certain rule; in fig. 2 the reference body includes 4 optical balls arranged in a cross shape whose center lines intersect at a point, and the positions of all the sphere centers of the spherical structures relative to that point, as well as the positions of the 4 optical balls relative to that point, are fixed and known; reference numeral 11 in fig. 2 denotes the base, and 12, 13, 14 and 15 denote the optical balls of the reference body. Fig. 3 is a schematic view of the camera tracking handle, which comprises a camera tracking handle support, a plurality of optical balls arranged on the support according to a certain rule, and structures such as bolts and screw holes in the support; fig. 3 also shows a surgical instrument arranged on the camera tracking handle support, where the surgical instrument includes, but is not limited to, an ablation needle, a puncture needle or a biopsy needle, and the bolts and screw holes cooperate to fix the surgical instrument; reference numeral 21 in fig. 3 denotes the surgical instrument, 28 the camera tracking handle support, 22, 23, 24 and 25 the optical balls of the camera tracking handle, 26 a screw hole, and 27 a bolt. Fig.
4 shows a schematic structural view of a possible registration object, which includes a support structure with a first setting surface and a second setting surface perpendicular to each other; the first setting surface carries a plurality of optical balls, and the second setting surface carries a plurality of spherical structures arranged in a preset order; in addition, the registration object includes structures such as screw holes and studs, where reference numeral 31 in fig. 4 denotes a screw hole, 32 a stud, 33, 34, 35 and 36 the optical balls of the registration object, 37 the support structure, 38 the second setting surface, and 381, 382, 383, 384, 385, 386 and 387 the spherical structures arranged on the second setting surface; optionally, the order from 381 through 382, 383, 384, 385 and 386 to 387 is the preset order. Fig. 5 shows the arrangement of the spherical structures of the registration object shown in fig. 4: the 7 spherical structures are arranged in 2 rows and 4 columns, with only one spherical structure in one of the middle columns (column 2 or column 3), so that the starting point can be identified automatically from the distance differences between the spherical structures during image registration.
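A toy illustration of how the deliberate gap in the 2 x 4 arrangement could be recovered from distance differences (the grid coordinates, spacing and tolerance below are assumptions of mine, not the patent's actual layout values):

```python
import numpy as np

def find_gap(points, spacing=1.0, tol=0.1):
    """Toy sketch: spheres sit on a 2 x 4 grid with one middle position
    empty; the empty slot shows up as an adjacent pair in one row whose
    gap is roughly twice the nominal spacing. Returns (row, gap centre)."""
    points = np.asarray(points, dtype=float)
    for row_y in np.unique(points[:, 1]):
        row = np.sort(points[points[:, 1] == row_y][:, 0])
        steps = np.diff(row)
        big = np.where(steps > spacing + tol)[0]
        if big.size:  # gap lies between row[big[0]] and row[big[0] + 1]
            return row_y, (row[big[0]] + row[big[0] + 1]) / 2.0
    return None
```

Once the gap (and hence the orientation of the pattern) is known, the spheres can be ordered into the preset sequence without any manual labelling.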
In this embodiment, the image navigation registration method is implemented based on structures such as a reference body configured with an optical ball, a camera tracking handle, and a registration object configured with an optical ball and a spherical structure, where the reference body, the camera tracking handle, and the spherical structure provide references of a physical space and an image space for the image navigation registration method, so that the image navigation registration method can realize registration of the physical space and the image space based on rigid transformation and an iterative closest point algorithm, thereby reducing difficulty and complexity of registration operation and effectively improving success rate of image registration.
The following describes each step of the image navigation registration method provided in the embodiment of the present application, where obtaining, according to the position where the spherical structure is located, a set of physical spatial position points includes:
s1031: tracking the optical ball and the position relation between the optical ball and the spherical structure by using the navigation camera to obtain complete point sets of all the spherical structures in a physical space, and sequencing the complete point sets of all the spherical structures according to a preset sequence to obtain a physical space position point set;
or (b)
And tracking the handle by the camera to obtain the position of the needle point of the handle, and pointing the needle point of the handle to the spherical structure in sequence according to the preset sequence so as to obtain the point set of the physical space position.
When the optical balls of the registration object are not shielded, tracking the optical balls and the position relation between the optical balls and the spherical structure by using the navigation camera to obtain complete point sets of all the spherical structures in a physical space, and sequencing the complete point sets of all the spherical structures according to a preset sequence to obtain a physical space position point set; the preset order is the same as the arrangement order of the spherical structures in the registration object, and may be the arrangement order of the spherical structures shown in fig. 5; when the number of spherical structures of the registration object is not 7, the preset order may be other specific arrangement order as long as the spherical structures as the starting points can be determined by the distances between the spherical structures.
When the optical ball of the registration object is blocked for some reason, the position of the needle point of the handle needs to be acquired through the camera tracking handle, and the needle point of the handle is sequentially pointed to the spherical structure according to the preset sequence, so that the physical space position point set is acquired. The process can be completed by doctors or by mechanical equipment which is preset to be completed.
The obtaining the set of image space position points by using the DICOM image and according to the position information of the registration object in the three-dimensional stereo image comprises:
S1051: determining an image area where each spherical structure of the registration object is located in the three-dimensional stereoscopic image by using the DICOM image;
s1052: determining a specific position of the spherical structure from the determined image area where the spherical structure is located through a preset Hough transform algorithm, and taking the position of the circle center of the spherical structure as a detection image point set;
s1053: and determining the image space position point set according to the matching condition of the detected image point set and the spherical structure in the three-dimensional stereo image.
Wherein determining, using the DICOM image, an image region in which each spherical structure of the registration object is located in the three-dimensional stereoscopic image comprises:
judging whether the registration object is displayed clearly enough in the three-dimensional stereo image; if not, moving the coronal, axial and sagittal section images in the DICOM image to the maximum cross-section of the outline of each spherical structure, taking the intersection point of the three sections as the center of that spherical structure, and determining the image area where the spherical structure is located from the position of its center;
and if so, box-selecting the image area where the spherical structure is located directly in the three-dimensional stereo image.
Namely, when the registration object is displayed clearly enough in the three-dimensional stereo image to be recognized by a doctor, the image area where the spherical structure is located can be box-selected directly in the three-dimensional stereo image (while avoiding, as far as possible, selecting image areas that contain the patient's scan information). Because the material inside the spherical structure images well under the medical imaging equipment and gives a high-intensity signal, while the background of the spherical structure contains no imaging material and gives no signal, the spherical structure stands out from the background in the image, and the corresponding parameters, including the number of spherical structures in the registration object, the radius and the gray threshold, are selected according to the situation.
When the registration object is not displayed clearly in the three-dimensional stereo image because the signal-to-noise ratio of the image is too low and the internal structure of the registration object is not clearly visible, a doctor can judge the approximate position of the center of each spherical structure from its outline in the image, then move the coronal, axial and sagittal section images in the DICOM image to the maximum cross-section of each spherical structure's outline, take the intersection point of the three sections as the center of the spherical structure, and determine the image area where the spherical structure is located from the position of that center, so that the system can still be used normally under special conditions, improving the stability and operability of the system.
In step S1052, the minimum-radius bound and the maximum-radius bound of the spherical structure in the voting mode of the preset Hough transform algorithm are determined by user input;
the weight of the voting mode is determined by the image area size, the gray threshold and the number of spherical structures.
In the preset Hough transform algorithm, the traditional Hough transform algorithm is optimized and the voting mode is changed: a minimum-radius bound and a maximum-radius bound of the spherical structure can be defined and voting is carried out over that band, with the weight of the voting mode determined by the image area size, the gray threshold, the number of spherical structures, and so on. The input is an image; one output is an accumulator image showing the voting result over the image domain, which reflects the probability of each point being the center of a spherical structure. The other output is a radius image carrying the average radius of the spherical structure. Meanwhile, multithreading and hierarchical sampling are adopted to speed up detection. The preset Hough transform algorithm can set the size of the spherical structure to be detected as required, and parameters such as circularity and the gray threshold can be adjusted. When a spherical structure is partially occluded, geometric fitting can be carried out on the unoccluded boundary so that the partially occluded spherical structure is still detected, improving the stability of the system.
The process of using the specific position of the sphere center in a series of spherical structures obtained by using a preset Hough transformation algorithm as a detection image point set is an automatic detection process in a navigation registration algorithm.
Specifically, determining the image space position point set according to how the detected image point set matches the spherical structures in the three-dimensional stereo image includes:
when the detected image point set is matched with the number and the positions of the spherical structures in the three-dimensional stereo image, the detected image point set is used as the image space position point set;
when the detected image point set is matched with the number of the spherical structures in the three-dimensional stereo image, but the positions of partial image data points are not matched, eliminating the image data points in the detected image point set, which are not matched with the positions of the spherical structures in the three-dimensional stereo image, and taking the detected image point set with the image data points eliminated as the image space position point set;
when the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image, judging whether the detected image point set has image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image or not, if so, eliminating the image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, and taking the detected image point set after eliminating the image data points as the image space position point set; and if not, taking the detected image point set as the image space position point set.
That the detected image point set does not match the number of spherical structures in the three-dimensional stereo image means the detected image point set is incomplete, containing fewer data points than there are spherical structures in the three-dimensional stereo image; that the detected image point set does not match the positions of the spherical structures in the three-dimensional stereo image means the positions of the spherical structures represented by some data points in the detected image point set differ from the positions of the spherical structures in the three-dimensional stereo image.
Correspondingly, when the image space position point set is determined in different ways, a corresponding correction needs to be made to the physical space position point set; specifically:
the calculating a conversion matrix between the image space position point set and the physical space position point set according to the image space position point set and the physical space position point set comprises:
s1061: when the detected image point set is matched with the number and the positions of the spherical structures in the three-dimensional stereo image, sequencing the physical space position point set according to the preset sequence to obtain a corrected physical space position point set;
s1062: when the detected image point set is matched with the number of spherical structures in the three-dimensional stereo image, but the positions of partial image data points are not matched, removing physical data points corresponding to preset image data points in the physical space position point set, and sequencing the physical space position point set after removing the physical data points according to the preset sequence to obtain a corrected physical space position point set; the preset image data points are image data points which are removed in the process of determining the image space position point set;
S1063: when the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image and the detected image point set does not have image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, removing physical data points which do not correspond to the image data points in the image space position point set in the physical space position point set, and sorting the physical space position point set after removing the physical data points according to the preset sequence to obtain a corrected physical space position point set;
s1064: when the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image and the detected image point set has image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, removing physical data points which are not corresponding to the image data points in the image space position point set and physical data points which are not matched with the image data points in the image space position point set in the physical space position point set, and sorting the physical space position point set after the physical data points are removed according to the preset sequence to obtain a corrected physical space position point set;
S1065: and calculating a conversion matrix between the image space position point set and the physical space position point set according to the corrected physical space position point set and the image space position point set.
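One standard way to realise step S1065 for ordered, equally sized point sets is the closed-form SVD (Kabsch-style) rigid fit, the kind of solver commonly used inside the rigid-transformation/ICP registration the embodiments mention; this is a sketch, not necessarily the patent's exact implementation:

```python
import numpy as np

def rigid_transform(image_pts, physical_pts):
    """Fit rotation R and translation t so that physical ~= R . image + t
    for the ordered, corrected point sets, and return the 4x4 conversion
    matrix from image space into physical space."""
    P = np.asarray(image_pts, dtype=float)
    Q = np.asarray(physical_pts, dtype=float)
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)          # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

For noise-free corresponding points the recovered transform reproduces the physical point set exactly; with measurement noise it is the least-squares optimum.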
In case both the reference body and the registration object can be successfully detected by the navigation camera, a transformation matrix T_Oc,R of the reference body relative to the navigation camera and a transformation matrix T_Oc,B of the registration object relative to the navigation camera can be obtained; at the same time, the corrected physical space position point set and the image space position point set can be used to calculate the transformation matrix T_B2,B1 between the image space position point set and the physical space position point set. Referring to FIG. 6, the transformation relation from the reference body to the image space coordinate system O_I is obtained by T_R,OI = T_Oc,R^-1 · T_Oc,B · T_B2,B1.
Finally, the image space and the physical space can be registered by acquiring the conversion matrix, after registration, the image space and the physical space are in one-to-one correspondence, and the operation of a doctor in the physical space can be reflected on the image space in real time, for example, an ablation needle is inserted into a patient body to a lesion part, and the needle inserting route and the part of the patient where the needle is finally positioned are displayed on the image in real time.
The image navigation registration system provided in the embodiments of the present application is described below, and the image navigation registration system described below may be referred to correspondingly to the image navigation registration method described above.
Correspondingly, the embodiment of the application also provides an image navigation system, referring to fig. 2-5, applied to the image navigation registration process; the image navigation system comprises: a reference body, a camera tracking handle, and a registration object;
the reference body comprises a base and a plurality of optical balls arranged on the base;
the camera tracking handle comprises a camera tracking handle bracket and a plurality of optical balls arranged on the camera tracking handle bracket;
the registration object comprises a support structure, wherein the support structure comprises a first setting surface and a second setting surface which are perpendicular to each other, a plurality of optical balls are arranged on the first setting surface, and a plurality of spherical structures which are arranged according to a preset sequence are arranged on the second setting surface;
the optical sphere is detectable by a navigation camera and the spherical structure is made of a material detectable by the navigation camera.
In summary, the embodiments of the present application provide an image navigation registration system and an image navigation system. The image navigation registration method is implemented based on a reference body configured with optical balls, a camera tracking handle, and a registration object configured with optical balls and spherical structures. The reference body, the camera tracking handle, and the spherical structures provide physical-space and image-space references for the image navigation registration method, so that registration of the physical space and the image space can be achieved based on a rigid transformation and an iterative closest point algorithm, thereby reducing the difficulty and complexity of the registration operation and effectively improving the success rate of image registration.
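The rigid-transformation step mentioned above is commonly solved in closed form by the Kabsch/Umeyama SVD method. The sketch below assumes already-corresponded point sets and is illustrative, not the patented implementation:

```python
import numpy as np

def rigid_transform(physical_pts, image_pts):
    """Closed-form least-squares rigid fit (rotation R, translation t) such that
    image_pts ~= physical_pts @ R.T + t, via the Kabsch/Umeyama SVD method."""
    P = np.asarray(physical_pts, dtype=float)
    Q = np.asarray(image_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection solution
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In an iterative closest point loop, this closed-form fit would be alternated with nearest-neighbor correspondence updates until the alignment converges.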
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts reference may be made between the embodiments.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An image navigation registration system, characterized in that it is implemented based on an image navigation system comprising a reference body, a camera tracking handle and a registration object, each comprising a plurality of optical spheres detectable by a navigation camera, said registration object further comprising a plurality of spherical structures arranged in a preset order, said spherical structures being made of a material detectable by said navigation camera; the image navigation registration system includes:
The data acquisition module is used for acquiring image data containing position information of the target to be detected and the registration object;
the reference body fixing module is used for fixing the reference body at a preset position so that the navigation camera can detect the registration object, the reference body and the target to be detected at the same time;
the physical space determining module is used for obtaining a physical space position point set according to the position of the spherical structure;
the image acquisition module is used for inputting the image data containing the position information of the target to be detected and the registration object into guiding operation software to acquire a DICOM image and a reconstructed three-dimensional image;
the image space determining module is used for utilizing the DICOM image and acquiring an image space position point set according to the position information of the registration object in the three-dimensional stereo image;
the conversion matrix determining module is used for calculating a conversion matrix between the image space position point set and the physical space position point set according to the image space position point set and the physical space position point set;
and the space registration module is used for registering the image space with the physical space according to the conversion matrix.
2. The system of claim 1, wherein the physical space determining module is specifically configured to: track, using the navigation camera, the optical balls and the positional relationship between the optical balls and the spherical structures, obtain a complete point set of all the spherical structures in the physical space, and sort the complete point set of all the spherical structures according to the preset order to obtain the physical space position point set;

or

track the camera tracking handle to obtain the position of the needle tip of the handle, and point the needle tip of the handle at the spherical structures in sequence according to the preset order to obtain the physical space position point set.
3. The system of claim 1, wherein the image space determination module is specifically configured to determine, using the DICOM image, an image region in the three-dimensional stereoscopic image in which each spherical structure of the registration object is located;
determining a specific position of the spherical structure from the determined image area where the spherical structure is located through a preset Hough transform algorithm, and taking the position of the circle center of the spherical structure as a detection image point set;
and determining the image space position point set according to the matching condition of the detected image point set and the spherical structure in the three-dimensional stereo image.
4. The system of claim 3, wherein the image space determination module uses the DICOM image to determine an image area in which each spherical structure of the registration object is located in the three-dimensional stereo image, and is specifically configured to determine whether the sharpness of the registration object displayed in the three-dimensional stereo image meets a requirement, if not, then moving images of three coronal, axial and sagittal sections in the DICOM image to a maximum section of each contour of the spherical structure, respectively, and determining an image area in which the spherical structure is located with an intersection of the three coronal, axial and sagittal sections as a center of the spherical structure;
And if so, selecting an image area where the spherical structure is positioned from the three-dimensional stereo image by a frame.
5. The system of claim 3, wherein the minimum radius defining area and the maximum radius defining area of the spherical structure in the voting mode of the preset Hough transform algorithm are determined by user input information;

the weight of the voting mode is determined by the image region size, the gray threshold and the number of spherical structures.
6. The system of claim 3, wherein the image space determination module determines the set of image space position points based on a match of the set of detected image points to a spherical structure in the three-dimensional stereoscopic image, the set of detected image points being used as the set of image space position points when the set of detected image points matches both a number and a position of the spherical structure in the three-dimensional stereoscopic image;
when the detected image point set is matched with the number of the spherical structures in the three-dimensional stereo image, but the positions of partial image data points are not matched, eliminating the image data points in the detected image point set, which are not matched with the positions of the spherical structures in the three-dimensional stereo image, and taking the detected image point set with the image data points eliminated as the image space position point set;
When the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image, judging whether the detected image point set has image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image or not, if so, eliminating the image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, and taking the detected image point set after eliminating the image data points as the image space position point set; and if not, taking the detected image point set as the image space position point set.
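The position-matching test of claim 6 — discarding detected image points that do not lie near any expected sphere position — can be sketched as follows (the tolerance and point layout are illustrative assumptions):

```python
import numpy as np

def prune_by_position(detected_pts, expected_pts, tol):
    """Keep only detected image points lying within `tol` of some expected
    sphere position; returns the kept points and their original indices."""
    detected = np.asarray(detected_pts, dtype=float)
    expected = np.asarray(expected_pts, dtype=float)
    kept, kept_idx = [], []
    for i, p in enumerate(detected):
        # distance from this detected point to the nearest expected sphere
        d = np.linalg.norm(expected - p, axis=1).min()
        if d <= tol:
            kept.append(p)
            kept_idx.append(i)
    return np.array(kept), kept_idx
```

The surviving points would then serve as the image space position point set, with the removed indices recorded so that the corresponding physical points can also be eliminated.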
7. The system of claim 6, wherein the transformation matrix determination module calculates a transformation matrix between the set of image spatial location points and the set of physical spatial location points based on the set of image spatial location points and the set of physical spatial location points,
when the detected image point set is matched with the number and the positions of the spherical structures in the three-dimensional stereo image, sequencing the physical space position point set according to the preset sequence to obtain a corrected physical space position point set;
when the detected image point set is matched with the number of spherical structures in the three-dimensional stereo image, but the positions of partial image data points are not matched, removing physical data points corresponding to preset image data points in the physical space position point set, and sequencing the physical space position point set after removing the physical data points according to the preset sequence to obtain a corrected physical space position point set; the preset image data points are image data points which are removed in the process of determining the image space position point set;
When the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image and the detected image point set does not have image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, removing physical data points which do not correspond to the image data points in the image space position point set in the physical space position point set, and sorting the physical space position point set after removing the physical data points according to the preset sequence to obtain a corrected physical space position point set;
when the detected image point set is not matched with the number of the spherical structures in the three-dimensional stereo image and the detected image point set has image data points which are not matched with the positions of the spherical structures in the three-dimensional stereo image, removing physical data points which are not corresponding to the image data points in the image space position point set and physical data points which are not matched with the image data points in the image space position point set in the physical space position point set, and sorting the physical space position point set after the physical data points are removed according to the preset sequence to obtain a corrected physical space position point set;
And calculating a conversion matrix between the image space position point set and the physical space position point set according to the corrected physical space position point set and the image space position point set.
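The correction of the physical point set in claim 7 — dropping physical points whose image counterparts were eliminated and re-sorting by the preset order — can be sketched as follows (labeling the spheres by shared indices is an assumption of this sketch):

```python
import numpy as np

def corrected_physical_set(physical_pts, preset_order, removed_image_idx):
    """Drop physical points whose image counterparts were eliminated, then
    return the remainder sorted by the preset sphere order."""
    removed = set(removed_image_idx)
    keep = [i for i in preset_order if i not in removed]
    return np.asarray(physical_pts, dtype=float)[keep]
```

The corrected physical set and the pruned image set are then equal in length and order, so the transformation matrix between them can be computed directly.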
8. An image navigation system for performing an image navigation registration procedure using the image navigation registration system of any of claims 1-7, the image navigation system comprising: a reference body, a camera tracking handle, and a registration object;
the reference body comprises a base and a plurality of optical balls arranged on the base;
the camera tracking handle comprises a camera tracking handle bracket and a plurality of optical balls arranged on the camera tracking handle bracket;
the registration object comprises a support structure, wherein the support structure comprises a first setting surface and a second setting surface which are perpendicular to each other, a plurality of optical balls are arranged on the first setting surface, and a plurality of spherical structures which are arranged according to a preset sequence are arranged on the second setting surface;
the optical balls are detectable by a navigation camera, and the spherical structures are made of a material detectable by the navigation camera.
CN201910807816.8A 2019-08-29 2019-08-29 Image navigation registration system and image navigation system Active CN110379493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910807816.8A CN110379493B (en) 2019-08-29 2019-08-29 Image navigation registration system and image navigation system


Publications (2)

Publication Number Publication Date
CN110379493A CN110379493A (en) 2019-10-25
CN110379493B true CN110379493B (en) 2023-04-21

Family

ID=68261111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910807816.8A Active CN110379493B (en) 2019-08-29 2019-08-29 Image navigation registration system and image navigation system

Country Status (1)

Country Link
CN (1) CN110379493B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111202583A (en) * 2020-01-20 2020-05-29 上海奥朋医疗科技有限公司 Method, system and medium for tracking movement of surgical bed
CN114543816B (en) * 2022-04-25 2022-07-12 深圳市赛特标识牌设计制作有限公司 Guiding method, device and system based on Internet of things

Citations (5)

Publication number Priority date Publication date Assignee Title
CN1555245A (en) * 2001-09-19 2004-12-15 株式会社日立医药 Treatment tool and magnetic resonance imager
CN202751447U (en) * 2012-07-20 2013-02-27 北京先临华宁医疗科技有限公司 Vertebral pedicle internal fixation surgical navigation system based on structured light scanning
CN105934198A (en) * 2013-10-25 2016-09-07 西门子公司 Magnetic resonance coil unit and method for its manufacture
CN108309450A (en) * 2017-12-27 2018-07-24 刘洋 Locator system and method for surgical navigational
CN109994188A (en) * 2019-03-12 2019-07-09 上海嘉奥信息科技发展有限公司 Neurosurgery navigation registration test method and system based on NDI

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2011134083A1 (en) * 2010-04-28 2011-11-03 Ryerson University System and methods for intraoperative guidance feedback




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant