CN117618110A - 3D structured light-based unmarked surgical navigation method and system

Info

Publication number: CN117618110A
Application number: CN202311210945.1A
Authority: CN (China)
Language: Chinese (zh)
Inventors: 王凯峰, 韩磊, 陈光耀, 罗鹏飞
Assignees: Zhejiang International Institute Of Innovative Design And Intelligent Manufacturing Tianjin University; Tianjin University
Filing date: 2023-09-19
Publication date: 2024-03-01
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Prior art keywords: anatomical structure, point cloud, point cloud data, structured light, preoperative
Classification (Landscapes): Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a markerless surgical navigation method and system based on 3D structured light, belonging to the technical field of surgical navigation. The method comprises the following steps: acquiring CT data of a patient's anatomical structure; constructing a three-dimensional model of the anatomical structure from the CT data; sampling the surface of the three-dimensional model to obtain point cloud data of the preoperative anatomical structure; acquiring an intraoperative RGB image and depth image with a 3D structured light camera; obtaining point cloud data of the intraoperative anatomical structure from the RGB image and the depth image; registering the point cloud data of the preoperative anatomical structure in image space with the point cloud data of the intraoperative anatomical structure in patient space; acquiring the intraoperative spatial position of the surgical instrument through a position-sensing marker fixedly connected with the instrument; recording the instrument's spatial position in real time to determine the actual surgical path; and comparing the actual surgical path with the preoperative planned path to perform surgical navigation.

Description

3D structured light-based unmarked surgical navigation method and system
Technical Field
The invention belongs to the technical field of surgical navigation, and particularly relates to a 3D structured light-based unmarked surgical navigation method and system.
Background
With the progress of modern medicine and the rapid development of high technology, healthcare is entering an era of precision medicine, and minimally invasive, accurate and intelligent surgery is the future trend. To further improve the safety and accuracy of surgical procedures, surgical navigation systems (Surgical Navigation System, SNS) combining medical imaging, machine vision and artificial intelligence have emerged. Such systems use image guidance and navigational positioning to accurately align a patient's preoperative image data with the intraoperative anatomical structure, and display in real time the deviation (including angle, depth and distance) between the surgical instrument and the planned surgical path. This enables intraoperative real-time navigation that assists the doctor in operating precisely, thereby minimizing nerve, blood vessel or organ damage caused by surgical accidents, greatly shortening operation time, reducing the patient's blood loss and surgical trauma, and lowering the incidence of postoperative complications. Surgical navigation systems have been widely studied and applied in neurosurgery, spinal surgery, orthopedics, otorhinolaryngology, maxillofacial surgery and other fields, and are becoming an indispensable auxiliary tool in the operating room.
At present, although existing surgical navigation systems can help doctors grasp anatomical images and positioning information in real time during surgery, and can assist in formulating personalized treatment plans according to a patient's anatomical characteristics, they still have limitations in practice. First, the common registration method is point-based registration, in which the doctor manually selects landmark points on the patient's anatomy; this guarantees accuracy but adds to the doctor's workload. To remedy the shortcomings of landmark-based registration, many researchers have proposed surface-matching registration methods, but these still cannot extract the anatomical structure point cloud automatically and involve a cumbersome registration procedure. Second, real-time localization of the patient's anatomy is typically achieved intraoperatively by implanting markers, which causes additional trauma and burden to the patient. Moreover, the accuracy of the navigation system is disturbed by changes in marker position: if a marker's fixation loosens, or the relative position between the marker and the patient changes due to an impact, the procedure must be interrupted for re-registration, increasing surgical risk and adding unnecessary time cost. In addition, implanted markers may reduce the surgeon's operable range and interfere with the normal conduct of the procedure.
Disclosure of Invention
The prior art has the following technical problems: manual registration by doctors makes registration accuracy difficult to guarantee; current markerless registration methods cannot automatically extract the anatomical structure point cloud and involve a complex registration workflow; and real-time localization of the patient's anatomy relies on implanted markers, which cause additional trauma and burden to the patient and increase surgical risk and unnecessary time cost. To solve these problems, the invention provides a markerless surgical navigation method and system based on 3D structured light.
First aspect
The invention provides a 3D structured light-based markerless surgical navigation method, which comprises the following steps:
S101: acquiring CT data of an anatomical structure of a patient;
S102: constructing a three-dimensional model of the anatomical structure according to the CT data;
S103: sampling the surface of the three-dimensional model to obtain point cloud data of the preoperative anatomical structure;
S104: acquiring an intraoperative RGB image and a depth image through a 3D structured light camera;
S105: obtaining point cloud data of the intraoperative anatomical structure according to the RGB image and the depth image;
S106: registering the point cloud data of the preoperative anatomical structure in image space with the point cloud data of the intraoperative anatomical structure in patient space;
S107: acquiring spatial position information of the surgical instrument during the operation through a position-sensing marker fixedly connected with the surgical instrument;
S108: recording the spatial position information of the surgical instrument in real time, and determining the actual surgical path;
S109: comparing the actual surgical path with the preoperative planned path to perform surgical navigation.
Second aspect
The invention provides a 3D structured light-based markerless surgical navigation system for performing the 3D structured light-based markerless surgical navigation method of the first aspect.
Compared with the prior art, the invention has at least the following beneficial technical effects:
(1) According to the invention, point cloud data of the preoperative anatomical structure is obtained from CT data, point cloud data of the intraoperative anatomical structure is obtained from the RGB and depth images acquired by a 3D structured light camera, and the two point clouds, one in image space and one in patient space, are then registered automatically and without markers. The anatomical structure point cloud is extracted automatically, the registration process is simple, no manual registration by doctors is needed, and registration accuracy is improved.
(2) According to the invention, the intraoperative spatial position of the surgical instrument is acquired through a position-sensing marker fixedly connected with the instrument; the actual surgical path is determined from this position information and compared with the preoperative planned path for accurate surgical navigation. No marker needs to be implanted in the patient, which avoids additional trauma and burden, reduces surgical risk, and cuts unnecessary time cost.
Drawings
The above features, advantages and implementations of the present invention will be further described below, in a clear and easily understood manner, with reference to preferred embodiments and the accompanying drawings.
Fig. 1 is a schematic flow chart of a 3D structured light-based markerless surgical navigation method provided by the invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the specific embodiments of the present invention are described below with reference to the accompanying drawings. It is evident that the drawings in the following description are only some examples of the invention, and that a person skilled in the art can obtain other drawings and other embodiments from them without inventive effort.
For simplicity, each drawing schematically shows only the parts relevant to the invention; the drawings do not represent the actual structure of the product. In addition, to keep the drawings easy to understand, where several components have the same structure or function, only one of them may be drawn or labeled. Herein, "a" or "an" covers not only the case of "exactly one" but also the case of "more than one".
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In this context, it should be noted that, unless otherwise explicitly specified and defined, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art on a case-by-case basis.
In addition, in the description of the present invention, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Example 1
In one embodiment, referring to Fig. 1 of the specification, a schematic flow chart of the 3D structured light-based markerless surgical navigation method provided by the invention is shown.
The invention provides a 3D structured light-based markerless surgical navigation method, which comprises the following steps:
S101: CT data of the patient's anatomical structure is acquired.
CT (Computed Tomography), also called CAT scan (Computerized Axial Tomography), is an X-ray-based medical imaging technique used to generate detailed cross-sectional images of the interior of the human body, helping physicians diagnose and treat various diseases and conditions.
S102: from the CT data, a three-dimensional model of the anatomical structure is constructed.
Specifically, the three-dimensional model of the anatomical structure can be constructed with dedicated medical three-dimensional reconstruction software.
S103: The surface of the three-dimensional model is sampled to obtain point cloud data of the preoperative anatomical structure.
It should be noted that, because different parts of the human body differ in density and in how strongly they absorb X-rays, the CT values at different positions also differ. By selecting CT values within a certain range, a surface model of the patient's preoperative anatomical structure can be obtained; this surface model is then sampled to obtain the point cloud data of the preoperative anatomical structure.
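The steps above amount to isosurface extraction at a chosen CT-value range followed by surface sampling. As an illustrative sketch only (the patent does not name an implementation), the following Python code shows one common way to do this, assuming the CT volume is already loaded as a NumPy array of Hounsfield units with known voxel spacing; the 300 HU threshold and the sample count are assumptions, not values from the patent.

```python
import numpy as np
from skimage import measure  # scikit-image

def ct_to_preop_points(ct_volume, voxel_spacing, hu_threshold=300.0,
                       n_points=20000, seed=0):
    """Extract the isosurface at a CT (HU) threshold and sample it into
    a preoperative point cloud.

    ct_volume: 3D numpy array of Hounsfield units, axis order (z, y, x).
    voxel_spacing: (dz, dy, dx) in millimetres, taken from the CT header.
    hu_threshold: illustrative value; choose the CT-value range to match
                  the target anatomy, as described above.
    """
    # Marching cubes reconstructs the surface where the volume crosses the threshold.
    verts, faces, normals, _ = measure.marching_cubes(
        ct_volume, level=hu_threshold, spacing=voxel_spacing)
    # Random vertex sampling stands in for the unspecified surface-sampling step.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(verts), size=min(n_points, len(verts)), replace=False)
    return verts[idx]  # (n, 3) preoperative point cloud in image space (mm)
```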
S104: an intraoperative RGB image and depth image are acquired by a 3D structured light camera.
A 3D structured light camera is a special type of camera for acquiring three-dimensional shape information of an object's surface. It works on the principle of structured light projection: a known light pattern is projected onto the object, and the depth and shape of the object are measured by observing how the pattern is reflected and deformed on the object's surface.
The RGB image is a color image in which three channels (red, green, blue) represent color information; varying the intensity of each channel produces different colors.
The depth image (also called a range image or depth map) captures the distance from objects in the scene to the camera; the value of each pixel represents the distance, or depth, of that point from the camera.
S105: Point cloud data of the intraoperative anatomical structure is obtained from the RGB image and the depth image.
In one possible embodiment, S105 specifically includes substeps S1051 to S1053:
S1051: The anatomical structure in the RGB image is segmented by a deep learning algorithm.
In one possible implementation, substep S1051 specifically includes sub-substeps S10511 to S10516:
S10511: The preprocessed RGB image is input into a trained convolutional neural network to obtain a feature map.
A convolutional neural network (Convolutional Neural Network, CNN) is a deep learning architecture specifically designed for processing and analyzing data with a grid structure, such as images and videos.
S10512: In the feature map, a preset region of interest is set for each point, yielding a plurality of candidate regions of interest.
Setting regions of interest improves processing efficiency, since the whole image does not need to be segmented; only the regions of interest are processed.
S10513: The candidate regions of interest are input into a region generation network, which performs binary classification and bounding box regression.
The region generation network (Region Proposal Network, RPN) is a deep learning network, originally introduced as part of object detection pipelines, that generates candidate object locations. Its main role is to generate candidate regions (also called candidate boxes or region proposals) from the input image for object detection in a subsequent step.
Binary classification refers to classifying input data into one of two categories, typically a positive class and a negative class.
Bounding box regression refers to predicting the position and bounding box of a target in an object detection or localization task.
S10514: According to the output of the region generation network, candidate regions of interest that do not contain the target anatomical structure are filtered out.
S10515: The filtered regions of interest are pooled.
S10516: Binary classification and bounding box regression are performed on the pooled regions of interest to obtain the binary mask of the RGB image.
In the invention, deep learning and image processing techniques are combined, which helps improve the accuracy, efficiency and data quality of anatomical structure segmentation.
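Sub-substeps S10511 to S10516 describe a two-stage instance-segmentation pipeline of the Mask R-CNN family (CNN backbone, region generation network, RoI pooling, per-region classification and masks). The sketch below is a stand-in, not the patent's actual network: it uses the pretrained Mask R-CNN from torchvision, the score and mask thresholds are illustrative, and a real system would be fine-tuned on labeled intraoperative images.

```python
import torch
import torchvision

# Pretrained Mask R-CNN as a stand-in for the CNN + RPN + RoI pipeline
# of S10511-S10516 (assumption: a fine-tuned equivalent in practice).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def segment_anatomy(rgb, score_thresh=0.7, mask_thresh=0.5):
    """rgb: float tensor (3, H, W) scaled to [0, 1]. Returns a boolean
    (H, W) mask, i.e. the binary mask of the RGB image from S10516."""
    with torch.no_grad():
        out = model([rgb])[0]                    # boxes, labels, scores, masks
    keep = out["scores"] > score_thresh          # drop low-confidence proposals
    if not keep.any():
        return torch.zeros(rgb.shape[1:], dtype=torch.bool)
    masks = out["masks"][keep, 0] > mask_thresh  # soft masks -> binary, per instance
    return masks.any(dim=0)                      # union over detected instances
```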
S1052: and dividing the anatomical structure in the depth image according to the mapping relation between the RGB image and the depth image.
In one possible implementation, S1052 is specifically: based on the RGB image binarization mask, the anatomical structure in the depth image is segmented according to the mapping relation between the RGB image and the depth image.
S1053: and obtaining the point cloud data of the intraoperative anatomical structure according to the mapping relation between the depth image and the point cloud image.
In the invention, the binary mask of the RGB image is combined with the depth image, so that the accuracy and stability of the segmentation of the anatomical structure can be improved, and the method is beneficial to more accurately analyzing and understanding the anatomical structure in medical image processing and other applications.
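The mapping from depth image to point cloud in S1053 is the pinhole back-projection X = (u - cx)·Z/fx, Y = (v - cy)·Z/fy. The sketch below illustrates S1052 and S1053 under assumptions the patent leaves open: the segmentation mask is already aligned to the depth frame, and the depth intrinsics (fx, fy, cx, cy) and depth scale come from the camera's calibration.

```python
import numpy as np

def depth_to_points(depth, mask, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project masked depth pixels into 3D camera coordinates.

    depth: (H, W) raw depth image; mask: (H, W) boolean mask from the RGB
    segmentation, assumed aligned to the depth frame; depth_scale converts
    raw units to metres (0.001 assumes millimetre-encoded depth).
    """
    v, u = np.nonzero(mask & (depth > 0))  # pixel rows/cols inside the anatomy
    z = depth[v, u].astype(np.float64) * depth_scale
    x = (u - cx) * z / fx                  # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                  # Y = (v - cy) * Z / fy
    return np.column_stack((x, y, z))      # (N, 3) intraoperative point cloud
```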
S106: the point cloud data of the preoperative anatomical structure in the image space is registered with the point cloud data of the intraoperative anatomical structure in the patient space.
In one possible embodiment, before S106, the 3D structured light-based markerless surgical navigation method further includes: preprocessing the point cloud data of the preoperative anatomical structure in image space and the point cloud data of the intraoperative anatomical structure in patient space with a filtering algorithm.
Specifically, the point cloud data can be preprocessed with common filtering algorithms such as voxel sampling, geometric sampling, random sampling, curvature sampling, surface sampling, Poisson disk sampling, and statistical filtering.
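As a minimal sketch of such preprocessing (assuming the open-source Open3D library; the parameter values are illustrative, not from the patent), two of the listed filters can be chained as follows:

```python
import open3d as o3d

def preprocess_point_cloud(pcd, voxel=0.002, nb_neighbors=30, std_ratio=2.0):
    """Voxel downsampling followed by statistical outlier removal,
    two of the filtering algorithms listed above."""
    down = pcd.voxel_down_sample(voxel_size=voxel)  # thin the cloud evenly
    filtered, _ = down.remove_statistical_outlier(  # drop noisy stragglers
        nb_neighbors=nb_neighbors, std_ratio=std_ratio)
    return filtered
```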
In one possible embodiment, S106 specifically includes substeps S1061 to S1065:
S1061: Point-pair features are calculated separately for the point cloud data of the preoperative anatomical structure in image space and the point cloud data of the intraoperative anatomical structure in patient space.
S1062: The correspondence between the point-cloud point pairs of the preoperative anatomical structure and those of the intraoperative anatomical structure is determined by a feature matching algorithm.
S1063: According to this correspondence, a first rotation matrix R1 and a first translation vector T1 between the two point clouds are determined, completing the initial registration between the point cloud data of the preoperative anatomical structure and the point cloud data of the intraoperative anatomical structure.
S1064: Taking the first rotation matrix R1 and the first translation vector T1 as initial values of the spatial transformation, a fine registration algorithm refines the alignment between the preoperative and intraoperative point cloud data, yielding a second rotation matrix R2 and a second translation vector T2.
S1065: The spatial transformation between image space and patient space is determined from the second rotation matrix R2 and the second translation vector T2.
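As an illustrative coarse-to-fine sketch of S1061 to S1065, and not the patent's exact algorithm: the coarse stage uses point-pair features, for which FPFH feature matching with RANSAC is a common open-source stand-in in Open3D, and ICP serves as the fine registration; the voxel size and distance thresholds are assumptions.

```python
import open3d as o3d

def register_preop_to_intraop(pre_pcd, intra_pcd, voxel=0.003):
    """Coarse registration (stand-in for S1061-S1063, yielding R1/T1) followed
    by fine ICP registration (S1064, yielding R2/T2). Returns the 4x4
    image-space-to-patient-space transform of S1065."""
    reg = o3d.pipelines.registration

    def features(pcd):
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(
            radius=voxel * 2, max_nn=30))
        return reg.compute_fpfh_feature(pcd, o3d.geometry.KDTreeSearchParamHybrid(
            radius=voxel * 5, max_nn=100))

    f_pre, f_intra = features(pre_pcd), features(intra_pcd)
    coarse = reg.registration_ransac_based_on_feature_matching(
        pre_pcd, intra_pcd, f_pre, f_intra, True, voxel * 1.5,
        reg.TransformationEstimationPointToPoint(False), 3,
        [reg.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        reg.RANSACConvergenceCriteria(100000, 0.999))   # coarse stage -> R1, T1
    fine = reg.registration_icp(
        pre_pcd, intra_pcd, voxel, coarse.transformation,
        reg.TransformationEstimationPointToPlane())     # fine stage -> R2, T2
    return fine.transformation
```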
In one possible embodiment, after S106, the 3D structured light-based markerless surgical navigation method further includes:
tracking the pose of the intraoperative anatomical structure in real time by a pose estimation method.
Specifically, the pose estimation method includes, but is not limited to, the following two approaches:
In the first approach, the point cloud data of the intraoperative anatomical structure is fed into a point cloud deep learning network to extract RGB texture features and point cloud geometric features; these features are concatenated and fused in a pixel-level fusion network, and a pose estimation model then tracks the pose of the intraoperative anatomical structure in real time.
In the second approach, features are first extracted from the input RGB images and matched against the preoperative three-dimensional model of the patient's anatomical structure; the resulting 2D-3D coordinate correspondences are then passed to a PnP (Perspective-n-Point) algorithm to estimate the real-time pose of the intraoperative anatomy.
In the present invention, a real-time pose estimation model helps track the position and pose of the anatomy during surgery. This is important for navigation, surgical manipulation and accurate positioning during treatment, and helps doctors operate and make decisions accurately.
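A minimal sketch of the second approach using OpenCV's solvePnP; it assumes the 2D-3D correspondences from the feature-matching step are already available and that the camera matrix comes from calibration (both are inputs the text presupposes).

```python
import cv2
import numpy as np

def estimate_pose_pnp(model_points_3d, image_points_2d, camera_matrix,
                      dist_coeffs=None):
    """Estimate the intraoperative anatomy pose from matched 2D-3D points.

    model_points_3d: (N, 3) points on the preoperative model; image_points_2d:
    (N, 2) matched image features, N >= 4. Returns the model-to-camera pose.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix,
        np.zeros(5) if dist_coeffs is None else dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP)       # EPnP handles general n-point input
    if not ok:
        raise RuntimeError("PnP failed to find a pose")
    R, _ = cv2.Rodrigues(rvec)         # rotation vector -> 3x3 rotation matrix
    return R, tvec
```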
S107: and acquiring the spatial position information of the surgical instrument in operation through a position sensing mark fixedly connected with the surgical instrument.
In one possible embodiment, S107 specifically includes substeps S1071 to S1074:
S1071: In the preoperative stage, the three-dimensional coordinates of the feature points on the position-sensing marker in the marker coordinate system are acquired.
The position-sensing marker may be, for example, a checkerboard marker or a circular marker.
Specifically, a Structure-from-Motion (SfM) algorithm may be used to obtain the three-dimensional coordinates of the feature points on the position-sensing marker in the marker coordinate system.
S1072: In the intraoperative stage, the three-dimensional coordinates of the feature points on the position-sensing marker in the camera coordinate system are acquired by a camera.
S1073: The three-dimensional coordinates of the feature points in the marker coordinate system are matched with their three-dimensional coordinates in the camera coordinate system to determine the spatial pose of the position-sensing marker.
Specifically, the matching can be performed with an Iterative Closest Point (ICP) registration method.
S1074: The spatial pose of the surgical instrument fixedly connected with the position-sensing marker is determined from the spatial pose of the marker.
S108: and recording the spatial position information of the surgical instrument in real time, and determining the actual surgical path.
S109: the actual surgical path is compared with the pre-operative planned path for surgical navigation.
Specifically, the surgical navigation may be performed by way of path indication.
The path indication comprises a light path indication or an augmented reality path indication; the light path indication uses a laser indicator lamp with an adjustable pointing angle, and the moving path of the surgical instrument is indicated by switching the indicator lamp on and off.
In one possible embodiment, after S109, the 3D structured light-based markerless surgical navigation method further includes:
S110: When the actual surgical path deviates from the preoperative planned path, a corresponding early-warning signal is output according to the grade of deviation.
In the invention, the early warning system can promptly detect deviations during the actual operation and help doctors identify potential problems or errors, thereby improving surgical safety. By correcting deviations in time, the operation can be kept in line with best practice and specifications, improving its quality and success rate.
In one possible implementation, S110 is specifically: when the actual surgical path deviates from the preoperative planned path, the indicator lamp emits light of a different color according to the grade of deviation, providing an early warning.
In the invention, the degree of path deviation is indicated by the color of the indicator lamp. Using light of different colors to indicate the degree of deviation intuitively allows the medical team to understand and identify the problem quickly, without analyzing numeric or textual information, which improves the efficiency of communication. Moreover, a change of color immediately draws the attention of the doctor or operator, so that corrective action can be taken at once; such immediate feedback is important for avoiding potential problems or errors.
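An illustrative sketch of the grading logic behind S110: the deviation of the instrument tip from the planned path is measured and mapped to an indicator color. The millimetre thresholds and the three-color scheme are assumptions; the patent does not specify numeric grades.

```python
import numpy as np

def deviation_grade(tip_position, planned_waypoints, thresholds=(1.0, 3.0)):
    """Return (deviation_mm, indicator_color) for the current instrument tip.

    planned_waypoints: (M, 3) densely sampled points along the planned path;
    deviation is measured as the distance to the nearest waypoint.
    """
    path = np.asarray(planned_waypoints, dtype=np.float64)
    dev = float(np.min(np.linalg.norm(path - np.asarray(tip_position), axis=1)))
    if dev <= thresholds[0]:
        return dev, "green"   # on track
    if dev <= thresholds[1]:
        return dev, "yellow"  # mild deviation, caution
    return dev, "red"         # severe deviation, correct immediately
```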
Example 2
In one embodiment, the invention provides a 3D structured light-based markerless surgical navigation system for performing the 3D structured light-based markerless surgical navigation method of embodiment 1.
The 3D structured light-based markerless surgical navigation system provided by the invention can realize the steps and effects of the 3D structured light-based markerless surgical navigation method of Embodiment 1, which are not repeated here.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be regarded as within the scope of this description.
The foregoing examples illustrate only a few embodiments of the invention; although they are described in detail, they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the invention, and these all fall within the protection scope of the invention. Accordingly, the scope of protection of the present invention is determined by the appended claims.

Claims (11)

1. A 3D structured light-based markerless surgical navigation method, comprising:
S101: acquiring CT data of an anatomical structure of a patient;
S102: constructing a three-dimensional model of the anatomical structure according to the CT data;
S103: sampling the surface of the three-dimensional model to obtain point cloud data of a preoperative anatomical structure;
S104: acquiring an intraoperative RGB image and a depth image through a 3D structured light camera;
S105: obtaining point cloud data of an intraoperative anatomical structure according to the RGB image and the depth image;
S106: registering the point cloud data of the preoperative anatomical structure in image space with the point cloud data of the intraoperative anatomical structure in patient space;
S107: acquiring spatial position information of a surgical instrument during the operation through a position-sensing marker fixedly connected with the surgical instrument;
S108: recording the spatial position information of the surgical instrument in real time, and determining an actual surgical path;
S109: comparing the actual surgical path with the preoperative planned path to perform surgical navigation.
2. The 3D structured light-based markerless surgical navigation method according to claim 1, wherein S105 specifically comprises:
S1051: segmenting the anatomical structure in the RGB image by a deep learning algorithm;
S1052: segmenting the anatomical structure in the depth image according to the mapping relationship between the RGB image and the depth image;
S1053: obtaining the point cloud data of the intraoperative anatomical structure according to the mapping relationship between the depth image and the point cloud.
3. The 3D structured light-based markerless surgical navigation method according to claim 2, wherein S1051 specifically comprises:
S10511: inputting the preprocessed RGB image into a trained convolutional neural network to obtain a feature map;
S10512: setting a preset region of interest for each point in the feature map to obtain a plurality of candidate regions of interest;
S10513: inputting the plurality of candidate regions of interest into a region generation network, and performing binary classification and bounding box regression;
S10514: filtering out the candidate regions of interest that do not contain the target anatomical structure according to the output of the region generation network;
S10515: pooling the filtered regions of interest;
S10516: performing binary classification and bounding box regression on the pooled regions of interest to obtain the binary mask of the RGB image.
4. The 3D structured light-based markerless surgical navigation method according to claim 3, wherein S1052 is specifically:
based on the binary mask of the RGB image, segmenting the anatomical structure in the depth image according to the mapping relationship between the RGB image and the depth image.
5. The 3D structured light-based markerless surgical navigation method according to claim 1, further comprising, before S106:
preprocessing the point cloud data of the preoperative anatomical structure in image space and the point cloud data of the intraoperative anatomical structure in patient space with a filtering algorithm.
6. The 3D structured light-based markerless surgical navigation method according to claim 1, wherein S106 specifically comprises:
S1061: calculating point-pair features respectively for the point cloud data of the preoperative anatomical structure in image space and the point cloud data of the intraoperative anatomical structure in patient space;
S1062: determining the correspondence between the point-cloud point pairs of the preoperative anatomical structure and those of the intraoperative anatomical structure through a feature matching algorithm;
S1063: according to this correspondence, determining a first rotation matrix R1 and a first translation vector T1 between the two point clouds, and completing the initial registration between the point cloud data of the preoperative anatomical structure and the point cloud data of the intraoperative anatomical structure;
S1064: taking the first rotation matrix R1 and the first translation vector T1 as initial values of the spatial transformation, using a fine registration algorithm to achieve fine registration between the point cloud data of the preoperative anatomical structure and the point cloud data of the intraoperative anatomical structure, and obtaining a second rotation matrix R2 and a second translation vector T2;
S1065: determining the spatial transformation between image space and patient space according to the second rotation matrix R2 and the second translation vector T2.
7. The 3D structured light-based markerless surgical navigation method according to claim 1, wherein S107 specifically comprises:
S1071: in the preoperative stage, acquiring the three-dimensional coordinates of the feature points on the position-sensing marker in the marker coordinate system;
S1072: in the intraoperative stage, acquiring the three-dimensional coordinates of the feature points on the position-sensing marker in the camera coordinate system through a camera;
S1073: matching the three-dimensional coordinates of the feature points in the marker coordinate system with the three-dimensional coordinates of the feature points in the camera coordinate system, and determining the spatial pose of the position-sensing marker;
S1074: determining the spatial pose of the surgical instrument fixedly connected with the position-sensing marker according to the spatial pose of the position-sensing marker.
8. The 3D structured light-based markerless surgical navigation method according to claim 1, further comprising, after S106:
tracking the pose of the intraoperative anatomical structure in real time by a pose estimation method.
9. The 3D structured light-based markerless surgical navigation method according to claim 1, further comprising, after S109:
S110: when the actual surgical path deviates from the preoperative planned path, outputting a corresponding early-warning signal according to the grade of deviation.
10. The 3D structured light-based markerless surgical navigation method according to claim 9, wherein S110 is specifically:
when the actual surgical path deviates from the preoperative planned path, emitting light of a different color through the indicator lamp according to the grade of deviation to provide an early warning.
11. A 3D structured light-based markerless surgical navigation system for performing the 3D structured light-based markerless surgical navigation method according to any one of claims 1 to 10.
Priority application: CN202311210945.1A (CN), priority date 2023-09-19, filing date 2023-09-19, title: 3D structured light-based unmarked surgical navigation method and system
Publication: CN117618110A, published 2024-03-01, legal status: pending
Family ID: 90018775


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination