CN118078443B - Processing method of surgical navigation system, software system and surgical navigation system - Google Patents
- Publication number: CN118078443B (application CN202410488038.1A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06V 10/764 — Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
- A61B 34/20 — Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B 34/25 — User interfaces for surgical systems
- A61B 2034/2046 — Tracking techniques
- A61B 2034/2055 — Optical tracking systems
- A61B 2034/2065 — Tracking using image or pattern recognition
- A61B 2034/2068 — Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
Abstract
The invention provides a processing method of a surgical navigation system, a software system, and the surgical navigation system, comprising the following steps: acquiring the coordinate positions of a plurality of groups of positioning balls based on an optical positioner, and calculating the barycentric coordinates of each group of positioning balls and the unit normal vector of the plane determined by each group; determining, as the world coordinate system, a three-dimensional coordinate system whose origin is the barycentric position of the group of positioning balls on the patient's skull side, acquiring the spatial coordinates of each Raman point in the world coordinate system, and calculating the position of the pixel corresponding to each Raman point on the image; and acquiring a preoperative medical image, translating the world coordinate system so that it and the coordinate system of the preoperative medical image overlap at the coordinate origin, and displaying the pixel positions corresponding to the Raman points in the preoperative medical image. In this way, data support can be provided for subsequently determining the position of each Raman spectrum measurement point in the preoperative three-dimensional reconstructed image and determining its two-dimensional projection position at the microscope viewing angle.
Description
Technical Field
The present invention relates to the field of medical technology, and in particular, to a processing method of a surgical navigation system, a software system, and a surgical navigation system.
Background
The high recurrence rate after brain glioma surgery leads to high mortality and poor prognosis; one of the main reasons is the micro-infiltrative growth of gliomas. The growth of brain gliomas is highly heterogeneous, and gliomas of different molecular types show significantly different invasiveness and micro-infiltrative growth patterns. However, because imaging technology capable of accurately locating the micro-infiltration range of a glioma is lacking, the relationship between glioma molecular typing and micro-infiltrative growth patterns remains unclear. Accurately determining the micro-infiltration range of a brain glioma is therefore of great clinical significance for accurately resecting the tumor and reducing postoperative recurrence.
Fluorescence image navigation can rapidly display the position and range of a glioma in real time after a fluorescent agent is injected before the operation, but because no glioma-targeting fluorescent agent exists and gliomas are highly heterogeneous, its accuracy and specificity for evaluating glioma micro-infiltration are poor. Raman spectroscopy can judge with high specificity whether brain tissue is cancerous by recognizing molecular fingerprint spectra, and can even distinguish different molecular types, addressing the insufficient specificity and accuracy of fluorescence image navigation in micro-infiltration recognition. However, brain gliomas have a complex composition; their high heterogeneity makes the content of their Raman spectra extremely complex, so it is difficult to distinguish tumor from normal tissue, let alone determine molecular type, by directly observing spectral peak differences. How to use artificial intelligence to rapidly and efficiently analyze Raman spectra of the glioma micro-infiltration range and extract the specific Raman signatures of gliomas of different molecular types is therefore key to accurately judging the micro-infiltration range and its molecular type during the operation.
Brain glioma surgery is usually performed under a surgical microscope, and the operator continually adjusts the angle and distance of the microscope around the surgical area; the positions of the Raman spectrum measurements must be recorded in a three-dimensional coordinate system and reflected in the current surgical picture in real time, so how to establish and convert the coordinate systems is an important technical problem. How to integrate fluorescence image navigation, Raman spectroscopy, and artificial intelligence into a surgical microscope to finally realize an integrated navigation and diagnosis system is a technical problem urgently awaiting a solution.
Disclosure of Invention
In view of the above, the present invention is directed to a processing method of a surgical navigation system, a software system, and a surgical navigation system, in which an optical positioning system monitors in real time the spatial positions of positioning balls fixed on a fluorescence surgical microscope, on the patient's skull side, and on a Raman spectrum probe; the software system of the invention calculates the relative positions and spatial angles among the three, providing data support for determining the positions of Raman spectrum measurement points in the preoperative three-dimensional reconstructed image and for determining the two-dimensional projection positions of those measurement points under the microscope viewing angle.
In a first aspect, an embodiment of the present invention provides a method for processing a surgical navigation system, which is applied to a software system of the surgical navigation system, and the method includes: acquiring the coordinate positions of a plurality of groups of positioning balls based on an optical positioner, and calculating the barycentric coordinates of each group of positioning balls and the unit normal vector of the plane determined by each group; wherein positioning balls are mounted on the fluorescence surgical microscope, on the patient's skull side, and on the Raman spectrum probe; determining, as the world coordinate system, a three-dimensional coordinate system whose origin is the barycentric position of the group of positioning balls on the patient's skull side, acquiring the spatial coordinates of each Raman point in the world coordinate system, and calculating the position of the pixel corresponding to each Raman point on the image based on those spatial coordinates; and acquiring a preoperative medical image, translating the world coordinate system so that it and the coordinate system of the preoperative medical image overlap at the coordinate origin, and displaying the pixel positions corresponding to the Raman points in the preoperative medical image.
In an optional embodiment of the present application, the step of obtaining the coordinate positions of the groups of positioning balls based on the optical positioner and calculating the barycentric coordinates of each group and the unit normal vector of the plane determined by each group includes: measuring distance parameters and angle parameters among the groups of positioning balls; adjusting the default reference frame of the optical positioner so that the vertical direction of the skull is the positive z-axis, the sagittal axis pointing forward is the positive x-axis, and the frontal axis pointing left is the positive y-axis, with the center-of-gravity position of the group of positioning balls on the patient's skull side set as the origin; determining the coordinates of a group of positioning balls based on the optical positioner, and determining the barycentric coordinates and unit normal vector of that group from those coordinates; and calculating, from the barycentric coordinates, the unit normal vector, the distance parameter, and the angle parameter, the line vector from the group's center of gravity to the corresponding center point, the coordinates of that center point, and the central-axis direction.
In an alternative embodiment of the present application, the distance parameter and the angle parameter include at least one of the following: the distances from the centers of gravity of the positioning-ball groups to the corresponding center of the microscope objective outlet, center point of the patient's cranial vertex, and tip of the Raman probe; the spatial angle between each distance line and the normal direction of the corresponding group of positioning balls; the angle between the projection of each distance line onto the plane of the positioning balls and the line connecting the first and second balls of the group; and the angle between the line connecting the first and second balls and the line connecting the first and third balls of each group.
In an alternative embodiment of the application, each group of positioning balls is mounted such that its normal direction is parallel to the corresponding central axis of the microscope field of view, vertical axis of the patient's skull, or central axis of the Raman probe.
In an optional embodiment of the application, the step of calculating the position of the pixel corresponding to each Raman point on the image based on the spatial coordinates of each Raman point in the world coordinate system includes: converting the spatial coordinates of each Raman point in the world coordinate system into a camera coordinate system whose origin is the equivalent optical center of the microscope lens; calculating the two-dimensional coordinates, on the camera imaging plane, of each Raman point's camera-coordinate position, the camera imaging plane being located at the equivalent back-focal distance behind the equivalent optical center; and translating the coordinate system of the two-dimensional coordinates to obtain the coordinates of each Raman point in the pixel coordinate system as the position of the pixel corresponding to that Raman point on the image.
In an alternative embodiment of the present application, after the step of translating the world coordinate system so that it overlaps the coordinate system of the preoperative medical image at the coordinate origin, the method further includes one of: identifying the position of the lesion based on the preoperative medical image and displaying it in the preoperative medical image; calculating the viewing angle of the microscope and displaying the microscope's view cone in the preoperative medical image; and displaying the position of the Raman probe in the preoperative medical image, projected according to a given viewing-angle direction.
In an alternative embodiment of the present application, the method further includes: inputting the Raman spectrum data acquired by the Raman probe into a pre-established deep learning model of the Raman spectrum diagnosis system and outputting a lesion classification result; and displaying the lesion classification result at the pixel position corresponding to each Raman point in the preoperative medical image.
In an optional embodiment of the present application, the step of displaying the lesion classification result at the pixel position corresponding to each Raman point in the preoperative medical image includes: when the camera lens moves, determining the Raman points currently displayed in the preoperative medical image; and displaying the lesion classification result corresponding to each currently displayed Raman point at that point's corresponding pixel position in the preoperative medical image.
In a second aspect, the embodiment of the invention further provides a software system, which is used for executing the processing method of the surgical navigation system.
In a third aspect, an embodiment of the present invention further provides a surgical navigation system, including: fluorescence surgical microscope, raman spectroscopy system, raman spectroscopy diagnostic system, optical positioning system and software system as described above.
The embodiment of the invention has the following beneficial effects:
The embodiments of the invention provide a processing method of a surgical navigation system, a software system, and a surgical navigation system, in which the optical positioning system monitors in real time the spatial positions of the positioning balls fixed on the fluorescence surgical microscope, on the patient's skull side, and on the Raman spectrum probe; the software system of the invention calculates the relative positions and spatial angles among the three, providing data support for subsequently determining the position of each Raman spectrum measurement point in the preoperative three-dimensional reconstructed image and determining its two-dimensional projection position under the microscope viewing angle.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the techniques of the disclosure.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a processing method of a surgical navigation system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of three-ball coordinates and their barycentric coordinates according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the positional relationship between the three balls and a target positioning point according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of calculating the coordinate vector of a target positioning point according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of coordinate calculation in the camera imaging model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the navigation system and fluorescence imaging display interface according to an embodiment of the present invention;
FIG. 7 is a flowchart of another processing method of a surgical navigation system according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a surgical navigation system according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: 1 - fluorescence surgical microscope; 2 - optical positioning system; 3 - Raman spectrum probe; 4 - first positioning ball group; 5 - second positioning ball group; 6 - third positioning ball group; 201 - near-infrared positioning laser emission position; 202 - visible light camera; 203 - near-infrared laser positioning camera combination; 100 - memory; 101 - processor; 102 - bus; 103 - communication interface.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, brain glioma surgery is usually performed under a surgical microscope, and the operator continually adjusts the angle and distance of the microscope around the surgical area; the positions of the Raman spectrum measurements must be recorded in a three-dimensional coordinate system and reflected in the current surgical picture in real time, so how to establish and convert the coordinate systems is an important technical problem. How to integrate fluorescence image navigation, Raman spectroscopy, and artificial intelligence into a surgical microscope to finally realize an integrated navigation and diagnosis system is a technical problem urgently awaiting a solution.
Based on the above, in the processing method, software system, and surgical navigation system provided by the embodiments of the invention, the optical positioning system monitors in real time the spatial positions of the positioning balls fixed on the fluorescence surgical microscope, on the patient's skull side, and on the Raman spectrum probe; the software system of the invention calculates the relative positions and spatial angles among the three, providing data support for subsequently determining the position of each Raman spectrum measurement point in the preoperative three-dimensional reconstructed image and determining its two-dimensional projection position under the microscope viewing angle.
For the convenience of understanding the present embodiment, a detailed description will be given of a processing method of a surgical navigation system disclosed in the embodiment of the present invention.
Embodiment one:
The embodiment of the invention provides a processing method of a surgical navigation system, applied to a software system of the surgical navigation system. The software system in this embodiment comprises a spatial three-dimensional coordinate algorithm, a two-dimensional projection algorithm for Raman measurement points under the microscope viewing angle, a real-time superposition algorithm for the preoperative three-dimensional reconstructed medical image, the microscope viewing angle, and the Raman probe position, an image fusion algorithm for the fluorescence image and the Raman measurement results, and an overall software display framework.
Based on the above description, referring to a flowchart of a processing method of a surgical navigation system shown in fig. 1, the processing method of the surgical navigation system includes the steps of:
step S102, acquiring coordinate positions of a plurality of groups of positioning balls based on an optical positioning instrument, and calculating barycentric coordinates of each group of positioning balls and unit normal vectors of planes determined by the plurality of groups of positioning balls; wherein, the positioning ball is arranged on the fluorescence operation microscope, the skull side of the patient and the Raman spectrum probe.
Referring to the schematic diagram of three-ball coordinates and their barycentric coordinates shown in FIG. 2, this embodiment provides a spatial three-dimensional coordinate algorithm (all coordinate units in the algorithm are unified as millimeters): based on the coordinate positions of the three groups of positioning balls given by the optical positioner, the barycentric coordinates of each group and the unit normal vector of the plane determined by its three balls are calculated. For a group whose three balls have coordinates $a$, $b$, $c$, the barycentric position is

$$\bar{p} = \frac{a + b + c}{3},$$

and the unit normal vector is

$$\hat{n} = \frac{(b - a) \times (c - a)}{\lVert (b - a) \times (c - a) \rVert}.$$

The barycentric coordinates and unit normal vector of the group of positioning balls on the microscope are denoted $\bar{p}_m$ and $\hat{n}_m$, those of the group on the patient's skull side $\bar{p}_s$ and $\hat{n}_s$, and those of the group on the Raman probe $\bar{p}_r$ and $\hat{n}_r$.
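As a concrete illustration, a minimal Python sketch of this centroid-and-normal computation; the function name and sample coordinates are ours, for illustration only:

```python
import numpy as np

def group_pose(a, b, c):
    """Centroid and unit normal of the plane through one group's three balls.

    a, b, c: 3-vectors of ball coordinates in millimeters, as reported
    by the optical positioner.
    """
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    centroid = (a + b + c) / 3.0
    n = np.cross(b - a, c - a)              # plane normal via cross product
    return centroid, n / np.linalg.norm(n)

# Illustration only -- these skull-side ball coordinates are made up.
p_s, n_s = group_pose([0, 0, 0], [30, 0, 0], [0, 40, 0])
```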
In some embodiments, distance parameters and angle parameters between the groups of positioning balls may be measured; the default reference frame of the optical positioner is adjusted so that the vertical direction of the skull is the positive z-axis, the sagittal axis pointing forward is the positive x-axis, and the frontal axis pointing left is the positive y-axis, with the center-of-gravity position of the group of positioning balls on the patient's skull side set as the origin; the coordinates of a group of positioning balls are determined based on the optical positioner, and the barycentric coordinates and unit normal vector of that group are determined from those coordinates; and the line vector from the group's center of gravity to the corresponding center point, the coordinates of that center point, and the central-axis direction are calculated from the barycentric coordinates, the unit normal vector, the distance parameter, and the angle parameter.
The distance parameter and the angle parameter include at least one of: the distances from the centers of gravity of the positioning-ball groups to the corresponding center of the microscope objective outlet, center point of the patient's cranial vertex, and tip of the Raman probe; the spatial angle between each distance line and the normal direction of the corresponding group; the angle between the projection of each distance line onto the plane of the positioning balls and the line connecting the first and second balls of the group; and the angle between the line connecting the first and second balls and the line connecting the first and third balls of each group.
Referring to the schematic diagram of the positional relationship between the three balls and the target positioning point shown in FIG. 3, the following can be measured in advance for each group of positioning balls: the distance $d$ from the group's center of gravity to the corresponding target point (the center of the microscope objective outlet, the center point of the patient's cranial vertex, or the tip of the Raman probe); the spatial angle $\theta$ between this distance line (taking the center of gravity as its starting point) and the group's normal direction; the angle $\varphi$ between the projection of the distance line onto the plane of the positioning balls and the line connecting balls $a$ and $b$ (the projection line is deliberately placed between lines $ab$ and $ac$ when the balls are mounted); and the angle $\gamma$ between lines $ab$ and $ac$ within each group (this angle must not equal 0° or 180°). When the positioning balls are mounted, the normal direction of each group is made parallel to the corresponding central axis of the microscope field of view, vertical axis of the patient's skull, or central axis of the Raman probe.
As shown in FIG. 3, the default reference frame of the optical positioner is adjusted so that the vertical direction of the skull is the positive z-axis, the sagittal axis pointing forward is the positive x-axis, and the frontal axis pointing left is the positive y-axis, and the center-of-gravity position of the group of positioning balls on the patient's skull side is set as the origin (0, 0, 0).
The coordinates of the three balls of the group on the microscope are obtained through the optical positioner, from which the barycentric coordinates $\bar{p}_m$ and unit normal vector $\hat{n}_m$ of the microscope group can be calculated as above. To ensure that $\hat{n}_m$ is consistent with the observation direction of the microscope, the positive x-axis unit vector $\hat{e}_x = (1, 0, 0)$ is used: the algorithm checks the sign of $\hat{n}_m \cdot \hat{e}_x$; if it is negative, $\hat{n}_m$ points in the correct direction; otherwise the algorithm reassigns $\hat{n}_m \leftarrow -\hat{n}_m$.
Referring to the schematic diagram of calculating the coordinate vector of the target positioning point shown in FIG. 4, the line vector $\vec{v}_m$ from the center-of-gravity position of the positioning-ball group to the center point of the microscope objective outlet is then calculated. Let $\hat{p}$ be the unit vector of the projection of this line onto the plane of the positioning balls, and let $\hat{u}$ and $\hat{w}$ be the unit vectors along the ball lines $ab$ and $ac$. Since the projection line is deliberately placed between lines $ab$ and $ac$ when the balls are mounted, $\hat{p}$ lies inside the angle between $\hat{u}$ and $\hat{w}$ and can be expressed as

$$\hat{p} = s\,\hat{u} + t\,\hat{w}, \qquad s, t \ge 0,$$

while $\hat{p}$ satisfies $\hat{p} \cdot \hat{u} = \cos\varphi$ and $\lVert \hat{p} \rVert = 1$. Therefore

$$s + t\cos\gamma = \cos\varphi, \qquad s^2 + t^2 + 2st\cos\gamma = 1.$$

Substituting the first relation into the second gives $\cos^2\varphi + t^2\sin^2\gamma = 1$, a quadratic equation in $t$ whose solutions are

$$t = \pm\,\frac{\sin\varphi}{\sin\gamma}.$$

The other solution of this quadratic equation is negative and does not satisfy the preset condition that the projection line lies between lines $ab$ and $ac$; therefore only $t = \sin\varphi / \sin\gamma$ (and hence $s = \cos\varphi - t\cos\gamma$) is retained, from which the vector value of $\hat{p}$ is calculated.

Then

$$\vec{v}_m = d_m\left(\cos\theta_m\,\hat{n}_m + \sin\theta_m\,\hat{p}\right),$$

so the coordinates of the center point of the microscope objective outlet are obtained as $o_m = \bar{p}_m + \vec{v}_m$, and the microscope viewing-angle direction is preset as the $\hat{n}_m$ established above.
By analogy, the parameters on the cranial side of the patient can be solved:

$$t_s = \frac{\sin\varphi_s}{\sin\gamma_s}, \qquad s_s = \cos\varphi_s - t_s\cos\gamma_s, \qquad \hat{p}_s = s_s\,\hat{u}_s + t_s\,\hat{w}_s,$$

$$o_s = \bar{p}_s + d_s\left(\cos\theta_s\,\hat{n}_s + \sin\theta_s\,\hat{p}_s\right).$$

To ensure that the unit normal vector $\hat{n}_s$ of the skull-side group of positioning balls is consistent with the direction of the patient's skull vertical axis, the positive z-axis unit vector $\hat{e}_z = (0, 0, 1)$ is used: the algorithm checks the sign of $\hat{n}_s \cdot \hat{e}_z$; if it is positive, $\hat{n}_s$ points in the correct direction; otherwise the algorithm reassigns $\hat{n}_s \leftarrow -\hat{n}_s$.

The coordinates of the center point of the patient's cranial vertex are then $o_s$, and the patient's vertical-axis direction is preset as $\hat{n}_s$.
The parameters of the Raman probe can be solved in the same way:

$$t_r = \frac{\sin\varphi_r}{\sin\gamma_r}, \qquad s_r = \cos\varphi_r - t_r\cos\gamma_r, \qquad \hat{p}_r = s_r\,\hat{u}_r + t_r\,\hat{w}_r,$$

$$o_r = \bar{p}_r + d_r\left(\cos\theta_r\,\hat{n}_r + \sin\theta_r\,\hat{p}_r\right).$$

To ensure that the unit normal vector $\hat{n}_r$ of the group on the Raman probe is consistent with the central-axis direction of the probe (pointing toward the tip), it suffices that the angle between $\hat{n}_r$ and the line from the probe group's center of gravity to the center point of the patient's cranial vertex is acute: the algorithm checks the sign of $\hat{n}_r \cdot (o_s - \bar{p}_r)$; if it is positive, $\hat{n}_r$ points in the correct direction; otherwise the algorithm reassigns $\hat{n}_r \leftarrow -\hat{n}_r$.

The coordinates of the tip of the Raman probe are then $o_r$, and the central-axis direction of the probe (pointing toward the tip) is preset as $\hat{n}_r$.
Thus the positions and directions of the microscope, the patient's skull, and the Raman probe can all be calculated in real time from the real-time positioning-ball coordinates provided by the positioning system, for use in the subsequent three-dimensional image fusion and projection calculations.
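A minimal sketch of the target-point calculation just derived, assuming the group's unit normal has already been sign-corrected as described above (the function name and argument order are ours):

```python
import numpy as np

def target_point(centroid, n_hat, u_ab, u_ac, d, theta, phi, gamma):
    """Locate a target point (objective-outlet center, cranial-vertex center,
    or Raman probe tip) from its group's centroid and unit normal plus the
    pre-measured parameters d (mm) and theta, phi, gamma (radians)."""
    t = np.sin(phi) / np.sin(gamma)          # positive root of the quadratic
    s = np.cos(phi) - np.cos(gamma) * t
    p_hat = s * np.asarray(u_ab) + t * np.asarray(u_ac)   # in-plane direction
    v = d * (np.cos(theta) * np.asarray(n_hat) + np.sin(theta) * p_hat)
    return np.asarray(centroid) + v
```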
Step S104: determining, as the world coordinate system, a three-dimensional coordinate system whose origin is the barycentric position of the group of positioning balls on the patient's skull side; acquiring the spatial coordinates of each Raman point in the world coordinate system; and calculating the position of the pixel corresponding to each Raman point on the image based on those spatial coordinates.
This embodiment also provides a two-dimensional projection algorithm for the Raman measurement points under the microscope viewing angle. The method above establishes a three-dimensional coordinate system (also called the world coordinate system) whose origin is the center-of-gravity position of the skull-side group of positioning balls. In this coordinate system, the spatial coordinates of all Raman points $P_i$, $i = 1, \dots, N$ (where $N$ is the number of Raman points) can be obtained, and the spatial coordinate of the microscope lens center can also be obtained in real time. The purpose of this part of the algorithm is: given the current position of the microscope lens, to calculate for each Raman point $P_i$ the position $(u_i, v_i)$ of its corresponding pixel on the image, where $u_i$ is the width value and $v_i$ the height value in the pixel coordinate system, so that all Raman points can be marked in a fluorescence image frame of the microscope. To obtain the $(u_i, v_i)$ corresponding to each $P_i$ in the fluorescence image, a mapping between the world coordinate system and the pixel coordinate system must be established.
In some embodiments, the spatial coordinates of the individual Raman points in the world coordinate system may be converted into a camera coordinate system whose origin is the equivalent optical center of the microscope lens; the two-dimensional coordinates of each point on the camera imaging plane are calculated, the imaging plane being located at the equivalent back-focal distance behind the equivalent optical center; and the coordinate system of the two-dimensional coordinates is translated to obtain the coordinates of each Raman point in the pixel coordinate system as the position of the corresponding pixel on the image.
Step one: the world coordinate system $(X_w, Y_w, Z_w)$ must be converted into the camera coordinate system $(X_c, Y_c, Z_c)$ whose origin is the equivalent optical center $O_c$ of the microscope lens:

$$\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = R \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} + T,$$

where $R$ is an orthogonal rotation matrix and $T$ is the three-dimensional translation vector, obtained from the coordinates of the equivalent optical center of the microscope lens and the center-of-gravity position of the skull-side group of positioning balls. Rotation matrices in the three directions are obtained from the respective rotation angles, and $R$ is their product: $R = R_x R_y R_z$. When rotations by $\alpha$, $\beta$, $\rho$ about the $x$, $y$, $z$ axes respectively are required:

$$R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}, \qquad R_y = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix}, \qquad R_z = \begin{pmatrix} \cos\rho & -\sin\rho & 0 \\ \sin\rho & \cos\rho & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

where $\alpha$, $\beta$, $\rho$ are the rotation angles that transform the real-world coordinate system, whose origin is the center-of-gravity position of the skull-side group of positioning balls, into the coordinate system whose origin is the equivalent optical center $O_c$ of the microscope lens (the main optical axis pointing outward is the $z'$ axis, and the $x'$ and $y'$ axes are parallel to the CMOS length and width respectively).
Step two: referring to the schematic diagram of coordinate calculation in the camera imaging model shown in FIG. 5, an imaging plane lies at distance $f$ behind the equivalent optical center $O_c$, where $f$ is the equivalent back focus of the camera. For a point $(X_c, Y_c, Z_c)$ in three-dimensional space, its two-dimensional coordinates $(x, y)$ on the camera imaging plane (the CMOS) are calculated, i.e., the projection of the point through the camera optical center $O_c$ onto the imaging plane is simulated. Since the propagation direction of light passing through the equivalent optical center is unchanged, the principle of similar triangles gives:

$$x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c}.$$

The image coordinates on the CMOS have been mirror-transformed here, taking into account the inverted nature of the imaging, and thus agree with the coordinate signs in the camera coordinate system.
Step three: a digital camera uses a CMOS as its imaging plane and images onto discrete pixel points, yielding a digital image. The imaging-plane coordinate system must therefore be discretized into an array of pixels. Since digital images conventionally take the upper-left corner as the origin, the origin of the $(x, y)$ coordinate system (at the center point of the image) must be translated by a vector $(u_0, v_0)$ to the upper-left corner of the image; $(x, y)$ is then discretized into a pixel point of the digital image:

$$u = \frac{x}{\mathrm{d}x} + u_0, \qquad v = \frac{y}{\mathrm{d}y} + v_0,$$

where $\mathrm{d}x$ and $\mathrm{d}y$ represent the actual size of each pixel in the $x$ and $y$ directions (unit: mm/pixel), determined by the size of each photosensitive element in the camera sensor.

Through the above three steps, a point $(X_w, Y_w, Z_w)$ in the world coordinate system (unit: mm) is converted into a point $(u, v)$ in the pixel coordinate system (unit: pixels). If the value of $u$ or $v$ falls outside the pixel range that the CMOS can acquire, the currently calculated Raman point is not in the image, and the point is not marked. The three-dimensional coordinates of the Raman measurement points can thus be projected into the microscope imaging picture in real time.
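To make the three-step conversion concrete, here is a minimal Python sketch under our own naming; the rotation convention and parameter names (f, dx, dy, u0, v0) follow the reconstruction above and are not the patent's notation:

```python
import numpy as np

def euler_to_R(alpha, beta, rho):
    """R = Rx(alpha) @ Ry(beta) @ Rz(rho), standard right-handed rotations."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cr, sr = np.cos(rho), np.sin(rho)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def world_to_pixel(P_w, R, T, f, dx, dy, u0, v0, width, height):
    """Steps 1-3: world point (mm) -> camera frame -> imaging plane -> pixel.
    Returns (u, v), or None when the point falls outside the CMOS range."""
    Xc, Yc, Zc = R @ np.asarray(P_w, dtype=float) + T   # step 1
    x = f * Xc / Zc                                     # step 2: similar
    y = f * Yc / Zc                                     #         triangles
    u = x / dx + u0                                     # step 3: discretize,
    v = y / dy + v0                                     #         shift origin
    if 0 <= u < width and 0 <= v < height:
        return u, v
    return None
```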
Step S106: acquiring a preoperative medical image, translating the world coordinate system so that it and the coordinate system of the preoperative medical image overlap at the coordinate origin, and displaying the pixel positions corresponding to the Raman points in the preoperative medical image.
In some embodiments, one of the following may also be included: identifying the position of the lesion based on the preoperative medical image and displaying it in the preoperative medical image; calculating the viewing angle of the microscope and displaying the microscope's view cone in the preoperative medical image; and displaying the position of the Raman probe in the preoperative medical image, projected according to a given viewing-angle direction.
Referring to the schematic diagram of the navigation system and fluorescence imaging display interface shown in FIG. 6, this embodiment also provides a real-time superposition algorithm for the preoperative three-dimensional reconstructed medical image, the microscope viewing angle, and the Raman probe position. FIG. 6 also shows the overall display framework of the software.
Through preoperative medical image scanning data such as CT (computed tomography), PET-CT (positron emission tomography-computed tomography), or MRI (magnetic resonance imaging), the preoperative three-dimensional reconstructed medical image can be obtained by reconstruction with existing algorithms. The real-world three-dimensional coordinate system (i.e., the world coordinate system) is translated so that its origin becomes a fixed point on the patient's skull; since the units are uniformly millimeters, the translated real-world coordinate system and the three-dimensional reconstructed medical-image coordinate system can be overlapped at the coordinate origin. The image is then projected onto the screen in any given viewing-angle direction. The preoperative medical image can generally give the location and range of the lesion according to existing mature image segmentation and recognition algorithms, as shown by the white area in the left panel of FIG. 6.
The center-position coordinates of the microscope lens and its viewing-angle direction vector are obtained through the spatial three-dimensional coordinate algorithm. The relationship between the microscope's field angle and the object focus $u$ can be measured in advance, so the field angle can be calculated once an arbitrary object focus $u$ is known. Combining the above information, the viewing-angle light cone of the microscope is calculated in real time and projected onto the screen in any given viewing-angle direction, as shown by the light-cone projection in the left panel of FIG. 6.
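A minimal sketch of evaluating this pre-measured focus-to-field-angle relationship at runtime; the calibration pairs and the use of linear interpolation are assumptions for illustration, not values from the patent:

```python
import numpy as np

# Hypothetical calibration table: object focus (mm) vs. field angle (degrees),
# measured in advance for the specific microscope.
FOCUS_MM = np.array([200.0, 250.0, 300.0, 350.0, 400.0])
FOV_DEG = np.array([12.0, 10.5, 9.2, 8.1, 7.3])

def field_angle(u_mm):
    """Field angle for an arbitrary object focus u, by linear interpolation."""
    return float(np.interp(u_mm, FOCUS_MM, FOV_DEG))
```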
By means of the spatial three-dimensional coordinate algorithm, the coordinates of the Raman probe's measurement point and the probe's pointing direction can be represented by a line segment in the three-dimensional image, and the position of the Raman probe can then be projected onto the screen in any given viewing-angle direction, as shown by the line-segment projection in the left panel of FIG. 6.

The coordinates of the probe's measurement point are recorded after each Raman spectrum measurement and marked on the three-dimensional reconstructed medical image, as shown by the marked points in the left panel of FIG. 6.
The embodiment of the invention provides a processing method of a surgical navigation system in which the optical positioning system monitors in real time the spatial positions of the positioning balls fixed on the fluorescence surgical microscope, on the patient's skull side, and on the Raman spectrum probe; the software system of the invention calculates the relative positions and spatial angles among the three, providing data support for subsequently determining the position of each Raman spectrum measurement point in the preoperative three-dimensional reconstructed image and determining its two-dimensional projection position under the microscope viewing angle.
Embodiment two:
This embodiment provides another processing method of a surgical navigation system, implemented on the basis of the above embodiment. Referring to the flowchart of another processing method of a surgical navigation system shown in FIG. 7, the method includes the following steps:
Step S702: acquiring the coordinate positions of a plurality of groups of positioning balls based on the optical positioner, and calculating the barycentric coordinates of each group and the unit normal vector of the plane determined by each group; wherein positioning balls are mounted on the fluorescence surgical microscope, on the patient's skull side, and on the Raman spectrum probe.
Step S704: determining, as the world coordinate system, a three-dimensional coordinate system whose origin is the barycentric position of the group of positioning balls on the patient's skull side; acquiring the spatial coordinates of each Raman point in the world coordinate system; and calculating the position of the pixel corresponding to each Raman point on the image based on those spatial coordinates.
Step S706: acquiring a preoperative medical image, translating the world coordinate system so that it and the coordinate system of the preoperative medical image overlap at the coordinate origin, and displaying the pixel positions corresponding to the Raman points in the preoperative medical image.
Step S708: inputting the Raman spectrum data acquired by the Raman probe into the pre-established deep learning model of the Raman spectrum diagnosis system and outputting the lesion classification result; and displaying the lesion classification result at the pixel position corresponding to each Raman point in the preoperative medical image.
This embodiment also provides an image fusion algorithm for the fluorescence image and the Raman measurement results. The Raman spectrum diagnosis system inputs the Raman spectrum data acquired by the Raman probe into a pre-established deep learning model and infers the lesion classification result, which includes cancer negative/positive and the molecular type; the positions of the Raman measurement points obtained by the two-dimensional projection algorithm under the microscope viewing angle are then marked in different colors according to the different classification results.
In some embodiments, the Raman points currently displayed in the preoperative medical image may be determined as the camera lens moves, and the lesion classification result corresponding to each currently displayed Raman point is displayed at that point's corresponding pixel position in the preoperative medical image.
One feature of the fusion algorithm is that the Raman measurement points in the right panel of FIG. 6 can be updated in real time: when the imaging content of the right panel changes as the camera lens moves, the stored Raman points are first traversed one by one in real time; the spatial coordinates of each Raman point are then mapped into pixel coordinates under the current imaging content using the two-dimensional projection algorithm for Raman measurement points under the microscope viewing angle; if the pixel coordinate value lies within the current image size, the Raman point is in the right panel of FIG. 6 and is marked in the figure; otherwise the point is not in the right panel and no marking operation is needed. In this way the positions of the Raman measurement points are updated when the relative position of the microscope and the patient's brain changes, or when measurement points are newly added. Clicking on any measurement-point location in the three-dimensional or two-dimensional view displays its detailed Raman spectrum in the left panel of FIG. 6.
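A sketch of this real-time marker-update loop, reusing the world_to_pixel function from the projection sketch above; the data layout (a list of world-point/classification pairs) is our assumption:

```python
def visible_markers(raman_points, R, T, f, dx, dy, u0, v0, width, height):
    """Traverse stored (world_point, lesion_class) pairs for the current
    camera pose; keep only the points that land inside the image."""
    markers = []
    for P_w, lesion_class in raman_points:
        pix = world_to_pixel(P_w, R, T, f, dx, dy, u0, v0, width, height)
        if pix is not None:                 # inside the current image size
            markers.append((pix, lesion_class))
    return markers
```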
The method provided by the embodiment of the invention specifically provides the following contents:
(1) The method comprises a spatial three-dimensional coordinate algorithm, a two-dimensional projection algorithm for Raman measurement points under the microscope viewing angle, a real-time superposition algorithm for the preoperative three-dimensional reconstructed medical image, the microscope viewing angle, and the Raman probe position, and an image fusion algorithm for the fluorescence image and the Raman measurement results.
(2) The relationship between the microscope imaging field of view and the Raman measurement points in two dimensions is analyzed through three-dimensional coordinate calculation and projection.
(3) After normalization and standardization preprocessing, the Raman spectrum enters the model in the form of a two-dimensional image for training, to overcome the influence of hardware differences among different Raman spectrometers and improve the universality of the model.
(4) The optical positioning system links the fluorescence surgical microscope, the Raman diagnosis system, and the preoperative three-dimensional reconstruction system in series, solving the tissue penetration-depth problem while also solving the problems of memorizing and tracking Raman measurement-point positions and of precisely fusing Raman and fluorescence images.
According to the embodiment of the invention, the optical positioner detects the positioning balls fixed on the surgical microscope, on the Raman spectrum probe, and on the side of the patient's head, and determines the relative positions of the three, thereby establishing a three-dimensional coordinate system. Combined with the three-dimensional brain images reconstructed before the operation from CT/MRI/PET-CT, the coordinates of the fluorescence imaging area, the Raman detection points, and the brain can be determined during the operation, which solves the tissue penetration-depth problem and provides a coordinate-calculation basis for fusing the fluorescence and Raman images and for comparing the Raman measurement points with the tumor area diagnosed in the preoperative images. The approximate range of the tumor is rapidly determined by real-time large-area fluorescence imaging; areas where fluorescence is inconclusive are then scanned and diagnosed precisely by artificial-intelligence Raman measurement guided by the three-dimensional graphics, while the Raman measurement points are registered in the three-dimensional reconstruction space of the preoperative images to track the progress of the operation in real time. The operator can refer to the relationship between the preoperative three-dimensional reconstructed image and the Raman measurement points, and between the real-time two-dimensional fluorescence image and the Raman measurement points, to judge clearly whether the tumor has been completely resected. Meanwhile, the artificial-intelligence algorithm, based on deep learning and a large number of previously trained brain glioma and normal brain tissue samples, can judge whether a measured point is tumor and can also make a rapid pre-diagnosis of the tumor's molecular typing, so that the doctor can adjust the extent of surgical resection in time according to the tumor's molecular typing.
Embodiment III:
This embodiment provides a software system, implemented on the basis of the above embodiments, for executing the processing method of the surgical navigation system provided by those embodiments.
The software system in this embodiment may include the spatial three-dimensional coordinate algorithm provided in the foregoing embodiments, the two-dimensional projection algorithm for Raman measurement points under the microscope viewing angle, the real-time superposition algorithm for the preoperative three-dimensional reconstructed medical image, the microscope viewing angle, and the Raman probe position, the image fusion algorithm for the fluorescence image and the Raman measurement results, and the overall software display framework.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the software system described above may refer to the corresponding process in the foregoing embodiment of the processing method of the surgical navigation system, which is not described herein again.
Embodiment four:
This embodiment provides a surgical navigation system, implemented on the basis of the above embodiments, comprising: a fluorescence surgical microscope, a Raman spectrum measurement system, a Raman spectrum diagnosis system, an optical positioning system, and the software system provided by the foregoing embodiments.
Referring to the schematic diagram of a surgical navigation system shown in FIG. 8, there are shown: the fluorescence surgical microscope 1, the optical positioning system 2, the Raman spectrum probe 3, the first positioning ball group 4, the second positioning ball group 5, the third positioning ball group 6, the near-infrared positioning laser emission position 201, the visible light camera 202, and the near-infrared laser positioning camera combination 203.
The fluorescence surgical microscope system consists of a surgical microscope optical path system, a laser, an optical fiber, an optical filter combination and a camera. The laser preferably emits at 785 nm and is guided into the filter combination through the optical fiber. The filter combination comprises a notch dichroic mirror and a notch filter, with a notch center wavelength of preferably 785 nm, and is arranged directly below the microscope objective lens. The laser is reflected through 90 degrees by the notch dichroic mirror (set at 45 degrees) so as to be coaxial with the microscope field of view; when a fluorescent agent is present in the field of view, it is excited by the excitation light and fluoresces. In addition, white-light illumination is led out to the front end of the filter combination through an optical fiber and an annular illumination light guide, so that it can operate at the same time without interfering with fluorescence imaging. The fluorescence and the reflected white light pass directly through the notch dichroic mirror and the notch filter, which filter out the stray reflected excitation light, then enter the surgical microscope optical system and finally reach the interior of the camera. The camera contains a beam-splitting prism with a band-pass coating, so that white light and fluorescence can be separated and focused onto the CMOS chip to form white-light and fluorescence images.
The Raman spectrum measurement system consists of a fiber-lens Raman spectrum probe, a Raman spectrometer and a laser built into the spectrometer; it can excite and read Raman spectral signals in biological tissue, process them into digital signals and send them to the Raman spectrum diagnosis system.
The Raman spectrum diagnosis system inputs the acquired Raman spectrum digital signals into a pre-established deep learning model, and a classification result is obtained through model inference; the classification result includes whether a malignant tumor is present and its molecular typing. The deep learning model is based on a one-dimensional convolutional neural network, or on a YOLO, VGG or ResNet model. The Raman spectra are normalized and standardized in preprocessing and enter the model as two-dimensional images for training, which overcomes the influence of hardware differences between Raman spectrometers (such as differences in the wavenumber step size between data points caused by differences in slit width, and differences in digital signal intensity caused by differences in detector sensitivity) and improves the universality of the model.
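As a minimal sketch of this stage — not the patented model — the snippet below resamples a spectrum onto a common wavenumber grid, normalizes and standardizes it, and runs it through a small one-dimensional CNN. The grid, layer sizes and four-class output are assumptions for illustration; the patent also permits YOLO/VGG/ResNet variants and a two-dimensional image input.

```python
import numpy as np
import torch
import torch.nn as nn

def preprocess(intensity, wavenumbers, grid=np.arange(400, 1800, 2)):
    """Resample onto a common wavenumber grid (removes step-size
    differences between spectrometers), then normalize and standardize
    to suppress detector-sensitivity differences."""
    spec = np.interp(grid, wavenumbers, intensity)
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-12)
    spec = (spec - spec.mean()) / (spec.std() + 1e-12)
    return torch.tensor(spec, dtype=torch.float32)[None, None, :]

class Raman1DCNN(nn.Module):
    """Minimal 1-D CNN; the head outputs malignancy/molecular-typing
    classes (four classes assumed here for illustration)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, 9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.LazyLinear(n_classes)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Hypothetical spectrum, for illustration only.
wn = np.linspace(350, 1900, 1024)
raw = np.random.rand(1024)
logits = Raman1DCNN()(preprocess(raw, wn))
print(logits.shape)  # torch.Size([1, 4])
```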
The optical positioning system monitors in real time the spatial positions of the positioning balls fixed on the fluorescence surgical microscope, on the side of the patient's skull and on the Raman spectrum probe, calculates the relative positions and spatial angles among the three through the software system, and provides data support for subsequently determining the position of each Raman spectrum measurement point in the preoperative three-dimensional reconstructed image and its two-dimensional projection position under the microscope view angle.
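The geometric core of this calculation can be sketched as follows: the barycenter of a three-ball group and the unit normal of the plane it defines are obtained from the tracked coordinates, and the probe tip is then located with the pre-measured distance parameter. This is a simplified sketch under stated assumptions: the barycenter-to-tip line is taken along the ball-plane normal (the full method also applies the pre-measured angle parameters), and all coordinates are hypothetical.

```python
import numpy as np

def group_barycenter_and_normal(b1, b2, b3):
    """Barycenter of a three-ball positioning group and the unit
    normal of the plane the three balls define."""
    g = (b1 + b2 + b3) / 3.0
    n = np.cross(b2 - b1, b3 - b1)
    return g, n / np.linalg.norm(n)

def probe_tip(g, n, d):
    """Probe-tip estimate: offset the barycenter by the pre-measured
    distance d along the ball-plane normal (simplification; the full
    method also uses the space angle and in-plane angle parameters)."""
    return g + d * n

# Hypothetical tracked coordinates of the probe's ball group (mm).
b1, b2, b3 = (np.array([ 0.0,  0.0, 0.0]),
              np.array([40.0,  0.0, 0.0]),
              np.array([20.0, 30.0, 0.0]))
g, n = group_barycenter_and_normal(b1, b2, b3)
print(probe_tip(g, n, d=120.0))  # estimated Raman probe tip
```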
The software system comprises a spatial three-dimensional coordinate algorithm, a two-dimensional projection algorithm for Raman measurement points under the microscope view angle, a real-time superposition algorithm for the preoperative three-dimensional reconstructed medical image, the microscope view angle and the Raman probe position, an image fusion algorithm for the fluorescence image and the Raman measurement results, and an overall software display framework.
In addition, the positioning method for the Raman probe in this embodiment may be replaced with CT guidance or MRI guidance; however, because such apparatus is bulky and it is difficult to track the position and view angle of the microscope with it, an optical positioner is preferred.
The excitation band of the fluorescence surgical microscope can be replaced with the excitation wavelength of other clinically common fluorescent agents, such as 405±20 nm for 5-ALA, 660±10 nm for methylene blue and 460-488 nm for fluorescein sodium; correspondingly, the notch filter combination is selected to match the chosen excitation wavelength.
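This agent-to-band correspondence can be captured in a simple lookup using the values quoted above; the dictionary structure and the helper below are only illustrative, and the default 785 nm entry refers to the laser line described earlier.

```python
# Excitation bands quoted above (values in nm); the notch/dichroic
# set is chosen to match the selected band.
EXCITATION_BANDS = {
    "default (785 nm laser)": (785, 785),
    "5-ALA":                  (385, 425),   # 405 +/- 20 nm
    "methylene blue":         (650, 670),   # 660 +/- 10 nm
    "fluorescein sodium":     (460, 488),
}

def pick_notch_center(agent: str) -> float:
    """Center wavelength for the notch filter combination (illustrative)."""
    lo, hi = EXCITATION_BANDS[agent]
    return (lo + hi) / 2.0

print(pick_notch_center("5-ALA"))  # 405.0
```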
The Raman probe can be a fiber-lens probe or a filter-and-lens probe; however, for the narrow operative space of brain glioma surgery, small volume and easy handling are advantages, so the fiber-lens probe is preferred.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described surgical navigation system may refer to the corresponding process in the foregoing embodiment of the processing method of the surgical navigation system, which is not described herein again.
Embodiment V:
The embodiment of the invention also provides an electronic device for running the processing method of the surgical navigation system. Referring to fig. 9, the electronic device includes a memory 100 and a processor 101, where the memory 100 is configured to store one or more computer instructions, which are executed by the processor 101 to implement the processing method of the surgical navigation system described above.
Further, the electronic device shown in fig. 9 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a high-speed random access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bi-directional arrow is shown in fig. 9, but this does not mean that there is only one bus or one type of bus.
The processor 101 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 101 or by instructions in the form of software. The processor 101 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be performed directly by a hardware decoding processor, or by a combination of the hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and, in combination with its hardware, performs the steps of the method of the previous embodiments.
The embodiment of the invention also provides a computer-readable storage medium storing computer-executable instructions that, when called and executed by a processor, cause the processor to implement the processing method of the surgical navigation system; for the specific implementation, reference may be made to the method embodiment, which is not repeated herein.
The computer program product of the processing method, the software system and the surgical navigation system provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method in the foregoing method embodiment, and for the specific implementation reference may be made to the method embodiment, which is not repeated herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and/or apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be, for example, fixed, detachable or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal communication between two elements. The specific meaning of the above terms in the present invention will be understood by those of ordinary skill in the art according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method of the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present invention and are not intended to limit its protection scope. Although the present invention has been described in detail with reference to the foregoing examples, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (5)
1. A software system for executing a method of processing a surgical navigation system, the method of processing a surgical navigation system comprising:
measuring a plurality of groups of distance parameters and angle parameters in advance;
Acquiring coordinate positions of a plurality of groups of positioning balls based on an optical positioning instrument, and calculating the barycentric coordinates of each group of positioning balls and the unit normal vector of the plane determined by each group of positioning balls; wherein the positioning balls are arranged on the fluorescence surgical microscope, on the side of the patient's skull and on the Raman spectrum probe; calculating, based on the barycentric coordinates of each group of positioning balls, the unit normal vector of the group of positioning balls, the distance parameters and the angle parameters, the connecting-line vector from the barycenter position of each group of positioning balls to the corresponding center point, the center point coordinates and the central axis direction; the center points include: the exit center point of the microscope objective, the center point of the top of the patient's skull and the tip of the Raman probe; the central axes include: the central axis of the microscope field of view, the vertical axis of the patient's skull, and the central axis of the Raman probe;
Determining, as a world coordinate system, a three-dimensional coordinate system whose origin is the barycentric coordinate position of the group of positioning balls on the side of the patient's skull; acquiring the space coordinates of each Raman point in the world coordinate system, and calculating the position of the pixel point corresponding to each Raman point on the image based on the space coordinates of each Raman point in the world coordinate system;
Acquiring a preoperative medical image, translating the world coordinate system so that the world coordinate system and the coordinate system of the preoperative medical image overlap at the coordinate origin, and displaying, in the preoperative medical image, the positions of the pixel points corresponding to the Raman points on the image;
The step of calculating the barycentric coordinates of each group of positioning balls and the unit normal vector of the plane determined by the positioning balls, based on the coordinate positions of the plurality of groups of positioning balls acquired by the optical positioning instrument, comprises the following steps: adjusting the default reference frame of the optical positioning instrument so that the vertical direction of the skull is the positive z axis, the forward direction of the sagittal axis is the positive x axis and the left direction of the frontal axis is the positive y axis, with the barycenter position of the group of positioning balls on the side of the patient's skull as the origin; determining the coordinates of each group of positioning balls based on the optical positioning instrument, and determining the barycentric coordinates and the unit normal vector of each group of positioning balls based on those coordinates;
The distance parameters and the angle parameters include: the distances from the barycenter of each group of positioning balls to the corresponding center point among the exit center point of the microscope objective, the center point of the top of the patient's skull and the tip of the Raman probe; the spatial included angles between these distance connecting lines and the normal directions of the corresponding groups of positioning balls; the included angles between the projections of the distance connecting lines on the planes of the positioning-ball groups and the line connecting the first ball and the second ball in each group; and the included angle between the line connecting the first ball and the second ball and the line connecting the first ball and the third ball in each group;
when the positioning balls are installed, the normal direction of each group of positioning balls is parallel to the corresponding one of the central axis of the microscope field of view, the vertical axis of the patient's skull and the central axis of the Raman probe;
The step of calculating the positions of the pixel points corresponding to the Raman points on the image, based on the space coordinates of the Raman points in the world coordinate system, comprises the following steps: converting the space coordinates of each Raman point in the world coordinate system into a camera coordinate system whose coordinate origin is the equivalent optical center of the microscope lens; calculating the two-dimensional coordinates, on the camera imaging plane, of the space coordinates of each Raman point in the camera coordinate system, the camera imaging plane being arranged behind the equivalent optical center at the equivalent back focal distance; and translating the coordinate system of the two-dimensional coordinates to obtain the coordinates of each Raman point in the pixel coordinate system as the positions of the pixel points corresponding to the Raman points on the image.
2. The software system of claim 1, wherein after the step of translating the world coordinate system such that the world coordinate system overlaps the coordinate system of the preoperative medical image at the origin of coordinates, the method of processing of the surgical navigation system further comprises one of:
identifying a location of a lesion based on the pre-operative medical image, displaying the location of the lesion in the pre-operative medical image;
Calculating the microscope field-of-view angle, and displaying the field-of-view light cone of the microscope in the preoperative medical image;
And displaying the position of the Raman probe in the preoperative medical image, so that the Raman probe projects according to a given viewing angle direction.
3. The software system according to claim 1 or 2, wherein the processing method of the surgical navigation system further comprises:
inputting Raman spectrum data acquired by a Raman probe into a pre-established deep learning model based on a Raman spectrum diagnosis system, and outputting focus classification results;
and displaying the focus classification result at the position of each Raman point corresponding to the pixel point in the preoperative medical image.
4. A software system according to claim 3, wherein the step of displaying the lesion classification result at the location of each raman point corresponding pixel point in the pre-operative medical image comprises:
Determining a Raman point currently displayed in the preoperative medical image when a camera lens moves;
And displaying the focus classification result corresponding to the currently displayed Raman point at the position of the pixel point corresponding to the currently displayed Raman point in the preoperative medical image.
5. A surgical navigation system, the surgical navigation system comprising: a fluorescence surgical microscope, a raman spectrometry system, a raman spectroscopic diagnostic system, an optical positioning system and a software system according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410488038.1A CN118078443B (en) | 2024-04-23 | 2024-04-23 | Processing method of operation navigation system, software system and operation navigation system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118078443A CN118078443A (en) | 2024-05-28 |
CN118078443B (en) | 2024-08-06
Family
ID=91155258
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110584783A (en) * | 2019-10-14 | 2019-12-20 | 中国科学技术大学 | Surgical navigation system |
CN111465351A (en) * | 2017-12-11 | 2020-07-28 | 豪洛捷公司 | Ultrasonic localization system with advanced biopsy site markers |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015154187A1 (en) * | 2014-04-08 | 2015-10-15 | Polyvalor, Limited Partnership | System and method for assessing a cancer status of biological tissue |
WO2020207583A1 (en) * | 2019-04-10 | 2020-10-15 | Brainlab Ag | Method of sampling relevant surface points of a subject |
CN113876426B (en) * | 2021-10-28 | 2023-04-18 | 电子科技大学 | Intraoperative positioning and tracking system and method combined with shadowless lamp |
CN115281583B (en) * | 2022-09-26 | 2022-12-13 | 南京诺源医疗器械有限公司 | Navigation system for medical endoscopic Raman spectral imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||