CN115137988A - Medical navigation method - Google Patents

Medical navigation method

Info

Publication number
CN115137988A
Authority
CN
China
Prior art keywords
coordinate system
point cloud
data
binocular camera
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210757984.2A
Other languages
Chinese (zh)
Inventor
杨镇郡
陈林俐
张延慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yinhe Fangyuan Technology Co ltd
Original Assignee
Beijing Yone Galaxy Technology Co ltd
Beijing Yinhe Fangyuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yone Galaxy Technology Co ltd, Beijing Yinhe Fangyuan Technology Co ltd filed Critical Beijing Yone Galaxy Technology Co ltd
Priority to CN202210757984.2A priority Critical patent/CN115137988A/en
Publication of CN115137988A publication Critical patent/CN115137988A/en
Pending legal-status Critical Current

Classifications

    • A61N 2/02 Magnetotherapy using magnetic fields produced by coils, including single turn loops or electromagnets
    • A61N 2/006 Magnetotherapy specially adapted for a specific therapy, for magnetic stimulation of nerve tissue
    • A61B 5/0035 Features or image-related aspects of imaging apparatus classified in A61B 5/00, adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
    • A61B 5/0036 Features or image-related aspects of imaging apparatus classified in A61B 5/00, including treatment, e.g. using an implantable medical device, ablating, ventilating
    • A61B 5/0042 Features or image-related aspects of imaging apparatus classified in A61B 5/00, adapted for image acquisition of a particular organ or body part, for the brain

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Neurology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a medical navigation method, which relates to the technical field of medical navigation and positioning and comprises: establishing a transformation matrix between a binocular camera coordinate system and a model coordinate system; acquiring MRI data and converting the MRI data into MRI point cloud data; obtaining face point cloud data from environment point cloud data and environment RGB data; registering the MRI point cloud data with the face point cloud data to obtain a point cloud matching matrix; establishing a calibration matrix between the binocular camera coordinate system and a point cloud camera coordinate system; and displaying the positional relationship between a model coil and the MRI data on a computer through a dual-view navigation algorithm and determining the target position according to that relationship. By using a binocular camera together with a point cloud camera and determining the target position from the positional relationship between the model coil and the MRI data, the invention tracks the target position more effectively, provides more accurate and more convenient navigation guidance, improves treatment efficiency and treatment effect, and reduces the difficulty of the navigation operation.

Description

Medical navigation method
Technical Field
The invention relates to the technical field of medical navigation and positioning, in particular to a medical navigation method.
Background
With the rapid development of society, the pace of life and work has accelerated and the pressure on modern people has grown; mental disorders such as depression and anxiety are now frequently encountered in daily life, and their incidence is rising day by day.
In 1985, Barker et al. placed an energized stimulation coil over a subject's head, magnetically stimulated the motor area of the cerebral cortex, observed twitching of the subject's hand muscles, and recorded motor evoked potentials (MEPs) of the abductor muscle of the little finger via surface electrodes; this form of stimulation is Transcranial Magnetic Stimulation (TMS). In 1987, Amassian et al. demonstrated experimentally that the orientation of the stimulation coil affects the stimulation effect of TMS on the cerebral cortex. In 1993, Höflich et al. applied transcranial magnetic stimulation to the treatment of depression and showed experimentally that TMS has a certain therapeutic effect on depression. In 2005, Edwards et al. demonstrated experimentally that low-intensity repetitive transcranial magnetic stimulation can alter the excitability of cerebral cortical neurons. In 2007, Joo EY et al. found that long-term, low-frequency repetitive transcranial magnetic stimulation relieves the symptoms of epilepsy. In 2015, Ku Y et al. used single-pulse transcranial magnetic stimulation to stimulate the sensory cortex and the lateral posterior parietal cortex of the brain and found that this form of stimulation intervenes in the cognitive function of the brain.
However, many difficulties remain in the clinical application of transcranial magnetic stimulation therapy, which greatly restricts the application and popularization of the technique in the treatment of mental and neurological diseases. The main difficulties are as follows. First, positioning of the TMS stimulation coil depends on the physician's experience and skill; it is highly subjective, and inaccurate coil placement compromises the treatment effect. Second, during coil placement the patient's brain structure is invisible, and because every head is shaped differently, positioning caps lack universality and their accuracy is poor. Third, each transcranial magnetic stimulation session lasts 15-30 minutes, and any slight movement of the patient's head during that time changes the placement of the stimulation coil; if the head is fixed instead, the muscles contract and tense as the stimulation time increases and the patient feels discomfort. These problems are the main factors currently limiting the accuracy of transcranial magnetic stimulation therapy and are the technical issues that must be solved before the technique can be widely adopted.
With the rapid development of medical imaging and medical image processing technology, image-guided surgical systems have emerged, allowing doctors to analyze the structure of an organ or tissue and its surroundings intuitively and accurately by means of a three-dimensional reconstruction of medical images. Image-guided surgery systems use intraoperative images of the patient together with three-dimensional models of the lesion and surrounding tissue to guide a clinical procedure in real time. During surgery, the image-guidance software can accurately display details of the patient's anatomy and the three-dimensional space surrounding the lesion. In image guidance, a head image is acquired with medical imaging technology; the acquired image is segmented and three-dimensionally reconstructed to build a head model containing the brain tissue; the stimulation target is planned on the reconstructed three-dimensional brain model; and image registration is used to map the target from the brain model onto the patient's head during the operation, thereby guiding the doctor in positioning the target.
The most representative commercialized TMS navigation systems currently use optical auxiliary navigation. In 2008, Lars Matthäus et al. combined a Polaris Spectra optical tracking device with an Adept Viper s850 six-axis robot to build a robotic TMS treatment system. The robot holds the stimulation coil, the optical tracking device is fixed on a bracket, and a marker fixed on the subject's head allows the optical tracking device to locate the head coordinates. Such optical navigation and positioning systems make the transcranial magnetic stimulation procedure visual and improve the positioning accuracy of the stimulation coil to a certain extent. However, current navigation is relatively cumbersome to operate: if the camera is accidentally moved, problems such as the need for re-registration arise, and the single viewing angle also makes operation inconvenient. In addition, using an infrared binocular camera alone requires the patient to wear a reflective marker, which may not be suitable for patients with autism and similar conditions.
Disclosure of Invention
In view of this, the invention provides a medical navigation method that uses a binocular camera and a point cloud camera and determines the target position from the positional relationship between a model coil and MRI (Magnetic Resonance Imaging) data through a point cloud matching algorithm, so that the target position can be tracked more effectively, tracking accuracy is improved, more accurate and more convenient navigation guidance is provided, treatment efficiency and treatment effect are improved, and the difficulty of the navigation operation is reduced.
The application has the following technical scheme:
the application provides a medical navigation method, which comprises the following steps:
establishing a conversion matrix of a binocular camera coordinate system and a model coordinate system;
acquiring MRI data and converting the MRI data into MRI point cloud data;
acquiring environmental point cloud data and environmental RGB data by using a point cloud camera, and obtaining human face point cloud data through the environmental point cloud data and the environmental RGB data;
registering the MRI point cloud data and the face point cloud data to obtain a point cloud matching matrix;
establishing a calibration matrix between a binocular camera coordinate system and a point cloud camera coordinate system;
and displaying the position relation between the model coil and the MRI data on a computer through a double-view navigation algorithm by using the conversion matrix of the binocular camera coordinate system and the model coordinate system, the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system, and the point cloud matching matrix, and determining the target position according to the position relation.
Optionally, wherein:
the establishing of the conversion matrix of the binocular camera coordinate system and the model coordinate system specifically comprises the following steps:
establishing a model coil through a computer, and setting at least 4 first calibration points in the model coil;
disposing a first marker on the stimulation coil;
setting second calibration points which correspond to at least 4 first calibration points in the model coil one to one on the stimulating coil;
obtaining a transformation matrix of the first marker coordinate system and the model coordinate system according to the coordinates of the first calibration point in the model coordinate system and the coordinates of the second calibration point in the first marker coordinate system;
obtaining a transformation matrix of the first marker coordinate system and the binocular camera coordinate system according to the binocular camera;
and obtaining a conversion matrix of the binocular camera coordinate system and the model coordinate system according to the conversion matrix of the first marker coordinate system and the model coordinate system and the conversion matrix of the first marker coordinate system and the binocular camera coordinate system.
Optionally, wherein:
the face point cloud data is obtained through the environment point cloud data and the environment RGB data, and the method specifically comprises the following steps:
extracting face data from the environmental RGB data through a face detection algorithm;
registering the environmental point cloud data into an RGB coordinate system, and removing non-face point cloud data according to the face data to obtain face point cloud data.
Optionally, wherein:
registering the MRI point cloud data and the face point cloud data to obtain a point cloud matching matrix, which specifically comprises the following steps:
performing rough matching on the MRI point cloud data and the human face point cloud data by using an RANSAC algorithm to obtain rough matching data;
and accurately matching the rough matching data by using an ICP (Iterative Closest Point) algorithm, and establishing the relation between the MRI point cloud data and the face point cloud data under the point cloud camera to obtain a point cloud matching matrix.
Optionally, wherein:
the establishment of the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system specifically comprises the following steps:
selecting at least 4 first coordinate points under the point cloud camera coordinate system;
selecting at least 4 second coordinate points under the binocular camera coordinate system;
and calculating a calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system according to the first coordinate point and the second coordinate point.
Optionally, wherein:
the establishment of the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system specifically comprises the following steps:
setting a third marker on the point cloud camera;
setting at least 4 third calibration points on the point cloud camera;
obtaining a transformation matrix of the third marker coordinate system and the point cloud camera coordinate system according to the coordinates of the third calibration point under the third marker coordinate system and the coordinates of the third calibration point under the point cloud camera coordinate system;
obtaining a transformation matrix of the coordinate system of the third marker and the coordinate system of the binocular camera according to the binocular camera;
and obtaining a calibration matrix between the coordinate system of the binocular camera and the coordinate system of the point cloud camera according to the transformation matrix of the coordinate system of the third marker and the coordinate system of the point cloud camera and the transformation matrix of the coordinate system of the third marker and the coordinate system of the binocular camera.
Optionally, wherein:
the dual-view navigation algorithm comprises the following steps:
obtaining the coordinates of the model coil under the binocular camera coordinate system according to the model coordinate system and the conversion matrix of the binocular camera coordinate system and the model coordinate system;
obtaining the coordinates of the model coil under the point cloud camera coordinate system according to the coordinates of the model coil under the binocular camera coordinate system and a calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system;
and obtaining the coordinates of the model coil under the MRI coordinate system according to the coordinates of the model coil under the point cloud camera coordinate system and the point cloud matching matrix, thereby obtaining the view in which the MRI data remains stationary and the model coil moves.
Optionally, wherein:
the dual view navigation algorithm further comprises:
obtaining the coordinates of the MRI data under the point cloud camera coordinate system according to the coordinates of the MRI data under the MRI coordinate system and the point cloud matching matrix;
obtaining the coordinates of the MRI data under the coordinate system of the binocular camera according to the coordinates of the MRI data under the coordinate system of the point cloud camera and a calibration matrix between the coordinate system of the binocular camera and the coordinate system of the point cloud camera;
and obtaining the coordinates of the MRI data under the model coordinate system according to the coordinates of the MRI data under the binocular camera coordinate system and the conversion matrix of the binocular camera coordinate system and the model coordinate system, thereby obtaining the view in which the model coil remains stationary and the MRI data moves.
Compared with the prior art, the medical navigation method provided by the invention at least realizes the following beneficial effects:
(1) According to the medical navigation method, the binocular camera and the point cloud camera are adopted, the point cloud matching algorithm is adopted, the position relation between the MRI data of the patient and the model coil is displayed through the computer, the target position is determined according to the position relation between the model coil and the MRI data, and therefore a doctor can be guided to find the optimal stimulation position. Therefore, the patient does not need to wear the reflective model, the range of applicable people can be enlarged, the operation difficulty is reduced, and the manual error is reduced.
(2) The medical navigation method adopts a dual-view algorithm so that the user can track the target position from two viewing angles. The target position can therefore be tracked more effectively, the inconvenience of a single viewing angle is avoided, tracking accuracy is improved, more accurate and more convenient navigation guidance is provided, treatment efficiency and treatment effect are improved, and the difficulty of the navigation operation is reduced.
Of course, it is not necessary for any product in which the present invention is practiced to achieve all of the above-described technical effects simultaneously.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart illustrating a medical navigation method according to an embodiment of the present application;
fig. 2 is a flowchart illustrating the establishment of a transformation matrix between a binocular camera coordinate system and a model coordinate system according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a model coil provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a structure of a stimulation coil provided by an embodiment of the present application;
fig. 5 is a flowchart illustrating a process of obtaining point cloud data of a human face according to an embodiment of the present disclosure;
fig. 6 is a flowchart illustrating a method for obtaining a point cloud matching matrix according to an embodiment of the present disclosure;
fig. 7 is a flowchart illustrating the establishment of a calibration matrix between a binocular camera coordinate system and a point cloud camera coordinate system according to an embodiment of the present disclosure;
fig. 8 is another flowchart illustrating establishing a calibration matrix between a binocular camera coordinate system and a point cloud camera coordinate system according to an embodiment of the present disclosure;
FIG. 9 is a flow chart of dual view navigation provided by an embodiment of the present application;
FIG. 10 is another flow chart of dual view navigation provided by embodiments of the present application.
Detailed Description
As used in the specification and in the claims, certain terms are used to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This specification and the claims do not distinguish between components that differ in name but not in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and should be interpreted to mean "including, but not limited to". "Substantially" means within an acceptable error range in which a person skilled in the art can solve the technical problem and substantially achieve the technical effect. Furthermore, the term "coupled" is intended to encompass any direct or indirect electrical coupling; thus, if a first device is coupled to a second device, the connection may be a direct electrical coupling or an indirect electrical coupling via other devices and connections. The description that follows presents preferred embodiments of the present application, but it is made for the purpose of illustrating the general principles of the application and not to limit the scope of the application. The protection scope of the present application shall be subject to the definitions of the appended claims.
The most representative commercialized TMS navigation systems currently use optical auxiliary navigation. In 2008, Lars Matthäus et al. combined a Polaris Spectra optical tracking device with an Adept Viper s850 six-axis robot to build a robotic TMS treatment system. The robot holds the stimulation coil, the optical tracking device is fixed on a bracket, and a marker fixed on the subject's head allows the optical tracking device to locate the head coordinates. Such optical navigation and positioning systems make the transcranial magnetic stimulation procedure visual and improve the positioning accuracy of the stimulation coil to a certain extent, but current navigation is relatively cumbersome to operate, and problems such as the need for re-registration arise if the camera is accidentally moved. In addition, using an infrared binocular camera alone requires the patient to wear a reflective marker, so it is not suitable for patients with autism and similar conditions.
In view of this, the invention provides a medical navigation method that uses a binocular camera and a point cloud camera and determines the target position from the positional relationship between a model coil and MRI data through a point cloud matching algorithm, so that the target position can be tracked more effectively, tracking accuracy is improved, more accurate and more convenient navigation guidance is provided, treatment efficiency and treatment effect are improved, and the difficulty of the navigation operation is reduced.
The embodiments are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a medical navigation method according to an embodiment of the present application, and referring to fig. 1, the medical navigation method according to the embodiment of the present application includes:
step 1: establishing a conversion matrix of a binocular camera coordinate system and a model coordinate system;
and 2, step: acquiring MRI data, and converting the MRI data into MRI point cloud data;
and step 3: acquiring environmental point cloud data and environmental RGB data by using a point cloud camera, and obtaining face point cloud data through the environmental point cloud data and the environmental RGB data;
and 4, step 4: registering the MRI point cloud data and the human face point cloud data to obtain a point cloud matching matrix;
and 5: establishing a calibration matrix between a binocular camera coordinate system and a point cloud camera coordinate system;
and 6: and displaying the position relation between the model coil and the MRI data on a computer by using a conversion matrix of the binocular camera coordinate system and the model coordinate system, a calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system and a point cloud matching matrix through a double-view navigation algorithm, and determining the target position according to the position relation.
Specifically, referring to fig. 1, the medical navigation method provided in the embodiment of the present application includes a binocular camera and a point cloud camera, and establishes a transformation matrix between a coordinate system of the binocular camera and a coordinate system of a model through step 1, and when establishing the transformation matrix between the coordinate system of the binocular camera and the coordinate system of the model, a model coil having a shape consistent with that of a stimulation coil needs to be established first, and then establishes the transformation matrix between the coordinate system of the binocular camera and the coordinate system of the model by means of the stimulation coil and the model coil.
It should be noted that, in order for the binocular camera to recognize the stimulation coil, a marker needs to be disposed on the stimulation coil; for example, a first marker is fixed on the back of the stimulation coil, and the first marker may consist of 4 first reflective balls and a first bracket. The stimulation coil carrying the marker is placed within the visible range of the binocular camera. The number of first reflective balls may also be different, for example 5 or 6, which is not limited in this application.
Step 2 acquires MRI data and converts the MRI data into MRI point cloud data. MRI (Magnetic Resonance Imaging) data refers to magnetic resonance images, and point cloud data refers to the set of points obtained by acquiring the spatial coordinates of sampling points on the surface of an object. Step 3 acquires environment point cloud data and environment RGB data with the point cloud camera and obtains the face point cloud data from them. The environment point cloud data contains the face point cloud data as well as other objects around the face, i.e. non-face point cloud data, and the environment RGB data contains face data and non-face data in the RGB coordinate system. Face feature information can be extracted from the environment RGB data by a face detection algorithm, and the face point cloud data can then be extracted by registration in the environment point cloud data according to the extracted face feature information.
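As an illustration of how step 2 might be implemented, the head surface can be extracted from the MRI volume and sampled into a point cloud, for example by extracting a skin iso-surface with marching cubes. The minimal sketch below is not part of the disclosure; the library choice (scikit-image) and the threshold value are assumptions.

```python
import numpy as np
from skimage import measure

def mri_to_point_cloud(volume, spacing, skin_threshold=300.0):
    """Convert an MRI volume into a head-surface point cloud.

    volume:         3D numpy array of voxel intensities (e.g. loaded from DICOM/NIfTI)
    spacing:        (dz, dy, dx) voxel size in millimetres
    skin_threshold: intensity separating air from tissue (assumed value)
    """
    # Extract the skin iso-surface with marching cubes; the mesh vertices
    # (in MRI/image coordinates, in mm) already form the MRI point cloud.
    verts, _, _, _ = measure.marching_cubes(volume, level=skin_threshold, spacing=spacing)
    return verts  # shape (N, 3)
```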
It should be noted that, in practical applications, steps 1, 2 and 3 need not be performed in this order: steps 2 and 3 may be performed before step 1, or all three steps may be performed simultaneously, as the actual situation requires; this is not specifically limited in this application.
After the MRI point cloud data and the face point cloud data are obtained, step 4 registers the MRI point cloud data with the face point cloud data to obtain the transformation relationship between them, namely the point cloud matching matrix. When registering the MRI point cloud data and the face point cloud data, coarse matching may first be performed with the RANSAC (Random Sample Consensus) algorithm, followed by accurate matching with the ICP (Iterative Closest Point) algorithm. Coarse matching refers to rough registration when the transformation between the two point clouds is completely unknown, its main purpose being to provide a good initial value for fine registration; accurate matching refers to starting from a given initial transformation and further optimizing it to obtain a more accurate transformation.
And 5, establishing a calibration matrix between the coordinate system of the binocular camera and the coordinate system of the point cloud camera to obtain a conversion relation between the binocular camera and the point cloud camera. When the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system is established, different calibration matrices can be obtained according to the fact that the binocular camera and the point cloud camera are fixed or move mutually.
It should be noted that fig. 1 only schematically illustrates the steps included in the medical navigation method provided in the present application and is not intended to limit the execution order of the steps; that is, in practical applications the execution order of step 5 and the preceding steps may be interchanged, for example step 5 may be executed before step 1, or they may be executed simultaneously, which is not specifically limited in the present application.
After the transformation matrix between the binocular camera coordinate system and the model coordinate system, the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system, and the point cloud matching matrix have been obtained, step 6 uses these three matrices to display the positional relationship between the model coil and the MRI data on a computer through the dual-view navigation algorithm of the binocular camera and the point cloud camera, obtaining respectively the view in which the model coil moves relative to the MRI data and the view in which the MRI data moves relative to the model coil, and determines the target position according to the positional relationship between the model coil and the MRI data, thereby guiding the doctor to find the optimal stimulation position.
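For illustration, the two viewing directions of step 6 reduce to composing the three matrices established in steps 1, 4 and 5. The sketch below assumes all transforms are 4x4 homogeneous matrices; the function and variable names are hypothetical and not part of the disclosure.

```python
import numpy as np

def to_homogeneous(points):
    """(N, 3) points -> (N, 4) homogeneous coordinates."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def coil_in_mri_view(coil_pts_model, T_bino_from_model, T_pc_from_bino, T_mri_from_pc):
    """View 1: the MRI data stays still while the model coil moves.

    Chain: model -> binocular camera -> point cloud camera -> MRI.
    """
    T = T_mri_from_pc @ T_pc_from_bino @ T_bino_from_model
    return (T @ to_homogeneous(coil_pts_model).T).T[:, :3]

def mri_in_model_view(mri_pts, T_bino_from_model, T_pc_from_bino, T_mri_from_pc):
    """View 2: the model coil stays still while the MRI data moves.

    Chain: MRI -> point cloud camera -> binocular camera -> model
    (the inverse of the chain above).
    """
    T = np.linalg.inv(T_mri_from_pc @ T_pc_from_bino @ T_bino_from_model)
    return (T @ to_homogeneous(mri_pts).T).T[:, :3]
```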
According to the medical navigation method provided by the embodiment of the application, the binocular camera and the point cloud camera are adopted, the point cloud matching algorithm is adopted, the position relation between the MRI data of a patient and the model coil is displayed through the computer, the target position is determined according to the position relation between the model coil and the MRI data, and therefore a doctor can be guided to find the optimal stimulation position. Therefore, the patient does not need to wear the reflective model, the range of applicable people can be enlarged, the operation difficulty is reduced, and the manual error is reduced.
In addition, the application adopts a dual-view algorithm so that the user can track the target position from two viewing angles. The target position can therefore be tracked more effectively; the inconvenience of a single viewing angle is avoided, tracking accuracy is improved, more accurate and more convenient navigation guidance is provided, treatment efficiency and treatment effect are improved, and the difficulty of the navigation operation is reduced.
Optionally, fig. 2 is a flowchart illustrating the establishment of a transformation matrix between a binocular camera coordinate system and a model coordinate system according to an embodiment of the present application, fig. 3 is a schematic structural diagram illustrating a model coil 100 according to an embodiment of the present application, and fig. 4 is a schematic structural diagram illustrating a stimulation coil 200 according to an embodiment of the present application, please refer to fig. 1 to 4, in step 1, the establishment of the transformation matrix between the binocular camera coordinate system and the model coordinate system specifically includes:
step 11: establishing a model coil 100 through a computer, and setting at least 4 first calibration points 101 in the model coil 100;
step 12: a first marker (not shown) is provided on the stimulation coil 200;
step 13: setting second calibration points 201 corresponding to at least 4 first calibration points 101 in the model coil 100 one by one on the stimulation coil 200;
step 14: obtaining a transformation matrix of a first marker coordinate system and a model coordinate system according to the coordinates of the first calibration point 101 and the coordinates of the second calibration point 201 in the first marker coordinate system;
step 15: obtaining a transformation matrix of a first marker coordinate system and a binocular camera coordinate system according to a binocular camera;
step 16: and obtaining a conversion matrix of the coordinate system of the binocular camera and the coordinate system of the model according to the conversion matrix of the coordinate system of the first marker and the coordinate system of the model and the conversion matrix of the coordinate system of the first marker and the coordinate system of the binocular camera.
Specifically, referring to fig. 1-4, when the transformation matrix between the binocular camera coordinate system and the model coordinate system is established in step 1, a model coil 100 having the same shape as the stimulation coil 200 is first established by a computer, and at least 4 first calibration points 101 are set in the model coil 100. At the same time, at least 4 second calibration points 201 are arranged on the stimulation coil 200 in one-to-one correspondence with the first calibration points 101. For example, when the first calibration points 101 are arranged, a two-dimensional Cartesian coordinate system may be marked on the surface of the model coil 100, with two points taken symmetrically about the origin O on the X axis and two points taken symmetrically about the origin O on the Y axis; the second calibration points 201 on the stimulation coil 200 can then be arranged in the same way. Of course, the symmetric points on the X and Y axes are only a schematic illustration and are not intended to limit the present application; the first calibration points 101 and the second calibration points 201 may be configured according to the actual application.
In order for the binocular camera to recognize the stimulation coil 200, a marker needs to be disposed on the stimulation coil 200. In step 12 of this embodiment, a first marker is disposed on the stimulation coil 200; for example, the first marker is fixed on the back of the stimulation coil 200 and may consist of 4 first reflective balls fixed on a first bracket. The second calibration points 201 are disposed on the stimulation coil 200 and the first marker is bound to the stimulation coil 200, so that each second calibration point 201 has coordinates in the first marker coordinate system.
The coordinates of the second calibration points 201 under the first marker can be acquired with a probe; in order for the binocular camera to recognize the probe, a marker also needs to be set on it. For example, a second marker consisting of 4 second reflective balls fixed on a second bracket is bound to the probe, and the distance between the probe tip and each second reflective ball can be obtained by measurement. To make it easier to touch the points with the probe, a grooved sticker is attached at each second calibration point. The probe tip is placed in each groove in turn; the binocular camera records the pose matrix of the probe under the binocular camera at each second calibration point together with the pose matrix of the first marker at the same moment, and the coordinates of the probe tip under the binocular camera at each second calibration point are obtained using the distances between the probe tip and the second reflective balls. Multiplying the coordinates of the probe tip under the binocular camera at each second calibration point by the inverse of the pose matrix of the first marker under the binocular camera at the corresponding moment gives the coordinates of the probe tip, and hence of that second calibration point, under the first marker.
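The frame change described above, from the probe tip tracked in the binocular camera to coordinates under the first marker, can be written as a single matrix inversion and multiplication. A minimal sketch, assuming the binocular camera reports 4x4 pose matrices:

```python
import numpy as np

def point_in_marker_frame(tip_in_camera, T_camera_from_marker):
    """Express a probe-tip position (given in binocular camera coordinates)
    in the coordinate system of the first marker.

    tip_in_camera:        (3,) probe-tip coordinates under the binocular camera
    T_camera_from_marker: 4x4 pose of the first marker reported by the camera
    """
    tip_h = np.append(tip_in_camera, 1.0)  # homogeneous coordinates
    # Multiply by the inverse of the marker pose, as described in the text.
    return (np.linalg.inv(T_camera_from_marker) @ tip_h)[:3]
```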
Since the first calibration points 101 and the second calibration points 201 correspond one to one, the transformation matrix between the first marker coordinate system and the model coordinate system can be obtained from the coordinates of the first calibration points 101 in the model coordinate system and the coordinates of the second calibration points 201 in the first marker coordinate system. The stimulation coil 200 is placed within the visible range of the binocular camera, and the transformation matrix between the first marker coordinate system and the binocular camera coordinate system is obtained with the binocular camera. The transformation matrix between the binocular camera coordinate system and the model coordinate system is then obtained from these two transformation matrices.
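The patent does not name a specific solver for computing a transformation matrix from at least 4 corresponding calibration points; one common choice is the SVD-based (Kabsch) rigid registration sketched below, given here only as an illustrative assumption.

```python
import numpy as np

def rigid_transform_from_points(src, dst):
    """Best-fit rotation R and translation t with dst ~= R @ src + t.

    src, dst: (N, 3) arrays of corresponding points, N >= 4
              (e.g. second calibration points in the first marker frame and
               first calibration points in the model frame).
    Returns a 4x4 homogeneous transformation matrix.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                        # proper rotation (det = +1)
    t = dst_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```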
It should be noted that fig. 2 only schematically illustrates the steps included in establishing the transformation matrix between the binocular camera coordinate system and the model coordinate system and does not strictly prescribe the execution order of the steps; for example, the execution order of step 12 and step 13 may be interchanged, and so may that of step 14 and step 15. In practical use the order can be set as needed, and the application is not particularly limited in this respect.
Optionally, referring to fig. 1 and fig. 5, fig. 5 is a flowchart illustrating a process of obtaining face point cloud data according to an embodiment of the present application, and in step 3, the face point cloud data is obtained through environment point cloud data and environment RGB data, specifically: step 31: extracting face data from the environmental RGB data through a face detection algorithm; step 32: registering the environment point cloud data into an RGB coordinate system, and removing non-human face point cloud data according to the human face data to obtain human face point cloud data.
Specifically, referring to fig. 1 and 5, the environment RGB data includes face feature information and non-face feature information around a face, and when obtaining the face point cloud data, the face data is first extracted from the environment RGB data through a face detection algorithm in step 31. Similarly, the environmental point cloud data comprises face point cloud data and non-face point cloud data, the environmental point cloud data is registered into an RGB coordinate system, the face data obtained in the step 31 is used for removing information outside the face area in the environmental point cloud data, and the rest is the face point cloud data. Because the information irrelevant to the human face is removed, the efficiency and the robustness of a subsequent point cloud matching algorithm can be improved.
After the face point cloud data is obtained, the face point cloud data needs to be registered under a point cloud camera coordinate system, and when the registration is performed, if the point cloud camera coordinate system is inconsistent with the RGB coordinate system, the RGB data is firstly transferred under the point cloud camera coordinate system, and then the point cloud is processed. Or the point cloud data can be transferred to an RGB coordinate system and then transferred to a point cloud camera coordinate system after being processed.
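One possible realization of steps 31 and 32 is to run a standard face detector on the environment RGB data and keep only the points whose pixels fall inside the detected face region. The sketch below uses OpenCV's Haar cascade detector and assumes the environment point cloud is organized per pixel (registered to the RGB image); both choices are assumptions rather than part of the disclosure.

```python
import cv2
import numpy as np

# Assumption: a Haar cascade is used as the face detection algorithm.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_point_cloud(rgb_image, organized_points):
    """Keep only the point cloud data belonging to the detected face.

    rgb_image:        (H, W, 3) environment RGB data (RGB channel order assumed)
    organized_points: (H, W, 3) environment point cloud registered to the
                      RGB coordinate system (one 3D point per pixel)
    """
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return np.empty((0, 3))
    x, y, w, h = faces[0]                                  # take the first detected face
    face_pts = organized_points[y:y + h, x:x + w].reshape(-1, 3)
    face_pts = face_pts[np.isfinite(face_pts).all(axis=1)] # drop NaN depth samples
    return face_pts[np.any(face_pts != 0, axis=1)]         # drop zero (invalid) points
```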
Optionally, referring to fig. 1 and 6, fig. 6 is a flowchart illustrating a process of obtaining a point cloud matching matrix according to an embodiment of the present application, and in step 4, the MRI point cloud data and the face point cloud data are registered to obtain the point cloud matching matrix, which specifically includes: step 41: performing rough matching on the MRI point cloud data and the face point cloud data by using the RANSAC algorithm to obtain rough matching data; step 42: accurately matching the rough matching data by using the ICP (Iterative Closest Point) algorithm, and establishing the relation between the MRI point cloud data and the face point cloud data under the point cloud camera to obtain the point cloud matching matrix.
Specifically, referring to fig. 1 and fig. 6, when registering the MRI point cloud data and the face point cloud data, coarse registration is performed first by using the RANSAC algorithm, an iterative algorithm for correctly estimating the parameters of a mathematical model from a set of data containing "outliers". "Outliers" generally refer to noise in the data, such as mismatches during matching and abnormal values in an estimated curve. RANSAC is a non-deterministic algorithm: it produces a reasonable result only with a certain probability, which increases with the number of iterations. After the registration result of the RANSAC algorithm is obtained, the ICP algorithm is used for fine registration; ICP is a data registration method based on a nearest-distance search between point-to-point, point-to-line or point-to-surface correspondences and solves the registration problem for free-form surfaces. The specific flows of the RANSAC algorithm and the ICP algorithm may follow the existing algorithms and are not described here again.
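For illustration, the fine-registration stage can be a few iterations of the classical point-to-point ICP loop applied to the RANSAC result. The hand-written loop below is a minimal sketch (libraries such as Open3D provide ready-made RANSAC and ICP pipelines); the parameter values are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """SVD-based rigid transform mapping src onto dst (same idea as the earlier sketch)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, dc - R @ sc
    return T

def icp_refine(source, target, T_init, iterations=30, max_dist=10.0):
    """Refine a coarse alignment (e.g. from RANSAC) by point-to-point ICP.

    source:   (N, 3) MRI point cloud
    target:   (M, 3) face point cloud
    T_init:   4x4 coarse transform from the coarse-matching stage
    max_dist: correspondence rejection threshold (same units as the point clouds)
    Returns the refined 4x4 point cloud matching matrix.
    """
    tree = cKDTree(target)
    T = T_init.copy()
    for _ in range(iterations):
        moved = (T[:3, :3] @ source.T).T + T[:3, 3]
        dist, idx = tree.query(moved)        # nearest-neighbour correspondences
        keep = dist < max_dist               # reject distant outliers
        if keep.sum() < 4:
            break
        T = best_fit_transform(source[keep], target[idx[keep]])
    return T
```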
Optionally, referring to fig. 1 and fig. 7, fig. 7 is a flowchart illustrating a process of establishing a calibration matrix between a binocular camera coordinate system and a point cloud camera coordinate system according to an embodiment of the present application, and in step 5, establishing a calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system specifically includes: step 51: selecting at least 4 first coordinate points under a point cloud camera coordinate system; step 52: selecting at least 4 second coordinate points under a binocular camera coordinate system; step 53: and calculating a calibration matrix between a binocular camera coordinate system and a point cloud camera coordinate system according to the first coordinate point and the second coordinate point.
Specifically, referring to fig. 1 and 7, when a calibration matrix between a binocular camera coordinate system and a point cloud camera coordinate system is established, first, in step 51, at least 4 first coordinate points are selected in the point cloud camera coordinate system; then, in step 52, at least 4 second coordinate points are selected under the binocular camera coordinate system; in step 53, a calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system is calculated using a quaternion algorithm using the first coordinate points and the second coordinate points.
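The quaternion algorithm mentioned above is commonly implemented as Horn's closed-form absolute-orientation method; the sketch below illustrates that computation under the assumption that the first and second coordinate points are given as (N, 3) arrays in the point cloud camera frame and the binocular camera frame respectively.

```python
import numpy as np

def horn_absolute_orientation(src, dst):
    """Horn's quaternion-based solution for the rigid transform dst ~= R @ src + t.

    src: (N, 3) first coordinate points (point cloud camera frame)
    dst: (N, 3) second coordinate points (binocular camera frame), N >= 4
    Returns a 4x4 calibration matrix.
    """
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    S = (src - sc).T @ (dst - dc)            # cross-covariance: S[i, j] = sum(src_i * dst_j)
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    eigvals, eigvecs = np.linalg.eigh(N)
    w, x, y, z = eigvecs[:, np.argmax(eigvals)]   # quaternion = eigenvector of largest eigenvalue
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, dc - R @ sc
    return T
```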
The calibration matrix between the coordinate system of the binocular camera and the coordinate system of the point cloud camera obtained in this embodiment needs to ensure that the positions of the binocular camera and the point cloud camera are fixed, and when displacement exists between the two, the navigation process is influenced.
Referring to fig. 1 and 8, fig. 8 is another flowchart illustrating the establishment of the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system according to the embodiment of the present application, and in step 5, the establishment of the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system specifically includes: step 61: setting a third marker on the point cloud camera; step 62: setting at least 4 third calibration points on the point cloud camera; and step 63: obtaining a transformation matrix of a third marker coordinate system and a point cloud camera coordinate system according to the coordinates of the third calibration point in the third marker coordinate system and the coordinates of the third calibration point in the point cloud camera coordinate system; step 64: obtaining a transformation matrix of a coordinate system of a third marker and a coordinate system of a binocular camera according to the binocular camera; step 65: and obtaining a calibration matrix between the coordinate system of the binocular camera and the coordinate system of the point cloud camera according to the transformation matrix of the third marker coordinate system and the coordinate system of the point cloud camera and the transformation matrix of the third marker coordinate system and the coordinate system of the binocular camera.
Specifically, referring to fig. 1 and 8, when the calibration matrix between the coordinate system of the binocular camera and the coordinate system of the point cloud camera is established in the embodiment, the third marker is first set on the point cloud camera, so as to facilitate the identification of the binocular camera. And setting at least 4 third calibration points on the point cloud camera, and obtaining a conversion matrix of a third marker coordinate system and a point cloud camera coordinate system according to the coordinates of the third calibration points in the third marker coordinate system and the coordinates of the third calibration points in the point cloud camera coordinate system. When obtaining the coordinate of the third calibration point in the coordinate system of the third marker, the coordinate may be obtained by using a probe, and reference may be specifically made to the method for obtaining the coordinate of the second calibration point 201 in the first marker in step 14, which is not described herein again.
A calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system is then obtained from the transformation matrix between the third marker coordinate system and the point cloud camera coordinate system and the transformation matrix between the third marker coordinate system and the binocular camera coordinate system. In this embodiment, the third marker is fixed to the point cloud camera and the relationship between the third marker and the point cloud camera is established, so that the binocular camera and the point cloud camera can move freely within the operating range during navigation without affecting the navigation process, which helps simplify the procedure and reduces the difficulty of the navigation operation.
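Because the third marker is rigidly fixed to the point cloud camera, the calibration matrix can be refreshed from the marker pose tracked by the binocular camera in every frame. A minimal sketch of this per-frame update, with hypothetical matrix names:

```python
import numpy as np

def binocular_from_pointcloud(T_bino_from_marker3, T_pc_from_marker3):
    """Calibration matrix mapping point cloud camera coordinates into the
    binocular camera coordinate system.

    T_bino_from_marker3: 4x4 pose of the third marker tracked by the binocular
                         camera (updated every frame as the cameras move)
    T_pc_from_marker3:   4x4 transform of the third marker in the point cloud
                         camera frame (fixed, obtained once from the at least
                         4 third calibration points)
    """
    return T_bino_from_marker3 @ np.linalg.inv(T_pc_from_marker3)
```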
It should be noted that fig. 8 only schematically illustrates the steps included in establishing the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system and is not intended to limit the execution order of the steps; for example, the execution order of step 61 and step 62 may be interchanged.
Optionally, referring to fig. 1, fig. 3-fig. 4 and fig. 9, fig. 9 is a flowchart of dual-view navigation provided in an embodiment of the present application, and in step 6, the dual-view navigation algorithm includes: step 71: obtaining coordinates of the model coil 100 in a binocular camera coordinate system according to the model coordinate system and a conversion matrix of the binocular camera coordinate system and the model coordinate system; step 72: obtaining the coordinates of the model coil 100 in the point cloud camera coordinate system according to the coordinates of the model coil 100 in the binocular camera coordinate system and the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system; step 73: and obtaining the coordinate of the model coil 100 in the MRI coordinate system according to the coordinate of the model coil 100 in the point cloud camera coordinate system and the point cloud matching matrix, and obtaining the view angle of the MRI data which is static and the model coil 100 which moves.
Specifically, referring to fig. 1, fig. 3-fig. 4 and fig. 9, when performing dual-view navigation in this embodiment, step 71 obtains the coordinates of the model coil 100 in the binocular camera coordinate system according to the model coordinate system and the transformation matrix between the binocular camera coordinate system and the model coordinate system; step 72 then obtains the coordinates of the model coil 100 in the point cloud camera coordinate system from the coordinates of the model coil 100 in the binocular camera coordinate system and the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system. In step 73, the coordinates of the model coil 100 in the MRI coordinate system are obtained from the coordinates of the model coil 100 in the point cloud camera coordinate system and the point cloud matching matrix, so that the target position can be tracked in the view in which the MRI data remains stationary and the model coil 100 moves, guiding the doctor to find the optimal stimulation position. This not only avoids the inconvenience of a single viewing angle but also improves tracking accuracy, provides more accurate and more convenient navigation guidance, improves treatment efficiency and treatment effect, and reduces the difficulty of the navigation operation.
Optionally, referring to fig. 1, fig. 3-fig. 4, and fig. 10, fig. 10 is another flowchart of dual-view navigation provided in an embodiment of the present application, and in step 6, the dual-view navigation algorithm further includes: step 74: obtaining the coordinates of the MRI data in a point cloud camera coordinate system according to the coordinates of the MRI data in the MRI coordinate system and the point cloud matching matrix; step 75: obtaining the coordinates of the MRI data under a binocular camera coordinate system according to the coordinates of the MRI data under the point cloud camera coordinate system and the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system; step 76: and obtaining the coordinates of the MRI data in the model coordinate system according to the coordinates of the MRI data in the binocular camera coordinate system and the transformation matrix of the binocular camera coordinate system and the model coordinate system, and obtaining the view angle of the MRI data movement without moving the model coil 100.
Specifically, referring to fig. 1, fig. 3-fig. 4 and fig. 10, after the view in which the MRI data remains stationary and the model coil 100 moves has been obtained during dual-view navigation, step 74 obtains the coordinates of the MRI data in the point cloud camera coordinate system from the coordinates of the MRI data in the MRI coordinate system and the point cloud matching matrix, and step 75 then obtains the coordinates of the MRI data in the binocular camera coordinate system from the coordinates of the MRI data in the point cloud camera coordinate system and the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system. In step 76, the coordinates of the MRI data in the model coordinate system are obtained from the coordinates of the MRI data in the binocular camera coordinate system and the transformation matrix between the binocular camera coordinate system and the model coordinate system, so that the target position can be tracked in the view in which the model coil 100 remains stationary and the MRI data moves, guiding the doctor to find the optimal stimulation position. This not only avoids the inconvenience of a single viewing angle but also improves tracking accuracy, provides more accurate and more convenient navigation guidance, improves treatment efficiency and treatment effect, and reduces the difficulty of the navigation operation.
According to the embodiments, the application has the following beneficial effects:
(1) According to the medical navigation method, the binocular camera and the point cloud camera are adopted, the point cloud matching algorithm is adopted, the position relation between the MRI data of the patient and the model coil is displayed through the computer, the target position is determined according to the position relation between the model coil and the MRI data, and therefore a doctor can be guided to find the optimal stimulation position. Therefore, the patient does not need to wear the reflective model, the range of applicable people can be enlarged, the operation difficulty is reduced, and the manual error is reduced.
(2) According to the medical navigation method, the dual-view algorithm allows the user to track the target position from either of two views, so the target position can be tracked more effectively. This not only avoids the inconvenience of operating from a single view but also improves the tracking accuracy, provides more accurate and more convenient navigation guidance, improves the treatment efficiency and the treatment effect, and reduces the difficulty of the navigation operation.
The foregoing description shows and describes several preferred embodiments of the present application. As stated above, it is to be understood that the application is not limited to the forms disclosed herein and is not to be construed as excluding other embodiments; rather, it is capable of use in various other combinations, modifications, and environments, and is capable of changes within the scope of the inventive concept described herein, commensurate with the above teachings or with the skill or knowledge of the relevant art. Modifications and variations made by those skilled in the art without departing from the spirit and scope of the application shall fall within the protection scope of the claims appended hereto.

Claims (8)

1. A medical navigation method, comprising:
establishing a conversion matrix of a binocular camera coordinate system and a model coordinate system;
acquiring MRI data and converting the MRI data into MRI point cloud data;
acquiring environmental point cloud data and environmental RGB data by using a point cloud camera, and obtaining human face point cloud data through the environmental point cloud data and the environmental RGB data;
registering the MRI point cloud data and the face point cloud data to obtain a point cloud matching matrix;
establishing a calibration matrix between a binocular camera coordinate system and a point cloud camera coordinate system;
and displaying the position relation between the model coil and the MRI data on a computer through a dual-view navigation algorithm by utilizing the conversion matrix of the binocular camera coordinate system and the model coordinate system, the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system, and the point cloud matching matrix, and determining the target position according to the position relation.
2. The medical navigation method according to claim 1, wherein the establishing of the transformation matrix between the binocular camera coordinate system and the model coordinate system specifically comprises:
establishing a model coil through a computer, and setting at least 4 first calibration points in the model coil;
disposing a first marker on the stimulation coil;
setting, on the stimulation coil, second calibration points in one-to-one correspondence with the at least 4 first calibration points in the model coil;
obtaining a transformation matrix of the first marker coordinate system and the model coordinate system according to the coordinates of the first calibration point in the model coordinate system and the coordinates of the second calibration point in the first marker coordinate system;
obtaining a transformation matrix of the first marker coordinate system and the binocular camera coordinate system according to the binocular camera;
and obtaining a conversion matrix of the binocular camera coordinate system and the model coordinate system according to the conversion matrix of the first marker coordinate system and the model coordinate system and the conversion matrix of the first marker coordinate system and the binocular camera coordinate system.
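Computing a rigid transform from at least 4 paired calibration points, as in claim 2 (and similarly for the point-pair calibration of claim 5), is a standard absolute-orientation problem. A minimal SVD-based sketch is given below; it is a generic solver offered only for illustration, not the solver prescribed by this application.

```python
import numpy as np

def rigid_transform(points_src, points_dst):
    """Estimate the 4x4 rigid transform mapping points_src onto points_dst.

    points_src, points_dst: (N, 3) arrays of corresponding points, N >= 4
    (for example, calibration points expressed in two different coordinate
    systems such as the model coordinate system and a marker coordinate system).
    """
    centroid_src = points_src.mean(axis=0)
    centroid_dst = points_dst.mean(axis=0)
    H = (points_src - centroid_src).T @ (points_dst - centroid_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = centroid_dst - R @ centroid_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```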
3. The medical navigation method according to claim 1, wherein the obtaining of the face point cloud data from the environmental point cloud data and the environmental RGB data specifically comprises:
extracting face data from the environmental RGB data through a face detection algorithm;
registering the environmental point cloud data into an RGB coordinate system, and removing non-human face point cloud data according to the human face data to obtain human face point cloud data.
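As an illustration of claim 3, assuming an organized point cloud registered pixel-for-pixel to the RGB frame and assuming OpenCV's stock Haar cascade as the face detection algorithm (both assumptions made for this sketch, not details taken from this application), the non-face points could be removed as follows:

```python
import cv2
import numpy as np

def extract_face_points(rgb_image, organized_points):
    """Keep only the points that fall inside a detected face region.

    rgb_image        : (H, W, 3) BGR image from the point cloud camera
    organized_points : (H, W, 3) point cloud registered to the RGB pixel grid
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return np.empty((0, 3))
    x, y, w, h = faces[0]                              # take the first detected face
    face_block = organized_points[y:y + h, x:x + w].reshape(-1, 3)
    return face_block[np.isfinite(face_block).all(axis=1)]   # drop invalid depth points
```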
4. The medical navigation method according to claim 1, wherein the registering of the MRI point cloud data and the face point cloud data to obtain the point cloud matching matrix specifically comprises:
performing rough matching on the MRI point cloud data and the face point cloud data by using a RANSAC algorithm to obtain rough matching data;
and accurately matching the rough matching data by using an ICP (iterative closest point) algorithm, and establishing a relation between the MRI point cloud data and the face point cloud data under the point cloud camera to obtain the point cloud matching matrix.
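One open-source way to realize the rough-then-accurate matching of claim 4 is Open3D's feature-based RANSAC followed by point-to-point ICP. The sketch below uses assumed voxel and distance parameters and the Open3D registration pipeline API (as in recent Open3D releases); it is not the exact implementation of this application.

```python
import open3d as o3d

def register_point_clouds(mri_pcd, face_pcd, voxel=0.005):
    """Rough RANSAC alignment refined by ICP; returns a 4x4 matching matrix."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(mri_pcd)
    dst_down, dst_fpfh = preprocess(face_pcd)

    # Rough matching: RANSAC over FPFH feature correspondences.
    rough = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, dst_down, src_fpfh, dst_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Accurate matching: ICP initialized with the rough result.
    fine = o3d.pipelines.registration.registration_icp(
        mri_pcd, face_pcd, voxel * 0.4, rough.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation   # maps the MRI point cloud into the point cloud camera frame
```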
5. The medical navigation method according to claim 1, wherein the establishing of the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system specifically comprises:
selecting at least 4 first coordinate points under the point cloud camera coordinate system;
selecting at least 4 second coordinate points under the binocular camera coordinate system;
and calculating a calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system according to the first coordinate point and the second coordinate point.
6. The medical navigation method according to claim 1, wherein the establishing of the calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system specifically comprises:
setting a third marker on the point cloud camera;
setting at least 4 third calibration points on the point cloud camera;
obtaining a transformation matrix of the third marker coordinate system and the point cloud camera coordinate system according to the coordinates of the third calibration point under the third marker coordinate system and the coordinates of the third calibration point under the point cloud camera coordinate system;
obtaining a transformation matrix of the coordinate system of the third marker and the coordinate system of the binocular camera according to the binocular camera;
and obtaining a calibration matrix between the coordinate system of the binocular camera and the coordinate system of the point cloud camera according to the transformation matrix of the coordinate system of the third marker and the coordinate system of the point cloud camera and the transformation matrix of the coordinate system of the third marker and the coordinate system of the binocular camera.
7. The medical navigation method of claim 1, wherein the dual view navigation algorithm comprises:
obtaining the coordinates of the model coil under the binocular camera coordinate system according to the model coordinate system and the conversion matrix of the binocular camera coordinate system and the model coordinate system;
obtaining the coordinates of the model coil under the point cloud camera coordinate system according to the coordinates of the model coil under the binocular camera coordinate system and a calibration matrix between the binocular camera coordinate system and the point cloud camera coordinate system;
and obtaining the coordinates of the model coil under the MRI coordinate system according to the coordinates of the model coil under the point cloud camera coordinate system and the point cloud matching matrix, so as to obtain the view in which the MRI data remains stationary and the model coil moves.
8. The medical navigation method of claim 7, wherein the dual view navigation algorithm further comprises:
obtaining the coordinates of the MRI data under the point cloud camera coordinate system according to the coordinates of the MRI data under the MRI coordinate system and the point cloud matching matrix;
obtaining the coordinates of the MRI data under the coordinate system of the binocular camera according to the coordinates of the MRI data under the coordinate system of the point cloud camera and a calibration matrix between the coordinate system of the binocular camera and the coordinate system of the point cloud camera;
and obtaining the coordinates of the MRI data under the model coordinate system according to the coordinates of the MRI data under the binocular camera coordinate system and the conversion matrix of the binocular camera coordinate system and the model coordinate system, so as to obtain the view in which the MRI data moves while the model coil remains stationary.
CN202210757984.2A 2022-06-29 2022-06-29 Medical navigation method Pending CN115137988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210757984.2A CN115137988A (en) 2022-06-29 2022-06-29 Medical navigation method

Publications (1)

Publication Number Publication Date
CN115137988A true CN115137988A (en) 2022-10-04

Family

ID=83409398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210757984.2A Pending CN115137988A (en) 2022-06-29 2022-06-29 Medical navigation method

Country Status (1)

Country Link
CN (1) CN115137988A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20240430
Address after: Room 504, floor 5, building 2, hospital 9, Yiyi Road, Life Science Park, Changping District, Beijing 102206
Applicant after: Beijing Yinhe Fangyuan Technology Co.,Ltd.
Country or region after: China
Address before: Room 504, floor 5, building 2, hospital 9, Yiyi Road, Life Science Park, Changping District, Beijing 102206
Applicant before: Beijing Yinhe Fangyuan Technology Co.,Ltd.
Applicant before: Beijing yone Galaxy Technology Co.,Ltd.
Country or region before: China