CN114190922B - TMS head movement detection method

TMS head movement detection method

Info

Publication number
CN114190922B
Authority
CN
China
Prior art keywords: image, three-dimensional, face feature pixel, patient
Prior art date
Legal status: Active
Application number
CN202010987015.7A
Other languages
Chinese (zh)
Other versions
CN114190922A
Inventor
黄晓琦 (Huang Xiaoqi)
幸浩洋 (Xing Haoyang)
龚启勇 (Gong Qiyong)
李静 (Li Jing)
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202010987015.7A
Publication of CN114190922A
Application granted
Publication of CN114190922B


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 2/00 Magnetotherapy
    • A61N 2/02 Magnetotherapy using magnetic fields produced by coils, including single turn loops or electromagnets
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B 5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 2/00 Magnetotherapy
    • A61N 2/004 Magnetotherapy specially adapted for a specific therapy
    • A61N 2/006 Magnetotherapy specially adapted for a specific therapy for magnetic stimulation of nerve tissue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30 Assessment of water resources

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Physiology (AREA)
  • Dentistry (AREA)
  • Neurology (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a TMS head movement detection method comprising the following steps. Step s1: scanning to obtain a magnetic resonance image of a patient, and obtaining a depth image of the patient's head and the corresponding RGB image through a depth camera. Step s2: obtaining the three-dimensional coordinates corresponding to the face feature pixel points from the correspondence between the head depth image and the RGB image. Step s3: realizing three-dimensional reconstruction of the magnetic resonance image obtained in step s1 using the marching cubes surface rendering algorithm together with the VTK toolkit. Step s4: obtaining the two-dimensional coordinates of the face feature pixel points in a two-dimensional screenshot using the MTCNN algorithm. Step s5: calculating the three-dimensional coordinates of the face feature pixel points in the VTK world coordinate system from the conversion relations between VTK coordinate systems. Step s6: performing affine registration of the two sets of three-dimensional coordinates obtained in steps s2 and s5 with the classical landmark algorithm. The invention can detect the patient's head movement in real time from the images acquired by the depth camera and display it synchronously on the computer.

Description

TMS head movement detection method
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a TMS head movement detection method.
Background
TMS is the abbreviation of Transcranial Magnetic Stimulation, a technique in which a pulsed magnetic field acts on the central nervous system (chiefly the brain) to change the membrane potential of cortical nerve cells and generate induced currents, affecting metabolism and neuroelectrical activity in the brain and thereby triggering a series of physiological and biochemical responses.
In the prior art, TMS head registration is mostly static; dynamic registration in the computer is not achieved, and where it can be, it requires a large room and bulky motion-capture equipment, making it expensive.
Disclosure of Invention
The invention aims to provide a method for detecting the head movement of a patient during TMS neuromodulation.
To achieve this purpose, the invention adopts the following technical scheme:
the TMS head movement detection method comprises the following steps:
step s1: scanning to obtain a magnetic resonance image of a patient, obtaining a head depth image of the patient and an rgb plane image thereof through a depth camera, and obtaining two-dimensional coordinates of face feature pixel points of the patient, which are captured for the first time on the rgb plane image, by using an MTCNN algorithm, wherein the face feature pixel points comprise 5 points corresponding to left eyes, right eyes, noses, left mouth angles and right mouth angles;
step s2: obtaining three-dimensional coordinates corresponding to the face feature pixel points according to the corresponding relation of the head depth map and the rgb plan map of the patient;
step s3: using a marking cube surface drawing algorithm to combine with the Vtk tool kit to realize the three-dimensional reconstruction of the magnetic resonance image obtained in the step s1, so as to obtain a reconstructed three-dimensional image;
step s4: acquiring a two-dimensional screenshot of which the view angle is the right front, and acquiring the two-dimensional coordinates of the face feature pixel point of the two-dimensional screenshot by using an MTCNN algorithm;
step s5: calculating the three-dimensional coordinates of the face feature pixel points in the Vtk world coordinate system according to the conversion relation of the Vtk coordinate system;
step s6: and (3) carrying out affine registration on the two groups of three-dimensional coordinates obtained in the step s2 and the step s5 through a LandMark classical algorithm to ensure that the physical coordinates of the patient are unified with a Vtk world coordinate system, so that the head movement condition of the patient in the real physical world is simulated and displayed in real time in a computer.
Preferably, step s1 comprises the following steps:
step s101: performing a multi-scale transformation on each frame of the input RGB image to build an image pyramid of different scales;
step s102: feeding the pyramid images into the P-Net convolutional neural network to obtain candidate windows and bounding-box regression vectors; calibrating the candidate windows with the bounding boxes; then removing overlapping windows by non-maximum suppression and outputting the resulting face regions;
step s103: feeding the face regions output by P-Net into the R-Net convolutional neural network, fine-tuning the candidate windows with the bounding-box vectors, and finally removing overlapping windows with non-maximum suppression to output face regions with more accurate detection boxes;
step s104: feeding the face regions output by R-Net into the O-Net convolutional neural network to further refine the coordinates of the face detection box; this network has one more convolutional layer than R-Net and a similar function, but while removing overlapping candidate windows it also calibrates the positions of the 5 face key points.
Preferably, in step s2, the face feature pixel points obtained in step s1 are looked up in the head depth image, the corresponding pixel values are read, and each pixel value is used as the depth value of the corresponding face feature pixel point, thereby obtaining the three-dimensional coordinates corresponding to the face feature pixel points obtained in step s1.
Preferably, the marching cubes surface rendering algorithm of step s3 comprises the following steps:
step s301: reading the magnetic resonance image obtained in step s1 into memory layer by layer;
step s302: scanning two layers of data at a time and constructing voxels one by one, the 8 corner points of each voxel being taken from the two adjacent layers;
step s303: comparing the function value at each corner point of the voxel with an iso-surface value c chosen according to the patient's condition, and building the voxel's state table from the comparison results;
step s304: obtaining from the state table the boundary voxels that intersect the iso-surface;
step s305: calculating the intersection points of the voxel edges with the iso-surface by linear interpolation;
step s306: calculating the normal vector at each corner point of the voxel by the central difference method, and then the normal at each vertex of each triangular patch by linear interpolation;
step s307: rendering the iso-surface from the coordinates and normal vectors of the vertices of each triangular patch, thereby obtaining the reconstructed three-dimensional image of the magnetic resonance image.
Preferably, step s5 comprises the following steps:
step s501: calculating the ratio r of the coordinate values of the face feature pixel points obtained in step s4 to the coordinate values of the central pixel of the two-dimensional screenshot obtained in step s4;
step s502: from the ratio r, obtaining the coordinate values of the face feature pixel points of the reconstructed three-dimensional image in the view coordinate system of the VTK three-dimensional view;
step s503: from the view coordinate values, calculating the coordinate values of the face feature pixel points of the reconstructed three-dimensional image in the display coordinate system of the VTK three-dimensional view;
step s504: using the VTK surface-patch picking mode, casting a ray perpendicular to the display screen from the display coordinates of each face feature pixel point, and calculating the first voxel coordinate point that the ray intersects, thereby obtaining the three-dimensional coordinates in the VTK world coordinate system of the face feature pixel points obtained in step s4.
Preferably, step s6 comprises the following steps:
step s601: taking the face feature pixel points obtained through steps s1 and s2 as the source point set;
step s602: taking the face feature pixel points of the reconstructed three-dimensional image obtained through steps s3, s4 and s5 as the target point set;
step s603: calculating an original registration matrix comprising translation, rotation and scaling transformations, such that the average distance between the two point sets after registration is minimal;
step s604: multiplying the target point set by the original registration matrix to complete the first registration;
step s605: processing each frame's RGB image according to steps s1 and s2 to obtain the three-dimensional coordinates of the face feature pixel points in each frame;
step s606: registering the three-dimensional coordinates of each frame's face feature pixel points against the three-dimensional coordinates obtained in step s2 with the landmark algorithm again, obtaining a secondary registration matrix for each frame;
step s607: multiplying the secondary registration matrix by the original registration matrix to obtain the real registration matrix of each frame after the first, and multiplying the target point set by this real registration matrix in VTK for each frame after the first, so that the patient's real-world head movement is simulated and displayed in real time on the computer.
The invention has the following beneficial effects:
The invention uses a TMS device and a depth camera, combined with a deep neural network and the algorithms above, to simulate the patient's real-world head movement in real time at relatively low cost and with high precision, facilitating the development of medical experiments and treatment techniques targeting the head.
Drawings
FIG. 1 is a screenshot a of an output result of the present invention;
FIG. 2 is a screenshot b of an output result of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent.
The TMS head movement detection method comprises the following steps:
step s1: the method comprises the steps of scanning to obtain a magnetic resonance image of a patient, obtaining a head depth image of the patient and an rgb plane image thereof through a depth camera, and obtaining two-dimensional coordinates of face feature pixel points of the patient, which are captured first on the rgb plane image, by using an MTCNN algorithm, wherein the face feature pixel points comprise 5 points corresponding to left eyes, right eyes, noses, left mouth angles and right mouth angles. It should be noted that there may be multiple faces on a picture, and the pixels are traversed in order from top left to bottom right of the picture, and only the first identified face is captured in step s 1.
Specifically, step s1 comprises the following steps:
Step s101: perform a multi-scale transformation on each frame of the input RGB image to build an image pyramid of different scales.
Step s102: feed the pyramid images into the P-Net (Proposal Network) convolutional neural network to obtain candidate windows and bounding-box regression vectors. At the same time, calibrate the candidate windows with the bounding boxes. Then remove overlapping windows by non-maximum suppression and output the resulting face regions.
Step s103: feed the face regions output by P-Net into the R-Net (Refine Network) convolutional neural network, fine-tune the candidate windows with the bounding-box vectors, and finally remove overlapping windows with non-maximum suppression to output face regions; the face detection boxes obtained at this stage are more accurate.
Step s104: feed the face regions output by R-Net into the O-Net (Output Network) convolutional neural network to further refine the coordinates of the face detection box. This network has one more convolutional layer than R-Net and a similar function, but while removing overlapping candidate windows it also calibrates the positions of the 5 face key points.
Step s2: obtain the three-dimensional coordinates corresponding to the face feature pixel points from the correspondence between the head depth image and the RGB image.
In step s2, the face feature pixel points obtained in step s1 are looked up in the head depth image, the corresponding pixel values are read, and each pixel value is used as the depth value of the corresponding face feature pixel point, thereby obtaining the three-dimensional coordinates corresponding to the face feature pixel points obtained in step s1. Looking up the face feature pixel points and reading the corresponding pixel values are conventional techniques in the field of image processing.
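The lookup-plus-depth step amounts to the standard pinhole back-projection; a minimal sketch, assuming the depth map is registered (pixel-aligned) to the RGB image and the depth camera's intrinsics are known (the values below are placeholders):

```python
# Hedged sketch of step s2: back-project a feature pixel and its depth value
# to camera-space 3D coordinates under an assumed pinhole camera model.
import numpy as np

FX, FY = 615.0, 615.0   # assumed focal lengths in pixels
CX, CY = 320.0, 240.0   # assumed principal point

def pixel_to_3d(u, v, depth_map):
    """Look up the depth at pixel (u, v) and back-project to 3D."""
    z = float(depth_map[v, u])       # depth value of the face feature pixel
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# e.g. source_pts = [pixel_to_3d(u, v, depth_map) for (u, v) in pts_2d]
```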
Step s3: realize the three-dimensional reconstruction of the magnetic resonance image obtained in step s1 using the marching cubes surface rendering algorithm together with the VTK toolkit, obtaining a reconstructed three-dimensional image.
Specifically, the marching cubes surface rendering algorithm of step s3 comprises the following steps:
step s301: read the magnetic resonance image obtained in step s1 into memory layer by layer;
step s302: scan two layers of data at a time and construct voxels one by one, the 8 corner points of each voxel being taken from the two adjacent layers;
step s303: compare the function value at each corner point of the voxel with an iso-surface value c chosen according to the patient's condition, and build the voxel's state table from the comparison results;
step s304: obtain from the state table the boundary voxels that intersect the iso-surface;
step s305: calculate the intersection points of the voxel edges with the iso-surface by linear interpolation;
step s306: calculate the normal vector at each corner point of the voxel by the central difference method, and then the normal at each vertex of each triangular patch by linear interpolation;
step s307: render the iso-surface from the coordinates and normal vectors of the vertices of each triangular patch, thereby obtaining the reconstructed three-dimensional image of the magnetic resonance image.
In step s303, the iso-surface value c is chosen according to the specific situation of each patient. When the code for this step is invoked, a parameter is passed to the marching cubes surface rendering algorithm, and this parameter is the iso-surface value c. The value of c may be defined differently for different requirements, and magnetic resonance images produced by scanners of different brands also require different values of c, so c must be set according to the specific patient's image.
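A minimal VTK sketch of steps s301 to s307; the NIfTI reader, file name and iso-surface value c are assumptions, and vtkMarchingCubes encapsulates the voxel scan, state-table lookup, edge interpolation and normal computation described above:

```python
# Hedged sketch of steps s301-s307 using VTK's Python bindings.
import vtk

reader = vtk.vtkNIFTIImageReader()       # assumed input format for the MR scan
reader.SetFileName("patient_mri.nii")    # placeholder path

ISO_VALUE_C = 400.0                      # per-patient iso-surface value c (placeholder)

mc = vtk.vtkMarchingCubes()              # steps s301-s306 internally
mc.SetInputConnection(reader.GetOutputPort())
mc.ComputeNormalsOn()
mc.SetValue(0, ISO_VALUE_C)              # the parameter passed in at step s303

mapper = vtk.vtkPolyDataMapper()         # step s307: render the iso-surface
mapper.SetInputConnection(mc.GetOutputPort())
mapper.ScalarVisibilityOff()

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
window.Render()
```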
Step s4: acquire a two-dimensional screenshot of the reconstructed three-dimensional image viewed from directly in front, and obtain the two-dimensional coordinates of the face feature pixel points in the screenshot using the MTCNN algorithm.
Step s5: calculate the three-dimensional coordinates of the face feature pixel points in the VTK world coordinate system from the conversion relations between VTK coordinate systems.
Specifically, step s5 comprises the following steps:
step s501: calculate the ratio r of the coordinate values of the face feature pixel points obtained in step s4 to the coordinate values of the central pixel of the two-dimensional screenshot obtained in step s4;
step s502: from the ratio r, obtain the coordinate values of the face feature pixel points of the reconstructed three-dimensional image in the view coordinate system of the VTK three-dimensional view;
step s503: from the view coordinate values, calculate the coordinate values of the face feature pixel points of the reconstructed three-dimensional image in the display coordinate system of the VTK three-dimensional view;
step s504: using the VTK surface-patch picking mode, cast a ray perpendicular to the display screen from the display coordinates of each face feature pixel point, and calculate the first voxel coordinate point that the ray intersects, thereby obtaining the three-dimensional coordinates in the VTK world coordinate system of the face feature pixel points obtained in step s4.
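A minimal sketch of steps s502 to s504, under the assumption that the patent's view coordinates map to VTK's normalized viewport coordinates; vtkCoordinate performs the view-to-display conversion and vtkCellPicker (VTK's surface-patch picking mode) casts the screen-perpendicular ray:

```python
# Hedged sketch of steps s502-s504; the coordinate-system mapping is assumed.
import vtk

def view_to_display(nx, ny, renderer):
    """Steps s502-s503: normalized view coordinates -> display (pixel) coordinates."""
    coord = vtk.vtkCoordinate()
    coord.SetCoordinateSystemToNormalizedViewport()
    coord.SetValue(nx, ny)
    return coord.GetComputedDisplayValue(renderer)   # (x, y) in screen pixels

picker = vtk.vtkCellPicker()
picker.SetTolerance(0.0005)

def display_to_world(dx, dy, renderer):
    """Step s504: ray perpendicular to the screen through the display point;
    returns the first intersection with the reconstructed surface in VTK
    world coordinates."""
    picker.Pick(dx, dy, 0, renderer)
    return picker.GetPickPosition()
```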
Step s6: perform affine registration of the two sets of three-dimensional coordinates obtained in steps s2 and s5 with the classical landmark algorithm, so that the patient's physical coordinates are unified with the VTK world coordinate system and the patient's real-world head movement is simulated and displayed in real time on the computer.
Specifically, step s6 comprises the following steps:
step s601: take the face feature pixel points obtained through steps s1 and s2 as the source point set;
step s602: take the face feature pixel points of the reconstructed three-dimensional image obtained through steps s3, s4 and s5 as the target point set;
step s603: calculate an original registration matrix comprising translation, rotation and scaling transformations, such that the average distance between the two point sets after registration is minimal;
step s604: multiply the target point set by the original registration matrix to complete the first registration;
step s605: process each frame's RGB image according to steps s1 and s2 to obtain the three-dimensional coordinates of the face feature pixel points in each frame;
step s606: register the three-dimensional coordinates of each frame's face feature pixel points against the three-dimensional coordinates obtained in step s2 with the landmark algorithm again, obtaining a secondary registration matrix for each frame;
step s607: multiply the secondary registration matrix by the original registration matrix to obtain the real registration matrix of each frame after the first, and multiply the target point set by this real registration matrix in VTK for each frame after the first, so that the patient's real-world head movement is simulated and displayed in real time on the computer.
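A minimal sketch of the registration of steps s601 to s607, assuming VTK's vtkLandmarkTransform in affine mode stands in for the "classical landmark algorithm"; the per-frame wiring in the trailing comments is likewise an assumption:

```python
# Hedged sketch of steps s601-s607 using vtkLandmarkTransform (an assumption).
import vtk

def landmark_matrix(source_pts, target_pts):
    """Steps s603/s606: registration matrix minimizing the mean distance
    between the two point sets (affine mode covers translation, rotation
    and scaling, plus shear)."""
    src, tgt = vtk.vtkPoints(), vtk.vtkPoints()
    for p in source_pts:
        src.InsertNextPoint(p)
    for p in target_pts:
        tgt.InsertNextPoint(p)
    t = vtk.vtkLandmarkTransform()
    t.SetSourceLandmarks(src)
    t.SetTargetLandmarks(tgt)
    t.SetModeToAffine()
    t.Update()
    return t.GetMatrix()                 # 4x4 registration matrix

# First frame (steps s601-s604): original = landmark_matrix(camera_pts_0, model_pts)
# Later frames (steps s605-s607): secondary = landmark_matrix(camera_pts_k, camera_pts_0),
# composed with vtk.vtkMatrix4x4.Multiply4x4(secondary, original, real_matrix),
# then applied to the reconstructed model, e.g. actor.SetUserMatrix(real_matrix).
```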
FIG. 1 and FIG. 2 are screenshots of the display on the computer; through the invention, an operator obtains a real-time simulated display of the patient's real-world head movement on the computer.
Of course, the present invention is capable of various other embodiments, and those skilled in the art may make corresponding modifications and variations according to the present invention without departing from its spirit and scope as defined in the appended claims.

Claims (4)

  1. A TMS head movement detection method, characterized by comprising the following steps:
    step s1: scanning to obtain a magnetic resonance image of a patient, obtaining a depth image of the patient's head and the corresponding RGB image through a depth camera, and obtaining with the MTCNN algorithm the two-dimensional coordinates of the face feature pixel points of the first face captured in the RGB image, wherein the face feature pixel points comprise 5 points corresponding to the left eye, right eye, nose, left mouth corner and right mouth corner;
    step s2: obtaining the three-dimensional coordinates corresponding to the face feature pixel points from the correspondence between the head depth image and the RGB image;
    step s3: realizing three-dimensional reconstruction of the magnetic resonance image obtained in step s1 using the marching cubes surface rendering algorithm together with the VTK toolkit, so as to obtain a reconstructed three-dimensional image;
    step s4: acquiring a two-dimensional screenshot of the reconstructed three-dimensional image viewed from directly in front, and obtaining the two-dimensional coordinates of the face feature pixel points in the screenshot using the MTCNN algorithm;
    step s5: calculating the three-dimensional coordinates of the face feature pixel points in the VTK world coordinate system from the conversion relations between VTK coordinate systems;
    step s6: performing affine registration of the two sets of three-dimensional coordinates obtained in steps s2 and s5 with the classical landmark algorithm to unify the patient's physical coordinates with the VTK world coordinate system, so that the patient's real-world head movement is simulated and displayed in real time on the computer;
    step s1 comprises the following steps:
    step s101: performing a multi-scale transformation on each frame of the input RGB image to build an image pyramid of different scales;
    step s102: feeding the pyramid images into the P-Net convolutional neural network to obtain candidate windows and bounding-box regression vectors;
    meanwhile, calibrating the candidate windows with the bounding boxes;
    then removing overlapping windows by non-maximum suppression and outputting the resulting face regions;
    step s103: feeding the face regions output by P-Net into the R-Net convolutional neural network, fine-tuning the candidate windows with the bounding-box vectors, and finally removing overlapping windows with non-maximum suppression to output face regions;
    the face detection boxes obtained are thereby more accurate;
    step s104: feeding the face regions output by R-Net into the O-Net convolutional neural network to further refine the coordinates of the face detection box, wherein this network has one more convolutional layer than R-Net and a similar function, but while removing overlapping candidate windows it also calibrates the positions of the 5 face key points;
    the marching cubes surface rendering algorithm of step s3 comprises the following steps:
    step s301: reading the magnetic resonance image obtained in step s1 into memory layer by layer;
    step s302: scanning two layers of data at a time and constructing voxels one by one, the 8 corner points of each voxel being taken from the two adjacent layers;
    step s303: comparing the function value at each corner point of the voxel with an iso-surface value c chosen according to the patient's condition, and building the voxel's state table from the comparison results;
    step s304: obtaining from the state table the boundary voxels that intersect the iso-surface;
    step s305: calculating the intersection points of the voxel edges with the iso-surface by linear interpolation;
    step s306: calculating the normal vector at each corner point of the voxel by the central difference method, and then the normal at each vertex of each triangular patch by linear interpolation;
    step s307: rendering the iso-surface from the coordinates and normal vectors of the vertices of each triangular patch, thereby obtaining the reconstructed three-dimensional image of the magnetic resonance image.
  2. The TMS head movement detection method of claim 1, wherein in step s2 the face feature pixel points obtained in step s1 are looked up in the head depth image, the corresponding pixel values are read, and each pixel value is used as the depth value of the corresponding face feature pixel point, thereby obtaining the three-dimensional coordinates corresponding to the face feature pixel points obtained in step s1.
  3. The TMS head movement detection method of claim 1, wherein step s5 comprises the following steps:
    step s501: calculating the ratio r of the coordinate values of the face feature pixel points obtained in step s4 to the coordinate values of the central pixel of the two-dimensional screenshot obtained in step s4;
    step s502: from the ratio r, obtaining the coordinate values of the face feature pixel points of the reconstructed three-dimensional image in the view coordinate system of the VTK three-dimensional view;
    step s503: from the view coordinate values, calculating the coordinate values of the face feature pixel points of the reconstructed three-dimensional image in the display coordinate system of the VTK three-dimensional view;
    step s504: using the VTK surface-patch picking mode, casting a ray perpendicular to the display screen from the display coordinates of each face feature pixel point, and calculating the first voxel coordinate point that the ray intersects, thereby obtaining the three-dimensional coordinates in the VTK world coordinate system of the face feature pixel points obtained in step s4.
  4. The TMS head movement detection method of claim 1, wherein step s6 comprises the following steps:
    step s601: taking the face feature pixel points obtained through steps s1 and s2 as the source point set;
    step s602: taking the face feature pixel points of the reconstructed three-dimensional image obtained through steps s3, s4 and s5 as the target point set;
    step s603: calculating an original registration matrix comprising translation, rotation and scaling transformations, such that the average distance between the two point sets after registration is minimal;
    step s604: multiplying the target point set by the original registration matrix to complete the first registration;
    step s605: processing each frame's RGB image according to steps s1 and s2 to obtain the three-dimensional coordinates of the face feature pixel points in each frame;
    step s606: registering the three-dimensional coordinates of each frame's face feature pixel points against the three-dimensional coordinates obtained in step s2 with the landmark algorithm again, obtaining a secondary registration matrix for each frame;
    step s607: multiplying the secondary registration matrix by the original registration matrix to obtain the real registration matrix of each frame after the first, and multiplying the target point set by this real registration matrix in VTK for each frame after the first, so that the patient's real-world head movement is simulated and displayed in real time on the computer.
Application CN202010987015.7A, priority and filing date 2020-09-18, TMS head movement detection method, status Active, granted as CN114190922B

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010987015.7A | 2020-09-18 | 2020-09-18 | TMS head movement detection method (granted as CN114190922B)


Publications (2)

Publication Number | Publication Date
CN114190922A | 2022-03-18
CN114190922B | 2023-04-21

Family

ID=80645000

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010987015.7A | 2020-09-18 | 2020-09-18 | TMS head movement detection method (Active; granted as CN114190922B)

Country Status (1)

Country | Link
CN | CN114190922B


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101912668A * 2010-07-26 2010-12-15 香港脑泰科技有限公司 Navigation transcranial magnetic stimulation treatment system
WO2012121341A1 * 2011-03-09 2012-09-13 Osaka University Image data processing device and transcranial magnetic stimulation apparatus
EP2919194A1 * 2011-03-09 2015-09-16 Image data processing device and transcranial magnetic stimulation apparatus
WO2013172981A1 * 2012-05-16 2013-11-21 Beth Israel Deaconess Medical Center, Inc. Identifying individual target sites for transcranial magnetic stimulation applications
KR20160044183A * 2014-10-15 2016-04-25 나기용 The TMS System For Enhancing Cognitive Functions
WO2020036898A1 * 2018-08-13 2020-02-20 Magic Leap, Inc. A cross reality system
CN109731227A * 2018-10-23 2019-05-10 West China Hospital, Sichuan University A transcranial magnetic stimulation system
CN111414798A * 2019-02-03 2020-07-14 Shenyang University of Technology Head posture detection method and system based on RGB-D image
CN111657947A * 2020-05-21 2020-09-15 West China Hospital, Sichuan University Method for locating a neuromodulation target area

Also Published As

Publication number | Publication date
CN114190922A | 2022-03-18

Similar Documents

Publication Publication Date Title
Jähne Digital image processing
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
Stoyanov et al. A practical approach towards accurate dense 3D depth recovery for robotic laparoscopic surgery
CN1226707C (en) Method and system for simulation of surgical procedures
Cartucho et al. VisionBlender: a tool to efficiently generate computer vision datasets for robotic surgery
CN111833237B (en) Image registration method based on convolutional neural network and local homography transformation
CN109712223B (en) Three-dimensional model automatic coloring method based on texture synthesis
Legg et al. Feature neighbourhood mutual information for multi-modal image registration: an application to eye fundus imaging
CN116129235B (en) Cross-modal synthesis method for medical images from cerebral infarction CT to MRI conventional sequence
CN106327479A (en) Apparatus and method for identifying blood vessels in angiography-assisted congenital heart disease operation
CN109345581A (en) Augmented reality method, apparatus and system based on more mesh cameras
CN110993067A (en) Medical image labeling system
Di Luca et al. Inconsistency of perceived 3D shape
CN114190922B TMS head movement detection method
CN107945203A (en) PET image processing method and processing device, electronic equipment, storage medium
US10832420B2 (en) Dynamic local registration system and method
CN112734628B (en) Projection position calculation method and system for tracking point after three-dimensional conversion
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
CN114463167A (en) Model display method and device, electronic equipment and storage medium
Bhatla et al. Development of Anatomy Learning System based on Augmented Reality
CN112581460A (en) Scanning planning method, device, computer equipment and storage medium
Fuster-Guilló et al. 3D technologies to acquire and visualize the human body for improving dietetic treatment
Sun et al. 3D reconstruction based on capsule endoscopy image sequences
CN117745989B (en) Nerve root blocking target injection path planning method and system based on vertebral canal structure
CN109410224B (en) Image segmentation method, system, device and storage medium

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant