US20220036584A1 - Transcranial magnetic stimulation (tms) positioning and navigation method for tms treatment - Google Patents


Info

Publication number
US20220036584A1
Authority
US
United States
Prior art keywords
patient
positioning
tms
face
point
Prior art date
Legal status
Pending
Application number
US17/279,202
Inventor
Cong Sun
Bo Wang
Shengan CAI
Current Assignee
Wuhan Znion Technology Co Ltd
Original Assignee
Wuhan Znion Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Znion Technology Co Ltd filed Critical Wuhan Znion Technology Co Ltd
Publication of US20220036584A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N2/00Magnetotherapy
    • A61N2/004Magnetotherapy specially adapted for a specific therapy
    • A61N2/006Magnetotherapy specially adapted for a specific therapy for magnetic stimulation of nerve tissue
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N2/00Magnetotherapy
    • A61N2/02Magnetotherapy using magnetic fields produced by coils, including single turn loops or electromagnets
    • G06K9/00228
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/105Modelling of the patient, e.g. for ligaments or bones
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2055Optical tracking systems
    • A61B2034/2057Details of tracking cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face


Abstract

A Transcranial Magnetic Stimulation (TMS) positioning and navigation method for TMS treatment comprises the steps of: acquiring RGBD image data of a patient face to obtain stable RGBD image data and 3D point cloud image data; applying a face detection algorithm to an RGBD image of the patient face to obtain patient face feature point information; spatially positioning the patient face to obtain 3D coordinate values in a camera space coordinate system; modeling the patient head, matching spatial poses to determine a spatial pose of an actual stimulation site of the patient, and planning a movement path of a mechanical arm to complete first positioning; and saving spatial pose data of a magnetic stimulation point and movement path planning data of the mechanical arm from the first positioning, so that during a subsequent treatment only the data saved during the first positioning need be invoked to achieve one-key positioning.

Description

    FIELD
  • The present invention belongs to the field of TMS medical technologies, specifically, to a TMS positioning and navigation method for TMS treatment.
  • BACKGROUND
  • According to statistics from the Mental Health Center of the Chinese Center for Disease Control and Prevention, the total number of patients with mental illness in China currently exceeds 100 million, but public awareness of mental illness is below 50%, and the consultation rate is even lower. At present, only about 20% of patients with mental illness receive timely treatment; the remaining 80% are not treated in time, or do not receive even the most basic treatment. The number of patients with severe mental illness is as high as 16 million. According to the latest statistics from IMS Health, global consumption of drugs for mental illness has exceeded 36 billion U.S. dollars, accounting for 5% of total drug sales. In China, however, the mental illness drug market is still relatively small, accounting for about 1.5% of total hospital sales. There are already more than 600 psychiatric hospitals in China, but compared with the rising incidence of mental illness, there is still a great gap, in both quantity and quality, between what they provide and what patients with mental illness need. A large number of patients with mental illness still cannot obtain professional, systematic, and effective treatment.
  • TMS is a technique that generates an electric current in the local cerebral cortex by means of a pulsed magnetic field to temporarily activate or inhibit the cortex. In the current field of medical devices, a TMS treatment device is operated by controlling a TMS coil manually, or by fixing it with a support, to treat a patient. Manual operation is very inconvenient: the coil needs to be held by hand for a long time or fixed at a specific angle by a support. The patient experience is poor, because the patient must sit still in one posture and dare not move, and repositioning is required whenever the patient moves. Manual positioning is complicated and not accurate enough, so the treatment effect is greatly reduced.
  • The patent with application number 201710467812.0 discloses a TMS treatment device, including a TMS coil, a support, a mechanical arm, a controller, and a positioning means. The positioning means detects the positions of a human head and the TMS coil and sends them to the controller, and the controller controls six driving mechanisms of the mechanical arm to rotate through corresponding angles. Because the mechanical arm has six degrees of freedom, the TMS coil can stimulate the entire brain region. However, the positioning means in that patent adopts two infrared cameras and one processor, and the position information obtained is not precise enough, so a specific part of the head cannot be treated effectively and the treatment effect is much reduced. In addition, this positioning means has no repeated positioning function: the same patient must be repositioned every time he or she comes for treatment, reducing treatment efficiency. Further improvement is urgently needed.
  • SUMMARY
  • The objective of the present invention is to provide, in view of the problems in the prior art, a TMS positioning and navigation method for TMS treatment. An RGBD image, an infrared image, and a depth image of a patient face are effectively obtained through a three-dimensional (3D) camera, and spatial positions of patient face feature points are derived from these images to provide data support for precise positioning of the pose of the patient head. A movement path of a mechanical arm is reasonably planned, so that the mechanical arm can automatically move a TMS coil to a magnetic stimulation point on the patient head for treatment, thereby reducing the burden on the doctor and improving treatment efficiency and the treatment effect. This solves the prior-art problem of imprecise positioning of the patient head with only two infrared cameras and one processing module.
  • To achieve the above objective, the present invention adopts the following technical solution.
  • A TMS positioning and navigation method for TMS treatment, comprising the following steps:
  • S1, making a patient lie flat on a horizontal translation platform, translating the horizontal translation platform to a predetermined position, and turning on a TMS treatment system by an operator;
  • S2, acquiring RGBD image data of a patient face by adopting a 3D camera, and obtaining stable RGBD image data and 3D point cloud image data through calibration, background removal, filtering, and an illumination compensation algorithm;
  • S3, adopting a face detection algorithm for an RGBD image of the patient face to obtain a patient face region, and then obtaining patient face feature point information through an ASM feature point detection algorithm;
  • S4, spatially positioning the patient face, establishing a space coordinate system by taking a 3D camera as an origin point, and obtaining 3D coordinate values of patient face feature points in a 3D camera space coordinate system according to the 3D point cloud image data;
  • S5, modeling the patient head: rotating the 3D camera around the head at a uniform speed, or photographing simultaneously with multiple cameras, to obtain image data of the patient head in all directions; identifying the face feature point information in the image data in all the directions and calculating matching relationships between images; then obtaining a spatial position relationship between point clouds through an ICP algorithm on the 3D point clouds; integrating all the 3D point cloud image data in the scan data to obtain complete 3D data of the patient head; then, taking the MNI brain space coordinates commonly used in medicine as a standard, mapping 3D skull data of the MNI space obtained by brain 3D scanning onto the complete 3D data to obtain a patient head model; and then building a magnetic stimulation point model on the obtained patient head model;
  • S6, performing first positioning: matching a spatial pose of the patient head model with an actual spatial pose of the patient head to determine a spatial pose of an actual stimulation site of a patient, modeling an operating device, planning a movement path of a mechanical arm, and moving, by the mechanical arm automatically, to a magnetic stimulation point of the patient head according to the planned path for magnetic stimulation treatment; and
  • S7, performing repeated positioning: after the first positioning is completed, saving magnetic stimulation point information of the patient head and path planning data of the mechanical arm during the first positioning in the patient head model, and during next treatment of the patient, directly invoking the data during the previous positioning to repeatedly position the patient head.
  • Specifically, in step S2, the illumination compensation algorithm includes the following steps:
  • S21, analyzing the RGBD image by adopting a contour tracking algorithm to obtain a face contour line of the patient;
  • S22, mapping the contour line in S21 to the 3D point cloud image of the patient face;
  • S23, analyzing coordinate values of the point clouds on both sides of the contour line, along the contour line, on the 3D point cloud image: if there is a relatively large sudden change in the height of the point cloud coordinate values on the two sides, the contour is true and effective; if there is no such sudden change, the contour is a false contour formed by shadows cast by overly strong light;
  • S24, adjusting, on the RGBD image, brightness of pixels on both sides of a corresponding position of the false contour to be approximate, that is, adjusting from a dark region to a bright region, to eliminate shadows caused by too strong light;
  • S25, traversing on the 3D point cloud image to find a region with a relatively large sudden change in the height; and
  • S26, mapping the region with the relatively large sudden change in the height in S25 to the RGBD image, increasing brightness of pixels in a region with a sudden increase in the height, and decreasing brightness of pixels of a region with a sudden decrease in the height to eliminate an influence caused by too weak light.
  • Specifically, in step S3, the face detection algorithm is a method based on template matching, a method based on skin color features, or a method based on AdaBoost.
  • The method based on template matching: the implementation principle of a template matching method is to calculate a correlation coefficient between a target and a preset template by comparing a similarity between the template and a region to be detected. Face detection is to search an image to be detected for a region closest to a face by using a gray-scale template of the face. Compared with other methods for face detection based on features, the face detection method based on template matching is intuitive, simple, and easy to implement, and has strong adaptability, low dependence on image quality, and high robustness.
  • The method based on skin color features: one of the most significant external features of a face is skin color. The advantage of the skin color is that it does not rely on remaining features of the face, and is not sensitive to changes in a face pose and a face shape. The face can be relatively easily detected by constructing a skin color model that distinguishes the face from other different color backgrounds. However, this method cannot distinguish skin-like objects well, which will cause false detection. In addition, it is relatively sensitive to illumination changes, thereby affecting the accuracy of face detection.
  • The method based on AdaBoost: first, Haar-like features are adopted to represent a face, and an integral image is used to accelerate the calculation of Haar-like feature values; AdaBoost is then used to screen out the best rectangular face features. These features are called weak classifiers, and finally these classifiers are connected in series to form a strong classifier so as to detect a face. In addition, this method is relatively insensitive to illumination changes.
  • Specifically, in step S4, the method for spatially positioning the patient face includes the following steps:
  • S41, obtaining a 3D point cloud image of a face by using an infrared camera in the 3D camera, finding a point of the patient face closest to the camera, that is, the tip of a nose, and obtaining space coordinates (x, y, z) in the camera coordinate system;
  • S42, using the stable RGBD image obtained in step S2, finding feature points comprising the tip of the nose, a root of the nose, corners of eyes, corners of a mouth, and eyebrows of a human in the image according to a face feature recognition algorithm, and deducing an angle (rx, ry, rz) of the patient face relative to the camera according to left-right symmetry of the face; and
  • S43, synthesizing the point coordinates and the angle obtained in S41 and S42 into complete 3D space coordinates (x, y, z, rx, ry, rz).
  • Specifically, in step S6, the planning a movement path of a mechanical arm includes: planning a stimulation sequence of magnetic stimulation points, and planning an optimal path and a movement speed of the mechanical arm to reach each magnetic stimulation point.
  • Specifically, in the process of the first positioning or repeated positioning, follow-up positioning of the patient head is required. The position information of the patient head is recorded each time positioning is completed during treatment. If, at the next moment, the distance between the positions of the magnetic stimulation point at the current moment and the previous moment exceeds 5 mm due to movement of the patient head, follow-up positioning is started; if the distance does not exceed 5 mm, follow-up positioning is not started.
  • Further, the step of the follow-up positioning is: adjusting a spatial pose of the patient head model, so that the spatial pose of the head model is matched with a current actual spatial pose of the patient head, then repositioning a latest magnetic stimulation point on the head model, finally re-planning the movement path of the mechanical arm, and moving the TMS coil to the latest magnetic stimulation point for treatment.
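The follow-up positioning trigger described above reduces to a simple distance test against the 5 mm threshold. A minimal sketch in Python (the function name and threshold constant are illustrative, not from the patent):

```python
import math

# Illustrative sketch of the follow-up positioning trigger: follow-up is
# started only when the magnetic stimulation point has drifted more than
# 5 mm between the previous and current head positions.
FOLLOW_UP_THRESHOLD_MM = 5.0

def needs_follow_up(prev_point, curr_point, threshold_mm=FOLLOW_UP_THRESHOLD_MM):
    """Return True if the stimulation point moved more than threshold_mm."""
    dx = curr_point[0] - prev_point[0]
    dy = curr_point[1] - prev_point[1]
    dz = curr_point[2] - prev_point[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) > threshold_mm

# A 3 mm drift does not trigger follow-up; an 8 mm drift does.
print(needs_follow_up((0.0, 0.0, 0.0), (3.0, 0.0, 0.0)))  # False
print(needs_follow_up((0.0, 0.0, 0.0), (8.0, 0.0, 0.0)))  # True
```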
  • Compared with the prior art, the present invention has the following beneficial effects. (1) RGBD image data of the patient face is acquired by a 3D camera, and stable RGBD image data and 3D point cloud image data are obtained through calibration, background removal, filtering, and an illumination compensation algorithm; this provides a high-quality data source for face region detection and face feature point detection and avoids the influence of external factors such as illumination on feature point detection, thereby improving the accuracy of 3D pose positioning of the patient head. (2) In the process of the first positioning, the operating device is modeled, and the site to which the mechanical arm should be moved is determined by an algorithm; the stimulation sequence of the magnetic stimulation points and the optimal path and movement speed of the mechanical arm to reach each magnetic stimulation point are also planned by an algorithm, ensuring that the mechanical arm will not collide with other devices or other parts of the patient when moving the TMS coil to a magnetic stimulation point. (3) The present invention also has a repeated positioning function: after the first positioning is completed, the magnetic stimulation point information of the patient head and the path planning data of the mechanical arm during the first positioning are stored in the patient head model, and during the next treatment the data from the previous positioning can be invoked directly to position the patient head with one key, thereby improving treatment efficiency. (4) In the process of first positioning and repeated positioning, follow-up positioning of the patient head can also be performed: during treatment, even if the pose of the head changes, the head model can be quickly fine-adjusted so that its spatial pose matches the current actual spatial pose of the patient head; the latest magnetic stimulation point on the head model is then repositioned, the movement path of the mechanical arm is re-planned, and the TMS coil is moved to the latest magnetic stimulation point for treatment, thereby enhancing the patient experience.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flow schematic block diagram of a TMS positioning and navigation method for TMS treatment according to the present invention;
  • FIG. 2 is a flow schematic block diagram of an illumination compensation algorithm according to the present invention; and
  • FIG. 3 is a schematic diagram of a space coordinate system used for obtaining an angle of a human face relative to a camera according to the present invention.
  • DETAILED DESCRIPTION
  • The technical solutions of the present invention are clearly and fully described below with reference to the accompanying drawings. Apparently, the described embodiments are merely some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments that may be implemented by persons of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention.
  • Embodiment 1
  • As shown in FIG. 1, the present embodiment provides a TMS positioning and navigation method for TMS treatment, which specifically includes the following steps:
  • S1, making a patient lie flat on a horizontal translation platform, translating (moving) the horizontal translation platform to a predetermined position, and turning on a TMS treatment system by an operator;
  • S2, acquiring RGBD image data of a patient face by adopting a 3D camera, and obtaining stable RGBD image data and 3D point cloud image data through calibration, background removal, filtering, and an illumination compensation algorithm, where the RGBD image data includes an RGBD image, an infrared image, and a depth image;
  • S3, adopting a face detection algorithm for an RGBD image of the patient face to obtain a patient face region, and then obtaining patient face feature point information through an ASM feature point detection algorithm;
  • S4, spatially positioning the patient face, establishing a space coordinate system by taking a 3D camera as an origin point, and obtaining 3D coordinate values of patient face feature points in a 3D camera space coordinate system according to the 3D point cloud image data;
  • S5, modeling the patient head: rotating the 3D camera around the head at a uniform speed, or photographing simultaneously with multiple cameras, to obtain image data of the patient head in all directions; identifying the face feature point information in the image data in all the directions and calculating matching relationships between images; then obtaining a spatial position relationship between point clouds through an ICP algorithm on the 3D point clouds; integrating all the 3D point cloud image data in the scan data to obtain complete 3D data of the patient head; then, taking the MNI brain space coordinates commonly used in medicine as a standard, mapping 3D skull data of the MNI space obtained by brain 3D scanning onto the complete 3D data to obtain a patient head model; and then building a magnetic stimulation point model on the obtained patient head model;
  • S6, performing first positioning: matching a spatial pose of the patient head model with an actual spatial pose of the patient head to determine a spatial pose of an actual stimulation site of a patient, modeling an operating device, determining a site where a mechanical arm should be moved through an algorithm, planning a movement path of the mechanical arm through an algorithm, planning a stimulation sequence of magnetic stimulation points, planning an optimal path of the mechanical arm to reach each magnetic stimulation point, and automatically moving, by the mechanical arm, to a magnetic stimulation point of the patient head according to the planned path for magnetic stimulation treatment; and
  • S7, performing repeated positioning: after the first positioning is completed, saving magnetic stimulation point information of the patient head and path planning data of the mechanical arm during the first positioning in the patient head model, and during next treatment of the patient, directly invoking the data during the previous positioning to repeatedly position the patient head.
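The point-cloud registration in step S5 rests on a rigid-transform estimate refined by ICP. A minimal point-to-point ICP sketch in Python with NumPy, assuming brute-force nearest-neighbour matching (a real implementation would use KD-trees, and would seed ICP with the coarse pre-alignment from the feature-point matches):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD estimate of R, t such that R @ src_i + t ~= dst_i."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP with brute-force nearest-neighbour matching."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[dists.argmin(axis=1)]  # nearest dst point per src point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: a cloud rotated 10 degrees and translated is recovered exactly
# by best_rigid_transform when the correspondences are known.
rng = np.random.default_rng(0)
cloud = rng.random((30, 3))
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
moved = cloud @ R_true.T + np.array([0.05, -0.02, 0.03])
R_est, t_est = best_rigid_transform(cloud, moved)
print(np.allclose(cloud @ R_est.T + t_est, moved))  # True
```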
  • Specifically, as shown in FIG. 2, in step S2, the illumination compensation algorithm includes the following steps:
  • S21, analyzing the RGBD image by adopting a contour tracking algorithm to obtain a face contour line of the patient;
  • S22, mapping the contour line in S21 to the 3D point cloud image of the patient face;
  • S23, analyzing coordinate values of the point clouds on both sides of the contour line, along the contour line, on the 3D point cloud image: if there is a relatively large sudden change in the height of the point cloud coordinate values on the two sides, the contour is true and effective; if there is no such sudden change, the contour is a false contour formed by shadows cast by overly strong light;
  • S24, adjusting, on the RGBD image, brightness of pixels on both sides of a corresponding position of the false contour to be approximate, that is, adjusting from a dark region to a bright region, to eliminate shadows caused by too strong light;
  • S25, traversing on the 3D point cloud image to find a region with a relatively large sudden change in the height; and
  • S26, mapping the region with the relatively large sudden change in the height in S25 to the RGBD image, increasing brightness of pixels in a region with a sudden increase in the height, and decreasing brightness of pixels of a region with a sudden decrease in the height to eliminate an influence caused by too weak light.
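Steps S21 to S26 can be illustrated on a single image row: a contour pixel counts as true geometry only if the depth (height) values on its two sides differ sharply; otherwise it is treated as a shadow edge and the darker side is brightened toward the brighter side. A simplified sketch, assuming a hypothetical jump threshold (the patent does not specify one):

```python
import numpy as np

# Illustrative jump threshold in millimetres -- an assumption, not from the patent.
JUMP_MM = 10.0

def classify_contour(depth_row, idx, jump=JUMP_MM):
    """True contour if the depth changes sharply across column idx."""
    return abs(depth_row[idx + 1] - depth_row[idx - 1]) > jump

def compensate(brightness_row, depth_row, idx):
    """Brighten the dark side of a false (shadow) contour at column idx."""
    if not classify_contour(depth_row, idx):
        left, right = brightness_row[idx - 1], brightness_row[idx + 1]
        target = max(left, right)
        brightness_row[idx - 1] = target
        brightness_row[idx + 1] = target
    return brightness_row

depth = np.array([500.0, 500.0, 501.0, 502.0, 502.0])   # no sharp depth jump
bright = np.array([200.0, 80.0, 90.0, 210.0, 205.0])    # shadow edge at column 2
print(classify_contour(depth, 2))        # False -> shadow, not real geometry
print(compensate(bright, depth, 2)[1])   # 210.0 (dark side raised)
```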
  • Specifically, in step S3, the face detection algorithm is a method based on template matching, a method based on skin color features, or a method based on AdaBoost.
  • The method based on template matching: the implementation principle of a template matching method is to calculate a correlation coefficient between a target and a preset template by comparing a similarity between the template and a region to be detected. Face detection is to search an image to be detected for a region closest to a face by using a gray-scale template of the face. Compared with other methods for face detection based on features, the face detection method based on template matching is intuitive, simple, and easy to implement, and has strong adaptability, low dependence on image quality, and high robustness.
  • The method based on skin color features: one of the most distinctive external features of a face is skin color. Its advantage is that it does not depend on the remaining features of the face and is insensitive to changes in face pose and shape. The face can be detected relatively easily by constructing a skin color model that distinguishes the face from backgrounds of other colors. However, this method cannot distinguish skin-like objects well, which causes false detections, and it is relatively sensitive to illumination changes, which affects the accuracy of face detection.
  • The method based on AdaBoost: first, Haar-like features are adopted to represent a face, and an integral image is used to accelerate the calculation of Haar-like feature values; AdaBoost is then used to screen out the best rectangular face features. These features are called weak classifiers, and they are finally connected in series to form a strong classifier, achieving the purpose of detecting a face. In addition, this method is relatively insensitive to illumination changes.
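The template-matching variant described above amounts to sliding a gray-scale face template over the image and keeping the window with the highest similarity. In practice this is what OpenCV's `cv2.matchTemplate` provides; the toy normalized-correlation search below (all names and sizes are illustrative assumptions) only demonstrates the principle on small arrays.

```python
import numpy as np

def match_template(image, template):
    """Return the top-left (x, y) of the window most similar to the template,
    using zero-mean normalized cross-correlation (a toy stand-in for
    cv2.matchTemplate with TM_CCOEFF_NORMED)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw].astype(np.float64)
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos
```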
  • Specifically, in step S4, the method for spatially positioning the patient face includes the following steps:
  • S41, obtaining a 3D point cloud image of a face by using an infrared camera in the 3D camera, finding a point of the patient face closest to the camera, that is, the tip of a nose, and obtaining space coordinates (x, y, z) in the camera coordinate system;
  • S42, using the stable RGBD image obtained in step S2, finding feature points comprising the tip of the nose, a root of the nose, corners of eyes, corners of a mouth, and eyebrows of a human in the image according to a face feature recognition algorithm, and deducing an angle (rx, ry, rz) of the patient face relative to the camera according to left-right symmetry of the face; and
  • S43, synthesizing the point coordinates and the angle obtained in S41 and S42 into complete 3D space coordinates (x, y, z, rx, ry, rz).
  • Further, in step S41, the (x, y) coordinates of the tip of the nose are obtained by the following method: first, 68 feature points on the face are drawn in a demo through OpenCV and numbered; and then the following model is used:
  • circle(temp, cvPoint(shapes[0].part(i).x(), shapes[0].part(i).y()), 3, cv::Scalar(0, 0, 255), -1),
  • where, part(i) represents the ith feature point, and x( ) and y( ) are ways to access two-dimensional coordinates of feature points.
  • Further, a z-axis coordinate of the tip of the nose, that is, the distance from the tip of the nose to the camera, can be obtained through the binocular-matching triangulation principle. The triangulation principle states that the difference between the imaging abscissas of a target point in the left and right images (the disparity d) is inversely proportional to the distance from the target point to the imaging plane: Z = f·T/d, where f is the focal length and T is the baseline between the two cameras, thereby giving the Z-axis coordinate. Binocular matching applies the triangulation principle and is based entirely on image processing technology; matching points are obtained by searching for the same feature points in the two images. Obtaining the space coordinates of the face in a camera coordinate system is mature prior art, and details are not repeated herein.
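The triangulation relation can be written as a one-line computation. The numeric values below are illustrative assumptions (a 500 px focal length and a 60 mm baseline), chosen only to show the units working out.

```python
def depth_from_disparity(focal_px, baseline_mm, x_left, x_right):
    """Triangulation: Z = f * T / d, where d = x_left - x_right is the
    disparity in pixels, f the focal length in pixels, and T the stereo
    baseline in mm; Z is then in mm."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_mm / d
```

For example, a 20 px disparity with f = 500 px and T = 60 mm gives Z = 1500 mm.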
  • Further, in step S42, the angle (rx, ry, rz) of the patient face relative to the camera is obtained in the following manner.
  • First, the space coordinate system is constructed. A human head model is constructed according to the human face feature points (the tip of the nose, the root of the nose, the corners of the eyes, the corners of the mouth, the eyebrows, etc.) in a picture; the picture is then mapped to the head model, with the face feature points corresponding to feature points on the head model. As shown in FIG. 3, on the head model, a plane is created through three points, i.e., the left earlobe (point A), the right earlobe (point B), and the tip of the nose (point C), and a face-based space coordinate system is constructed by using the midpoint O of points A and B as the origin, the direction perpendicular to the plane as the Z axis, the direction of the line connecting points A and B as the X axis, and the direction of the line connecting points C and O as the Y axis, where the X axis is perpendicular to the Y axis. The angle (rx, ry, rz) of the human face relative to the camera can then be deduced by calculating the included angles α, β, and γ between the line from point O to the camera and the X, Y, and Z axes.
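Computing the included angles α, β, and γ reduces to taking the arccosine of dot products between the unit O-to-camera vector and the three unit axes. A minimal sketch (function name and degree output are assumptions of this illustration):

```python
import numpy as np

def face_angles(origin, camera, x_axis, y_axis, z_axis):
    """Angles (alpha, beta, gamma) in degrees between the line from the face
    origin O to the camera and the face-based X, Y, and Z axes."""
    v = np.asarray(camera, float) - np.asarray(origin, float)
    v /= np.linalg.norm(v)
    angles = []
    for axis in (x_axis, y_axis, z_axis):
        a = np.asarray(axis, float)
        a /= np.linalg.norm(a)
        # clip guards against rounding pushing the dot product past +/-1
        angles.append(np.degrees(np.arccos(np.clip(np.dot(v, a), -1.0, 1.0))))
    return tuple(angles)
```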
  • Specifically, in step S5, modeling the head requires acquiring 3D scan data of the patient head with a 3D camera. Each time the 3D camera performs photographing, a color image, a depth image, and a 3D point cloud image are generated. The three images are generated at the same time, and thus points on the images have a fixed corresponding relationship, which is known and obtained through calibration of the camera. 3D scanning captures a series of images around the patient head and then stitches them into a complete image. Image stitching involves finding the same parts of two images and matching them. A 3D camera cannot obtain a 3D point cloud for hair, but 3D data of the skull (without hair) is needed for a medical treatment head model; therefore, the patient needs to wear a specific cap during scanning for the head model. To make the matching more accurate, some mark points are usually provided on the cap. 3D scanning ultimately needs to stitch 3D point clouds, and the rotation and translation relationship between the point clouds of all images is needed during stitching. The stitching of point clouds mainly relies on an ICP algorithm.
  • Further, the stitching of point clouds includes the following steps.
  • At S51, “key points” are first detected in a color image through cv::FeatureDetector in OpenCV, “descriptors” of the pixels around the key points are calculated through cv::DescriptorExtractor, the descriptors are then matched using cv::DMatch, and the solvePnPRansac function in OpenCV is called to solve PnP and obtain the displacement and rotation information between the two images.
  • At S52, point cloud data of the two images are calculated by using the displacement and rotation information calculated above as a result of initial coarse matching of the ICP algorithm to obtain more precise displacement and rotation data.
  • At S53, a displacement and rotation matrix is obtained using the above displacement and rotation data, all points in a previous point cloud image are rotated and translated, and a new point cloud calculated is added to a current point cloud image to obtain a larger point cloud and complete integration of the two point clouds.
  • At S54, steps S51 to S53 are repeated, all point cloud images are integrated into a larger point cloud image, then filtering and smoothing processing is performed on the point cloud image, sampling is performed to reduce the number of points, and fitting is performed to obtain 3D curved surface data, so as to obtain complete 3D data of the head of the patient.
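The core of S53, rotating and translating the previous point cloud and concatenating it with the new one, can be sketched in a few lines; the function name and the N×3 array layout are assumptions of this illustration.

```python
import numpy as np

def merge_clouds(prev_cloud, new_cloud, R, t):
    """Apply the rotation R (3x3) and translation t (3,) obtained from
    ICP refinement to the previous point cloud (N x 3), then append the
    new cloud to obtain one larger point cloud (S53 sketch)."""
    prev = np.asarray(prev_cloud, float)
    moved = prev @ np.asarray(R, float).T + np.asarray(t, float)
    return np.vstack([moved, np.asarray(new_cloud, float)])
```

Repeating this over all image pairs (S54) accumulates every scan into one cloud, which is then filtered, downsampled, and fitted to a 3D surface.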
  • Specifically, in step S6, the spatial pose of the patient head model is matched with the actual spatial pose of the patient head. The specific matching method is as follows: during treatment, the 3D image captured by the 3D camera in real time contains only face information of the patient and no full-head information, and thus the head model constructed in S5 needs to be aligned in position with the face data captured in real time. The requirements of real-time detection cannot be satisfied by an ICP algorithm because of its large calculation amount. The position alignment method includes: first marking face feature points (corners of the eyes, the tip of the nose, etc.) for alignment in the head model, then automatically identifying the face feature points in the real-time image, calculating a conversion relationship between the real-time image and the head model through feature point matching, calculating the position of the head model in space, and then calculating the position coordinates of a magnetic stimulation point on the head model in space.
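Estimating the conversion relationship from matched feature-point pairs is a rigid-transform fit; one standard way to do it (the SVD/Kabsch method, which the source does not name, so treat this as an assumed implementation choice) is sketched below.

```python
import numpy as np

def rigid_transform(model_pts, live_pts):
    """Least-squares rotation R and translation t mapping model feature
    points (N x 3, N >= 3 non-collinear) onto live ones via SVD (Kabsch)."""
    A = np.asarray(model_pts, float)
    B = np.asarray(live_pts, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correction matrix prevents a reflection from sneaking in
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

Applying R and t to the head model then places it, and the magnetic stimulation point on it, in the camera's space.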
  • Further, the modeling of the operating device specifically includes modeling the mechanical arm, the 3D camera, and the TMS coil, placing the built models and the patient head model in the same space coordinate system (the camera space coordinate system), then selecting a target to be magnetically stimulated on the head model, calculating the best path for the TMS coil to reach the target point in the space model, and ensuring that there is no collision during movement.
  • Further, a movement path planning algorithm for a mechanical arm is generally relatively complicated; since the models, obstacles, and paths in the present embodiment are all known, a manual path planning method is adopted. A straight path is used at positions far away from the head model, and an arc path is used near the head model, so as to move the TMS coil around the head to the next magnetic stimulation target. Since the 3D data of the head model is known, the head model data can be enlarged to leave a safe running clearance, and the shortest arc path between two points on the head model can be calculated.
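On an enlarged, roughly spherical head surface, the shortest arc between two stimulation targets is a great-circle arc, which can be sampled by spherical interpolation. The spherical simplification, the margin factor, and all names below are assumptions of this sketch, not the disclosed planner.

```python
import numpy as np

def arc_path(p1, p2, center, margin=1.1, n=10):
    """Sample n points along the shortest arc from p1 to p2 on a sphere
    about `center`, enlarged by `margin` for safe clearance (slerp)."""
    c = np.asarray(center, float)
    v1 = np.asarray(p1, float) - c
    v2 = np.asarray(p2, float) - c
    r = margin * max(np.linalg.norm(v1), np.linalg.norm(v2))
    u1 = v1 / np.linalg.norm(v1)
    u2 = v2 / np.linalg.norm(v2)
    omega = np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))  # arc angle
    ts = np.linspace(0.0, 1.0, n)
    pts = [c + r * (np.sin((1 - t) * omega) * u1 + np.sin(t * omega) * u2)
           / np.sin(omega) for t in ts]
    return np.array(pts)
```

Every sampled point stays at radius r, so the coil never cuts inside the enlarged head surface while sweeping between targets.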
  • Embodiment 2
  • The present embodiment provides a TMS positioning and navigation method for TMS treatment, and differs from the above embodiment in that the TMS positioning and navigation method according to the present embodiment further includes a follow-up positioning method.
  • Specifically, in the process of first positioning or repeated positioning, follow-up positioning of the patient head is required. During treatment, the position information of the patient head is recorded each time positioning is completed. If the distance between the current target position and the target position at the previous moment exceeds 5 mm due to motion of the patient head, a follow-up positioning action is started: fine adjustment of the head model and of mechanical arm navigation is performed through an algorithm, the correct magnetic stimulation point is repositioned, the motion path of the mechanical arm is re-calculated, and the coil is moved to the new target position. If the distance does not exceed 5 mm, the follow-up positioning action is not started. If the patient turns a lot, camera following and mechanical arm movement are paused, and coil magnetic stimulation is paused; if the patient is not within the adjustable range of the camera or leaves, the mechanical arm and coil magnetic stimulation actions are stopped.
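The decision logic above can be summarized as a small state function. Only the 5 mm threshold comes from the description; the large-turn limit, the out-of-view flag, and the action labels are illustrative assumptions.

```python
import numpy as np

def follow_up_action(prev_target, curr_target, threshold_mm=5.0,
                     turn_limit_mm=30.0, in_view=True):
    """Choose the follow-up response to head motion between two moments.
    threshold_mm = 5 mm per the description; turn_limit_mm is assumed."""
    if not in_view:
        return "stop"          # patient left the camera's adjustable range
    shift = np.linalg.norm(np.asarray(curr_target, float)
                           - np.asarray(prev_target, float))
    if shift > turn_limit_mm:
        return "pause"         # patient turned a lot: pause arm and coil
    if shift > threshold_mm:
        return "reposition"    # re-plan the arm path, move the coil
    return "hold"              # within tolerance, continue treatment
```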
  • Embodiment 3
  • The present embodiment further provides a TMS positioning and navigation device for TMS treatment, including a lying bed, a headrest, a 3D camera, and a mechanical arm. During treatment, the patient lies flat on the lying bed and rests the head on the headrest; 3D data of the patient head is acquired through the 3D camera and, in combination with an algorithm, a magnetic stimulation point on the patient head is precisely positioned; the mechanical arm is then controlled to move a TMS coil to the magnetic stimulation point to treat the patient.
  • The lying bed can be moved forwards and backwards, and is configured to adjust a relative position of the patient head and the camera.
  • The headrest mainly functions as a bracket, with the skull and the neck as supporting sites; it is configured to limit movement of the patient without causing discomfort, and does not obstruct magnetic stimulation of the back of the head.
  • The 3D camera adopts an existing camera, and multiple 3D cameras can be used to acquire images of the patient head from multiple angles, or one camera can be combined with other devices to move, rotate and scan the patient head to acquire the 3D data of the patient head.
  • A pressure sensor is provided at the location where the mechanical arm and the coil are joined, so that the contact between the coil and the patient head causes neither a loose fit nor excessive pressure, which can effectively improve the patient's experience.
  • Although the embodiments of the present invention are shown and described, persons of ordinary skill in the art can understand that various changes, modifications, substitutions and transformations can be made to the embodiments without departing from the principle and spirit of the present invention. The scope of the present invention is defined by the appended claims and equivalents thereof.

Claims (8)

What is claimed is:
1. A Transcranial Magnetic Stimulation (TMS) positioning and navigation method for TMS treatment, comprising the following steps:
S1, translating a patient to a predetermined position through a horizontal translation platform, and turning on a TMS treatment system;
S2, scanning a head of the patient through a three-dimensional (3D) scanner to obtain 3D model data of the patient head;
S3, performing first positioning: detecting positions of the patient head and a TMS coil, matching a 3D model with the patient head to obtain a precise treatment location, planning, by a controller, a movement path of a mechanical arm according to the positions of the patient head and the TMS coil, and controlling the mechanical arm to precisely move the TMS coil to the patient head for treatment; and
S4, performing follow-up positioning: during treatment, if the position of the patient head changes, detecting a new position of the patient head in real time, and matching the 3D model of the patient head with the new position to obtain a precise treatment location, so as to perform real-time precise positioning treatment of the patient head.
2. The TMS positioning and navigation method for TMS treatment according to claim 1, wherein, step S2 specifically comprises the following steps:
S21, acquiring RGBD image data of a patient face by adopting a 3D camera, and obtaining stable RGBD image data and 3D point cloud image data through calibration, background removal, filtering, and an illumination compensation algorithm;
S22, obtaining patient face feature point information: processing an RGBD image of the patient face by adopting a face detection algorithm to obtain a patient face region, and then obtaining patient face feature point information through an ASM feature point detection algorithm;
S23, spatially positioning the patient face, establishing a space coordinate system by taking a 3D camera as an origin point, and obtaining 3D coordinate values of a patient face feature point in a 3D camera space coordinate system according to the 3D point cloud image data; and
S24, modeling the patient head, rotating the 3D camera around the head at a uniform speed or performing photographing at a same time by using multiple 3D cameras to obtain image data of the patient head in all directions, by identifying the face feature point information in the image data in all the directions, calculating matching relationships between images in all the directions, then obtaining a spatial position relationship between point clouds through an ICP algorithm of the 3D point clouds, and integrating all the 3D point cloud image data in scan data to obtain complete 3D data of the patient head; and then using MNI brain space coordinates as a standard, mapping 3D skull data of an MNI space to the complete 3D data to obtain a patient head model, and then building a magnetic stimulation point model on the patient head model.
3. The TMS positioning and navigation method for TMS treatment according to claim 2, wherein, in step S21, the illumination compensation algorithm comprises the following steps:
S211, analyzing the RGBD image by adopting a contour tracking algorithm to obtain a face contour line of the patient;
S212, mapping the contour line in step S211 to the 3D point cloud image of the patient face;
S213, analyzing coordinate values of point clouds on both sides of the contour line along the contour line on the 3D point cloud image, if there is a relatively large sudden change in a height of the coordinate values of the point clouds on the both sides, indicating that a contour is true and effective, and if there is no relatively large sudden change, indicating that the contour is a false contour formed due to shadows caused by too strong light;
S214, adjusting, on the RGBD image, brightness of pixels on both sides of a corresponding position of the false contour to be approximate, that is, adjusting from a dark region to a bright region, to eliminate shadows caused by too strong light;
S215, traversing on the 3D point cloud image to find a region with a relatively large sudden change in the height; and
S216, mapping the region with the relatively large sudden change in the height in step S215 to the RGBD image, increasing brightness of pixels in a region with a sudden increase in the height, and decreasing brightness of pixels of a region with a sudden decrease in the height to eliminate an influence caused by too weak light.
4. The TMS positioning and navigation method for TMS treatment according to claim 2, wherein,
in step S22, the face detection algorithm is a method based on template matching, a method based on skin color features, or a method based on AdaBoost.
5. The TMS positioning and navigation method for TMS treatment according to claim 2, wherein, in step S23, the method for spatially positioning the patient face comprises the following steps:
S231, obtaining a 3D point cloud image of a face by using the 3D camera, finding a point of the patient face closest to the camera, that is, the tip of a nose, and obtaining space coordinates (x, y, z) in the camera coordinate system;
S232, using the stable RGBD image obtained in step S21, finding feature points comprising the tip of the nose, a root of the nose, corners of eyes, corners of a mouth, and eyebrows of a human in the image according to a face feature recognition algorithm, and deducing an angle (rx, ry, rz) of the patient face relative to the camera according to left-right symmetry of the face; and
S233, synthesizing the point coordinates and the angle obtained in step S231 and step S232 into complete 3D space coordinates (x, y, z, rx, ry, rz).
6. The TMS positioning and navigation method for TMS treatment according to claim 1, wherein,
in step S3, the planning a movement path of a mechanical arm comprises: planning a stimulation sequence of magnetic stimulation points, and planning an optimal path and a movement speed of the mechanical arm to reach each magnetic stimulation point.
7. The TMS positioning and navigation method for TMS treatment according to claim 1, wherein,
in step S4, the follow-up positioning specifically is: recording position information of a magnetic stimulation point of the patient head each time positioning is completed in a treatment process, if at a next moment, a distance between positions of the magnetic stimulation point at a current moment and a previous moment exceeds 5 mm due to movement of the patient head, starting follow-up positioning, and if the distance does not exceed 5 mm, not starting follow-up positioning;
after starting the follow-up positioning, first adjusting a spatial pose of the patient head model, so that the spatial pose of the head model is matched with a current actual spatial pose of the patient head, then repositioning a latest magnetic stimulation point on the head model, finally re-planning the movement path of the mechanical arm, and moving the TMS coil to the latest magnetic stimulation point for treatment.
8. The TMS positioning and navigation method for TMS treatment according to claim 1, further comprising repeated positioning, wherein the repeated positioning is specifically:
after the first positioning is completed, saving treatment location information of the patient head and path planning data of the mechanical arm during the first positioning in patient head model data, and during next treatment of the patient, directly invoking the data during the previous positioning to repeatedly position the patient head.
US17/279,202 2018-09-27 2019-02-26 Transcranial magnetic stimulation (tms) positioning and navigation method for tms treatment Pending US20220036584A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201811131181.6 2018-09-27
CN201811131181.6A CN109260593B (en) 2018-09-27 2018-09-27 Transcranial magnetic stimulation treatment method and equipment
PCT/CN2019/076099 WO2020062773A1 (en) 2018-09-27 2019-02-26 Tms positioning navigation method used for transcranial magnetic stimulation treatment

Publications (1)

Publication Number Publication Date
US20220036584A1 true US20220036584A1 (en) 2022-02-03

Family

ID=65198606

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/279,202 Pending US20220036584A1 (en) 2018-09-27 2019-02-26 Transcranial magnetic stimulation (tms) positioning and navigation method for tms treatment
US17/279,219 Pending US20220031408A1 (en) 2018-09-27 2019-02-26 Transcranial magnetic stimulation diagnostic and treatment device

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/279,219 Pending US20220031408A1 (en) 2018-09-27 2019-02-26 Transcranial magnetic stimulation diagnostic and treatment device

Country Status (4)

Country Link
US (2) US20220036584A1 (en)
EP (2) EP3858432B1 (en)
CN (1) CN109260593B (en)
WO (2) WO2020062773A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220019771A1 (en) * 2019-04-19 2022-01-20 Fujitsu Limited Image processing device, image processing method, and storage medium
CN114376726A (en) * 2022-02-08 2022-04-22 西安科悦医疗股份有限公司 Path planning method and related device for transcranial magnetic stimulation navigation process
CN115154907A (en) * 2022-07-19 2022-10-11 深圳英智科技有限公司 Transcranial magnetic stimulation coil positioning control method and system and electronic equipment
US11794029B2 (en) 2016-07-01 2023-10-24 Btl Medical Solutions A.S. Aesthetic method of biological structure treatment by magnetic field
US11806528B2 (en) 2020-05-04 2023-11-07 Btl Healthcare Technologies A.S. Device and method for unattended treatment of a patient
US11826565B2 (en) 2020-05-04 2023-11-28 Btl Healthcare Technologies A.S. Device and method for unattended treatment of a patient
US11878162B2 (en) 2016-05-23 2024-01-23 Btl Healthcare Technologies A.S. Systems and methods for tissue treatment
US11883643B2 (en) 2016-05-03 2024-01-30 Btl Healthcare Technologies A.S. Systems and methods for treatment of a patient including RF and electrical energy
US11896816B2 (en) 2021-11-03 2024-02-13 Btl Healthcare Technologies A.S. Device and method for unattended treatment of a patient

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109260593B (en) * 2018-09-27 2020-09-08 武汉资联虹康科技股份有限公司 Transcranial magnetic stimulation treatment method and equipment
CN110300993B (en) * 2019-02-26 2023-10-27 武汉资联虹康科技股份有限公司 Imaging system for transcranial magnetic stimulation diagnosis and treatment
CN110896611B (en) * 2019-02-26 2023-11-21 武汉资联虹康科技股份有限公司 Transcranial magnetic stimulation diagnosis and treatment navigation system based on camera
EP3905256A4 (en) * 2019-02-26 2022-02-09 Wuhan Znion Technology Co., Ltd Camera-based transcranial magnetic stimulation diagnosis head model building system
CN110382046B (en) * 2019-02-26 2023-11-24 武汉资联虹康科技股份有限公司 Transcranial magnetic stimulation diagnosis and treatment detection system based on camera
CN112546447B (en) * 2020-11-24 2021-07-27 四川君健万峰医疗器械有限责任公司 Stimulating coil positioning device

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8047979B2 (en) * 2001-04-20 2011-11-01 Mclean Hospital Corporation Magnetic field treatment techniques
US9884200B2 (en) * 2008-03-10 2018-02-06 Neuronetics, Inc. Apparatus for coil positioning for TMS studies
EP2684518A4 (en) * 2011-03-09 2014-09-17 Univ Osaka Image data processing device and transcranial magnetic stimulation apparatus
US9265965B2 (en) * 2011-09-30 2016-02-23 Board Of Regents, The University Of Texas System Apparatus and method for delivery of transcranial magnetic stimulation using biological feedback to a robotic arm
CN102814002B (en) * 2012-08-08 2015-04-01 深圳先进技术研究院 Cerebral magnetic stimulation navigation system and cerebral magnetic stimulation coil positioning method
CN103558910B (en) * 2013-10-17 2016-05-11 北京理工大学 A kind of intelligent display system of automatic tracking head pose
US10675479B2 (en) * 2013-12-24 2020-06-09 Osaka University Operation teaching device and transcranial magnetic stimulation device
WO2015120479A1 (en) * 2014-02-10 2015-08-13 Neuronetics, Inc. Head modeling for a therapeutic or diagnostic procedure
CN107106860B (en) * 2014-10-07 2020-09-22 帝人制药株式会社 Transcranial magnetic stimulation system
CN104474636B (en) * 2014-11-20 2017-02-22 西安索立德医疗科技有限公司 Multi-point multi-frequency three-dimensional transcranial magnetic stimulation system and intracranial and extracranial coordinate conversion method
US10330958B2 (en) * 2015-04-10 2019-06-25 Bespoke, Inc. Systems and methods for creating eyewear with multi-focal lenses
EP3374021B1 (en) * 2015-11-09 2019-08-28 Axilum Robotics (Société par Actions Simplifiée) Magnetic stimulation device comprising a force-sensing resistor
CN106110507A (en) * 2016-07-26 2016-11-16 沈阳爱锐宝科技有限公司 The navigation positional device of a kind of transcranial magnetic stimulation device and localization method
GB2552810B (en) * 2016-08-10 2021-05-26 The Magstim Company Ltd Headrest assembly
CN106345062B (en) * 2016-09-20 2018-01-16 华东师范大学 A kind of cerebral magnetic stimulation coil localization method based on magnetic resonance imaging
DE102017104627A1 (en) * 2017-03-06 2018-09-06 Mag & More Gmbh POSITIONING AID FOR TMS
CN107330976B (en) * 2017-06-01 2023-05-02 北京大学第三医院 Human head three-dimensional modeling device and use method
CN107497049A (en) * 2017-09-30 2017-12-22 武汉资联虹康科技股份有限公司 A kind of electromagnetic location air navigation aid and device for transcranial magnetic stimulation device
JP2021508522A (en) * 2017-12-21 2021-03-11 ニューラレース メディカル,インコーポレイテッド Devices, systems, and methods for non-invasive chronic pain therapy
CN108344420A (en) * 2018-02-22 2018-07-31 王昕� A kind of transcranial magnetic stimulation intelligent navigation positioning device
CN109173062B (en) * 2018-09-17 2022-03-22 武汉资联虹康科技股份有限公司 Efficient TMS (TMS) repeated positioning method
CN109260593B (en) * 2018-09-27 2020-09-08 武汉资联虹康科技股份有限公司 Transcranial magnetic stimulation treatment method and equipment

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11883643B2 (en) 2016-05-03 2024-01-30 Btl Healthcare Technologies A.S. Systems and methods for treatment of a patient including RF and electrical energy
US11878162B2 (en) 2016-05-23 2024-01-23 Btl Healthcare Technologies A.S. Systems and methods for tissue treatment
US11896821B2 (en) 2016-05-23 2024-02-13 Btl Healthcare Technologies A.S. Systems and methods for tissue treatment
US11794029B2 (en) 2016-07-01 2023-10-24 Btl Medical Solutions A.S. Aesthetic method of biological structure treatment by magnetic field
US20220019771A1 (en) * 2019-04-19 2022-01-20 Fujitsu Limited Image processing device, image processing method, and storage medium
US11806528B2 (en) 2020-05-04 2023-11-07 Btl Healthcare Technologies A.S. Device and method for unattended treatment of a patient
US11813451B2 (en) 2020-05-04 2023-11-14 Btl Healthcare Technologies A.S. Device and method for unattended treatment of a patient
US11826565B2 (en) 2020-05-04 2023-11-28 Btl Healthcare Technologies A.S. Device and method for unattended treatment of a patient
US11878167B2 (en) 2020-05-04 2024-01-23 Btl Healthcare Technologies A.S. Device and method for unattended treatment of a patient
US11896816B2 (en) 2021-11-03 2024-02-13 Btl Healthcare Technologies A.S. Device and method for unattended treatment of a patient
CN114376726A (en) * 2022-02-08 2022-04-22 西安科悦医疗股份有限公司 Path planning method and related device for transcranial magnetic stimulation navigation process
CN115154907A (en) * 2022-07-19 2022-10-11 深圳英智科技有限公司 Transcranial magnetic stimulation coil positioning control method and system and electronic equipment

Also Published As

Publication number Publication date
CN109260593B (en) 2020-09-08
EP3858433A4 (en) 2021-11-24
CN109260593A (en) 2019-01-25
WO2020062773A1 (en) 2020-04-02
EP3858432A4 (en) 2021-12-01
EP3858432A1 (en) 2021-08-04
EP3858433B1 (en) 2023-10-11
US20220031408A1 (en) 2022-02-03
EP3858432B1 (en) 2022-10-26
WO2020062774A1 (en) 2020-04-02
EP3858433A1 (en) 2021-08-04

Similar Documents

Publication Publication Date Title
EP3858433B1 (en) Tms positioning navigation apparatus for transcranial magnetic stimulation treatment
CN110896609B (en) TMS positioning navigation method for transcranial magnetic stimulation treatment
CN110896611B (en) Transcranial magnetic stimulation diagnosis and treatment navigation system based on camera
CN110300993B (en) Imaging system for transcranial magnetic stimulation diagnosis and treatment
US10653381B2 (en) Motion tracking system for real time adaptive motion compensation in biomedical imaging
US7929661B2 (en) Method and apparatus for radiographic imaging
WO2021169191A1 (en) Fast ct scanning method and system based on virtual stereoscopic positioned image
CN110382046B (en) Transcranial magnetic stimulation diagnosis and treatment detection system based on camera
CN106139423A (en) A kind of image based on photographic head guides seeds implanted system
CN116630382B (en) Nerve regulation and control image monitoring registration system and control method
US11948247B2 (en) Camera-based transcranial magnetic stimulation diagnosis and treatment head modeling system
CN113081013B (en) Spacer scanning method, device and system
CN115006737A (en) Radiotherapy body position monitoring system based on depth camera
CN110896610A (en) Transcranial magnetic stimulation diagnosis and treatment equipment
EP4277529B1 (en) Chest x-ray system and method
WO2024069042A1 (en) A dental x-ray imaging system and a method for dental x-ray imaging of a patient
Shin et al. Development of stereo camera module using webcam for navigation Transcranial Magnetic Stimulation system

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION