CN117959605B - TMS individualized positioning method based on functional magnetic resonance language task guidance


Info

Publication number: CN117959605B
Application number: CN202410087931.3A
Authority: CN (China)
Prior art keywords: language, individual, mask, functional, image
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN117959605A
Inventors: 王翠翠, 伍新野, 张喆
Assignee (current and original): Hangzhou Normal University
Priority and filing date: 2024-01-22
Publication dates: CN117959605A, 2024-05-03; CN117959605B (grant), 2024-06-25
Family ID: 90847156

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a TMS individualized positioning method guided by a functional magnetic resonance language task. The language area is located from functional image data acquired while the individual performs a language task, which improves positioning accuracy, effectively overcomes the limitations of existing methods, and solves the problem of inaccurate positioning in previous approaches. In addition, the language-area mask defined in standard space is mapped into the individual space by inverse matrix transformation, providing a way to accurately locate the language functional area of the functional images in the individual space and solving the long-standing difficulty of accurately targeting the language functional area with TMS in the individual space.

Description

TMS individualized positioning method based on functional magnetic resonance language task guidance
Technical Field
The invention belongs to the technical field of image processing and relates to an individualized Transcranial Magnetic Stimulation (TMS) positioning method guided by a functional magnetic resonance language task.
Background
Transcranial Magnetic Stimulation (TMS) generates a time-varying magnetic field by passing a strong current through a coil; the field acts on the cerebral cortex through the skull and induces currents that trigger a series of physiological and metabolic changes in the brain, including action potential generation, long-term potentiation and long-term depression. Although TMS can improve the language deficits of aphasia, language processing is a highly complex process and the condition of every aphasia patient differs (for example in age, lesion location, disease course, aphasia type and degree of bilateral hemispheric dominance), so accurate positioning has a great influence on the therapeutic effect of TMS. Existing TMS positioning techniques usually rely on the electroencephalogram 10-20 electrode positioning system or on structural images. With the 10-20 electrode positioning method, the target is located according to the arrangement of the EEG electrodes (Herwig et al., 2003): an EEG cap is placed on the subject's head and the pre-marked Broca's area or Wernicke's area on the cap is identified. Structural image localization is based on brain anatomy; it is anatomically accurate but functionally blind, because it ignores the fact that the anatomical location of the brain does not correspond exactly to its functional areas.
Therefore, the TMS individualized accurate positioning method guided by a functional magnetic resonance language task provided by the invention locates the language area from functional image data acquired while the individual performs a language task, and at the same time maps the standard-space language-area mask into the individual space by inverse matrix transformation. This improves positioning accuracy, effectively overcomes the limitations of existing methods, and solves the long-standing difficulty of accurately locating the language functional area in the individual space.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a TMS individualized accurate positioning method based on functional magnetic resonance language task guidance.
In a first aspect, the invention provides a TMS individualized positioning method based on functional magnetic resonance language task guidance, comprising the following steps:
Step S1, acquiring functional magnetic resonance image data of an individual, wherein the functional magnetic resonance image data comprise structural image data of the individual and functional image data acquired while the individual performs a picture naming task;
Step S2, preprocessing the functional magnetic resonance image data to obtain preprocessed data;
Step S3, creating a language-area mask from the standard brain template in the Montreal standard space and inversely transforming the standard-space language-area mask into the individual space to obtain the individual-space language-area mask, so as to locate the language functional area of the individual brain; comprising the following steps:
S301, selecting the language-related brain regions from the standard brain template to create a language-area mask, obtaining the language-area mask in standard space;
S302, segmenting the structural image preprocessed in step S2 into white matter, gray matter, cerebrospinal fluid and other tissues, and setting the deformation fields (Deformation fields) option to both inverse and forward (Inverse + Forward), so that the segmented white matter, gray matter, cerebrospinal fluid and other tissues are registered to the Montreal standard space, thereby obtaining a forward transformation matrix describing the displacement of each voxel from the individual space to the standard space and an inverse transformation matrix describing the displacement of each voxel from the standard space to the individual space;
S303, registering the standard-space language-area mask to the individual space according to the inverse transformation matrix of step S302 to obtain the individual-space language-area mask;
Step S4, performing activation analysis on the whole brain of the individual and taking the maximally activated point within the individual-space language-area mask as the TMS stimulation target.
Preferably, in step S1, the picture naming task comprises n picture naming conditions and n control conditions, n > 1, with the picture naming conditions and the control conditions presented alternately; under the picture naming condition, m pictures (m > 1) are presented in random order each time, each picture is presented for 4 seconds, and the subject is required to name the content of each picture as it appears.
Preferably, in step S1, the control condition comprises presenting a "+" fixation point for 20 seconds each time; the subject only needs to look at the fixation point without speaking.
Preferably, in step S2, the preprocessing specifically comprises:
S201, performing format conversion on the functional magnetic resonance image data of step S1, including the structural image data and the functional image data, converting the original DICOM files into the NIFTI format suitable for data analysis;
S202, performing time layer (slice-timing) correction and head motion correction on the format-converted functional image data;
S203, aligning the functional images processed in step S202 to the structural image converted in step S201, so that activation locations in the functional images can be positioned on the structural image without changing the voxel size of the functional images;
S204, opening software capable of viewing nii images, loading the structural image as the bottom layer and the functional image as the overlay, and checking whether the brain in the functional image overlaps the brain in the structural image; if so, proceeding to step S205; otherwise, the alignment of the functional and structural images is considered poor, origin correction is performed on the head-motion-corrected functional images of step S202 and on the format-converted structural image of step S201, and the origin-corrected functional images are then aligned to the structural image again;
S205, performing spatial smoothing on the aligned functional image data.
Preferably, in step S204, the origin correction means performing a rigid-body transformation of the functional magnetic resonance image data along six directions (up-down, left-right, front-back, roll, pitch and yaw): the roll, pitch and yaw parameters are adjusted first until the crosshair intersection lies on the anterior commissure in the sagittal plane, with the posterior commissure and the anterior commissure kept on the same straight line; the spatial coordinates of the crosshair intersection are then taken (with their signs reversed) as the up-down, left-right and front-back parameters, respectively.
Preferably, in step S205, the smoothing kernel used for the spatial smoothing has a full width at half maximum of 6 mm.
Preferably, step S4 specifically comprises:
S401, constructing a First-level model for the functional images preprocessed in step S2;
constructing the First-level model comprises setting the experimental conditions, setting the onset time and duration of each experimental condition, and including the head motion parameter file as a regressor to exclude false positive results possibly caused by head motion;
the experimental conditions comprise the picture naming condition and the control condition;
S402, estimating the constructed First-level model and performing a T-test at the whole-brain level on the activation under the picture naming condition versus the control condition, obtaining a whole-brain T-value map for the language task;
S403, re-slicing the whole-brain T-value map of the language task according to the individual-space language-area mask, so that the voxel size of the T-value map equals the voxel size of the individual language-area mask;
S404, obtaining the three-dimensional matrix of the whole-brain T-value map of the language task and the three-dimensional matrix of the re-sliced individual language-area mask image of step S403, and multiplying the two element-wise to obtain the T-value matrix within the individual language-area mask;
the three-dimensional matrix of the whole-brain T-value map consists of the T values of all brain voxels, while the individual language-area mask matrix consists of 1s and 0s, with values of 1 inside the mask and 0 outside the mask;
S405, sorting the T values within the individual language-area mask matrix of step S404 in descending order and recording the three-dimensional coordinates of the voxel corresponding to each T value; finally, the voxel coordinate corresponding to the largest T value, i.e. the coordinate of the maximally activated point within the individual language-area mask, is selected as the TMS stimulation target.
In a second aspect, the invention provides a TMS individualized positioning system for implementing the above method, comprising:
a data acquisition and preprocessing module, configured to acquire the functional magnetic resonance image data generated while the individual performs the language task and to preprocess the data;
a language-area mask acquisition module, configured to inversely transform the standard-space language-area mask into the individual space to obtain the individual-space language-area mask, so as to locate the language functional area of the individual brain;
and a TMS positioning module, configured to perform activation analysis on the whole brain of the individual and take the maximally activated point within the individual language-area mask as the TMS stimulation target.
In a third aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-7.
In a fourth aspect, the present invention provides a computing device comprising a memory having executable code stored therein and a processor which, when executing the executable code, implements the method of any of claims 1-7.
The beneficial effects of the invention are as follows:
(1) The invention locates the language area from functional image data acquired while the individual performs a language task, which improves the accuracy of TMS positioning, effectively overcomes the limitations of existing methods, and solves the problem that previous positioning methods ignore the mismatch between brain structure and brain function.
(2) The invention maps the standard-space language-area mask into the individual space by inverse matrix transformation, providing a way to accurately locate the language functional area of the functional images in the individual space and solving the long-standing difficulty of accurately locating the language functional area in the individual space.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is an exemplary diagram of a magnetic resonance imaging picture naming task;
FIG. 3 is a structural image (bottom layer) and functional image (red overlay) alignment diagram;
FIG. 4 is a flow chart for creating a language field mask;
FIG. 5 is a graph of T values under linguistic tasks, where red is positive and blue is negative;
FIG. 6 is an exemplary graph of determining stimulation targets.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "comprising" and "having" and any variations thereof, as used in the embodiments of the present invention, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention provides a TMS individualized accurate positioning method based on functional magnetic resonance language task guidance, which, as shown in FIG. 1, comprises the following steps:
Step S1, performing functional magnetic resonance image scanning on an individual to obtain T1 structural image data of the individual and functional image data when the individual performs a picture naming task;
The picture naming task comprises 12 picture naming conditions and 12 control conditions, presented alternately. Under each picture naming condition, 5 pictures are presented in random order, each picture is presented for 4 seconds, and the subject is required to name the content of each picture as it appears. Under each control condition, a "+" fixation point is presented for 20 seconds, and the subject only needs to look at the fixation point without speaking (FIG. 2).
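As an illustration of this block design, the following sketch builds the condition-timing table that the later first-level analysis (step S4) would consume. It is a minimal example and not part of the invention; the assumption of a fixed block order, the output file name and the modelling of each 5-picture block as a single 20-second event are illustrative choices only.

```python
# Minimal sketch of the block design described above: 12 naming blocks of
# 5 pictures x 4 s (20 s) alternating with 12 control blocks of 20 s fixation.
# The block order and the output file name are assumptions for illustration.
import pandas as pd

block_len = 20.0              # both block types last 20 s

rows = []
onset = 0.0
for block in range(24):                         # 12 naming + 12 control, alternating
    condition = "naming" if block % 2 == 0 else "control"
    rows.append({"onset": onset, "duration": block_len, "trial_type": condition})
    onset += block_len

events = pd.DataFrame(rows)
events.to_csv("picture_naming_events.tsv", sep="\t", index=False)
print(events.head())
```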
S2, preprocessing the functional magnetic resonance image data to obtain preprocessed data; comprising the following steps:
S201, performing format conversion on the functional magnetic resonance image data of step S1, including the structural image data and the functional image data, converting the original DICOM files into the NIFTI format suitable for data analysis;
S202, performing time layer (slice-timing) correction and head motion correction on the format-converted functional image data;
The time layer (slice-timing) correction aligns the functional images of the other time layers (slices) to the functional image of the middle time layer, eliminating errors caused by the acquisition delay between slices;
the head motion correction aligns the image at each time point to the mean image, correcting displacements caused by head movement.
S203, aligning the functional images processed in step S202 to the structural image converted in step S201, so that activation locations in the functional images can be positioned on the structural image without changing the voxel size of the functional images.
S204, opening software capable of viewing nii images, loading the structural image as the bottom layer and the functional image as the overlay, and checking whether the brain in the functional image overlaps the brain in the structural image; if so (FIG. 3), proceeding to step S205; otherwise, the alignment of the functional and structural images is considered poor, origin correction is performed on the head-motion-corrected functional images of step S202 and on the format-converted structural image of step S201, and the origin-corrected functional images are then aligned to the structural image again.
The origin correction means performing a rigid-body transformation of the functional magnetic resonance image data along the up-down, left-right, front-back, roll, pitch and yaw directions. The roll, pitch and yaw parameters are adjusted first until the crosshair intersection lies on the anterior commissure in the sagittal plane, with the posterior commissure and the anterior commissure kept on the same straight line. The three values of the spatial coordinates (x, y, z) of the crosshair intersection are then negated and used as the up-down, left-right and front-back parameters, respectively.
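A minimal sketch of how such an origin correction could be applied programmatically is given below. It only illustrates the translational part of the rigid-body adjustment described above; the anterior commissure coordinates and file names are hypothetical, and in practice the adjustment is usually performed interactively in the viewing software.

```python
# Sketch: shift the image origin so that the (manually located) anterior
# commissure becomes (0, 0, 0). Only the translation part of the rigid-body
# correction is shown; ac_mm and the file names are hypothetical.
import numpy as np
import nibabel as nib

ac_mm = np.array([2.0, 24.0, 10.0])        # crosshair position of the AC, in mm (assumed)

for fname in ["T1.nii", "rfunc.nii"]:      # hypothetical structural and functional files
    img = nib.load(fname)
    new_affine = img.affine.copy()
    new_affine[:3, 3] -= ac_mm             # i.e. add the negated coordinates as translations
    out = nib.Nifti1Image(np.asanyarray(img.dataobj), new_affine, img.header)
    nib.save(out, fname.replace(".nii", "_origin.nii"))
```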
S205, performing spatial smoothing on the aligned functional image data using a smoothing kernel with a full width at half maximum of 6 mm;
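The preprocessing chain of steps S202-S205 could, for example, be scripted with the SPM interfaces of Nipype, as in the hedged sketch below. All file names and acquisition parameters (number of slices, TR, slice order, reference slice) are placeholders, and the invention does not prescribe a particular software package.

```python
# Sketch of steps S202-S205 using Nipype's SPM interfaces (assumes SPM/MATLAB
# are installed). File names and acquisition parameters are placeholders.
from nipype.interfaces import spm

# S202a: slice-timing ("time layer") correction to the middle slice
st = spm.SliceTiming(in_files="func.nii", num_slices=33,
                     time_repetition=2.0, time_acquisition=2.0 - 2.0 / 33,
                     slice_order=list(range(1, 34)), ref_slice=17)
st_res = st.run()

# S202b: head motion correction (realignment to the mean image)
realign = spm.Realign(in_files=st_res.outputs.timecorrected_files,
                      register_to_mean=True)
re_res = realign.run()

# S203: coregister the functional images to the structural image
# (estimate only, so the functional voxel size is not changed)
coreg = spm.Coregister(target="T1.nii",
                       source=re_res.outputs.mean_image,
                       apply_to_files=re_res.outputs.realigned_files,
                       jobtype="estimate")
co_res = coreg.run()

# S205: spatial smoothing with a 6 mm FWHM kernel
smooth = spm.Smooth(in_files=co_res.outputs.coregistered_files, fwhm=[6, 6, 6])
smooth.run()
```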
Step S3, creating a language-area mask from the standard brain template in the Montreal standard space and inversely transforming the standard-space language-area mask into the individual space to obtain the individual-space language-area mask, so as to locate the language functional area of the individual brain; comprising the following steps:
S301, selecting the language-related brain regions from the standard brain template to create a language-area mask, obtaining the language-area mask in standard space (FIG. 4 a);
S302, segmenting the preprocessed structural image of step S2 into white matter, gray matter, cerebrospinal fluid and other tissues (FIG. 4 b), and setting the deformation fields (Deformation fields) option to both inverse and forward (Inverse + Forward), so that the segmented white matter, gray matter, cerebrospinal fluid and other tissues are registered to the Montreal (MNI) standard space, thereby obtaining a forward transformation matrix describing the displacement of each voxel from the individual space to the standard space and an inverse transformation matrix describing the displacement of each voxel from the standard space to the individual space;
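Step S302 corresponds to the unified segmentation available in common neuroimaging packages. A hedged sketch using Nipype's SPM NewSegment interface is shown below; requesting both deformation fields yields the forward and inverse fields referred to above. The file name is a placeholder, and the output field names in the comments are those expected from this interface rather than part of the invention.

```python
# Sketch of step S302: unified segmentation of the structural image into
# gray matter, white matter and CSF, writing both the inverse and forward
# deformation fields (assumes SPM/MATLAB via Nipype; file name is a placeholder).
from nipype.interfaces import spm

seg = spm.NewSegment(channel_files="T1.nii",
                     write_deformation_fields=[True, True])  # [inverse, forward]
seg_res = seg.run()

# seg_res.outputs.forward_deformation_field -> y_T1.nii  (individual -> standard space)
# seg_res.outputs.inverse_deformation_field -> iy_T1.nii (standard space -> individual)
```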
The MNI standard space is a standard three-dimensional coordinate space for brain imaging research, and is generally used for standardizing brain imaging data of different individuals or experiments so as to enable comparison and statistical analysis.
S303, registering the standard-space language-area mask to the individual space according to the inverse transformation matrix of step S302, obtaining the individual-space language-area mask (FIG. 4 a);
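One way to carry out step S303 is sketched below with nibabel and SciPy. It assumes the SPM-style convention that the inverse deformation field (iy_*.nii) stores, for every voxel of the individual-space grid, the MNI millimetre coordinate it maps to; nearest-neighbour interpolation keeps the mask binary. File names are placeholders, and a packaged deformation utility could be used instead.

```python
# Sketch of step S303: pull the MNI-space language mask into individual space
# using an inverse deformation field. Assumed convention: field[..., :3] holds,
# for each voxel of the individual-space grid, the corresponding MNI coordinate
# in millimetres (as in SPM's iy_*.nii after squeezing the singleton axis).
import numpy as np
import nibabel as nib
from scipy.ndimage import map_coordinates

mask_mni = nib.load("language_mask_MNI.nii")     # standard-space mask (placeholder name)
field_img = nib.load("iy_T1.nii")                # inverse deformation field (placeholder name)

field = np.squeeze(field_img.get_fdata())        # shape (X, Y, Z, 3), MNI mm coordinates
mni_mm_to_vox = np.linalg.inv(mask_mni.affine)   # MNI mm -> voxel indices of the mask

vox_coords = nib.affines.apply_affine(mni_mm_to_vox, field)        # (X, Y, Z, 3)
mask_ind = map_coordinates(mask_mni.get_fdata(),
                           vox_coords.reshape(-1, 3).T,            # (3, N) as required
                           order=0, mode="constant", cval=0.0)
mask_ind = mask_ind.reshape(field.shape[:3]).astype(np.uint8)      # binary mask, 1 inside

nib.save(nib.Nifti1Image(mask_ind, field_img.affine), "language_mask_individual.nii")
```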
Step S4, performing activation analysis on the whole brain of the individual and taking the maximally activated point within the individual-space language-area mask as the TMS stimulation target; comprising the following steps:
S401, constructing a First-level model for the functional images preprocessed in step S2.
Constructing the First-level model comprises setting the experimental conditions, setting the onset time and duration of each experimental condition, and including the head motion parameter file as a regressor to exclude false positive results possibly caused by head motion.
The experimental conditions comprise the picture naming condition and the control condition.
S402, estimating the constructed First-level model and performing a T-test at the whole-brain level on the activation under the picture naming condition versus the control condition, obtaining a whole-brain T-value map for the language task (FIG. 5).
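Steps S401-S402 could, for instance, be implemented with Nilearn's first-level GLM, as in the sketch below. The TR, the file names and the events table (e.g. the one built in the earlier sketch) are assumptions; the head motion parameters are passed as confound regressors, and the naming-versus-control contrast yields the whole-brain T-value map.

```python
# Sketch of steps S401-S402: first-level GLM with a naming vs. control contrast.
# TR, file names and the events/motion files are placeholders for illustration.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.read_csv("picture_naming_events.tsv", sep="\t")
motion = pd.read_csv("rp_func.txt", delim_whitespace=True, header=None,
                     names=["tx", "ty", "tz", "rx", "ry", "rz"])  # 6 head motion regressors

model = FirstLevelModel(t_r=2.0, hrf_model="spm", noise_model="ar1")
model = model.fit("swrfunc.nii", events=events, confounds=motion)

# T-test of picture naming versus control at every voxel of the brain
t_map = model.compute_contrast("naming - control", stat_type="t", output_type="stat")
t_map.to_filename("language_task_tmap.nii")
```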
S403, re-slicing the whole-brain T-value map of the language task according to the individual-space language-area mask, so that the voxel size of the T-value map equals the voxel size of the individual language-area mask.
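The re-slicing of step S403 can be done, for example, with Nilearn's resampling utility; a short sketch follows (file names are placeholders). Continuous interpolation is used here because the T values are continuous, whereas the mask itself stays on its own grid.

```python
# Sketch of step S403: resample the T-value map onto the grid of the
# individual-space language-area mask (placeholder file names).
from nilearn.image import resample_to_img

t_resliced = resample_to_img("language_task_tmap.nii",
                             "language_mask_individual.nii",
                             interpolation="continuous")
t_resliced.to_filename("language_task_tmap_resliced.nii")
```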
S404, reading the three-dimensional matrix of the whole-brain T-value map of the language task and the three-dimensional matrix of the re-sliced individual language-area mask image of step S403, and multiplying the two element-wise to obtain the T-value matrix within the individual language-area mask.
The three-dimensional matrix of the whole-brain T-value map consists of the T values of all brain voxels, while the three-dimensional matrix of the individual language-area mask image consists of 1s and 0s, with 1 representing values inside the mask and 0 representing values outside the mask.
S405, sorting the T values within the individual language-area mask matrix of step S404 in descending order and recording the three-dimensional coordinates of the voxel corresponding to each T value (FIG. 6 a). Finally, the voxel coordinate corresponding to the largest T value, i.e. the coordinate of the maximally activated point within the individual language-area mask, is selected as the TMS stimulation target (FIG. 6 b).
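Steps S404-S405 reduce to an element-wise product followed by a ranking of T values; the sketch below also converts the winning voxel index into millimetre coordinates through the image affine, since navigation systems typically expect world coordinates. File names are placeholders.

```python
# Sketch of steps S404-S405: multiply the resliced T map by the binary mask,
# rank the in-mask T values and report the peak voxel in mm (placeholder names).
import numpy as np
import nibabel as nib

t_img = nib.load("language_task_tmap_resliced.nii")
mask_img = nib.load("language_mask_individual.nii")

t_in_mask = t_img.get_fdata() * (mask_img.get_fdata() > 0)   # zero outside the mask

# descending ranking of in-mask T values together with their voxel coordinates
order = np.argsort(t_in_mask, axis=None)[::-1]
ranked_vox = np.column_stack(np.unravel_index(order, t_in_mask.shape))

peak_vox = ranked_vox[0]                                      # voxel with the largest T value
peak_mm = nib.affines.apply_affine(t_img.affine, peak_vox)    # TMS target in mm

print("peak voxel index:", peak_vox, "-> stimulation target (mm):", peak_mm)
```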
Finally, it should be noted that the above examples are only specific embodiments of the present invention and are not intended to limit its scope of protection. Although the invention has been described in detail with reference to the foregoing examples, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A TMS individualized positioning method based on functional magnetic resonance language task guidance, characterized by comprising the following steps:
Step S1, acquiring functional magnetic resonance image data of an individual, wherein the functional magnetic resonance image data comprise structural image data of the individual and functional image data acquired while the individual performs a picture naming task;
Step S2, preprocessing the functional magnetic resonance image data to obtain preprocessed data;
Step S3, creating a language-area mask from the standard brain template in the Montreal standard space and inversely transforming the standard-space language-area mask into the individual space to obtain the individual-space language-area mask, so as to locate the language functional area of the individual brain; comprising the following steps:
S301, selecting the language-related brain regions from the standard brain template to create a language-area mask, obtaining the language-area mask in standard space;
S302, segmenting the preprocessed structural image of step S2 into white matter, gray matter and cerebrospinal fluid, and setting the deformation fields (Deformation fields) option to both inverse and forward (Inverse + Forward), so that the segmented white matter, gray matter and cerebrospinal fluid are registered to the Montreal standard space, thereby obtaining a forward transformation matrix describing the displacement of each voxel from the individual space to the standard space and an inverse transformation matrix describing the displacement of each voxel from the standard space to the individual space;
S303, registering the standard-space language-area mask to the individual space according to the inverse transformation matrix of step S302 to obtain the individual-space language-area mask;
Step S4, performing activation analysis on the whole brain of the individual and taking the maximally activated point within the individual-space language-area mask as the TMS stimulation target; specifically comprising the following steps:
S401, constructing a First-level model for the functional image data preprocessed in step S2;
constructing the First-level model comprises setting the experimental conditions, setting the onset time and duration of each experimental condition, and including the head motion parameter file as a regressor to exclude false positive results possibly caused by head motion;
the experimental conditions comprise the picture naming condition and the control condition;
S402, estimating the constructed First-level model and performing a T-test at the whole-brain level on the activation under the picture naming condition versus the control condition, obtaining a whole-brain T-value map for the language task;
S403, re-slicing the whole-brain T-value map of the language task according to the individual-space language-area mask, so that the voxel size of the T-value map equals the voxel size of the individual language-area mask;
S404, obtaining the three-dimensional matrix of the whole-brain T-value map of step S402 and the three-dimensional matrix of the re-sliced individual language-area mask image of step S403, and multiplying the two element-wise to obtain the T-value matrix within the individual language-area mask;
S405, sorting the T values within the individual language-area mask matrix of step S404 in descending order and recording the three-dimensional coordinates of the voxel corresponding to each T value; finally, selecting the voxel coordinate corresponding to the largest T value, i.e. the coordinate of the maximally activated point within the individual language-area mask, as the TMS stimulation target.
2. The method according to claim 1, wherein in step S1 the picture naming task comprises n picture naming conditions and n control conditions, n > 1, with the picture naming conditions and the control conditions presented alternately; under the picture naming condition, m pictures (m > 1) are presented in random order each time, each picture is presented for 4 seconds, and the subject is required to name the content of each picture as it appears.
3. The method according to claim 2, wherein in step S1 the control condition comprises presenting a "+" fixation point for 20 seconds each time, the subject only needing to look at the fixation point without speaking.
4. The method according to claim 1, wherein in step S2 the preprocessing specifically comprises:
S201, performing format conversion on the functional magnetic resonance image data of step S1, converting the original DICOM files into the NIFTI format suitable for data analysis;
S202, performing time layer (slice-timing) correction and head motion correction on the format-converted functional image data;
S203, aligning the functional images processed in step S202 to the structural image data converted in step S201, so that activation locations in the functional images can be positioned on the structural image without changing the voxel size of the functional images;
S204, opening software capable of viewing nii images, loading the structural image as the bottom layer and the functional image as the overlay, and checking whether the brain in the functional image overlaps the brain in the structural image; if so, proceeding to step S205; otherwise, the alignment of the functional and structural images is considered poor, origin correction is performed on the head-motion-corrected functional images of step S202 and on the structural image of step S201, and the origin-corrected functional images are then aligned to the structural image again;
S205, performing spatial smoothing on the aligned functional image data.
5. The method of claim 4, wherein in step S204 the origin correction means performing, as needed, a rigid-body transformation of the functional magnetic resonance image data along six directions: up-down, left-right, front-back, roll, pitch and yaw.
6. The method according to claim 4, wherein in step S205 the smoothing kernel used for the spatial smoothing has a full width at half maximum of 6 mm.
7. A TMS individualized positioning system for implementing the method of any one of claims 1-6, comprising:
a data acquisition and preprocessing module, configured to acquire the functional magnetic resonance image data generated while the individual performs the language task and to preprocess the data;
a language-area mask acquisition module, configured to inversely transform the standard-space language-area mask into the individual space to obtain the individual-space language-area mask, so as to locate the language functional area of the individual brain;
and a TMS positioning module, configured to perform activation analysis on the whole brain of the individual and take the maximally activated point within the individual language-area mask as the TMS stimulation target.
8. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-6.
9. A computing device comprising a memory having executable code stored therein and a processor, which when executing the executable code, implements the method of any of claims 1-6.


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113367679A (en) * 2021-07-05 2021-09-10 北京银河方圆科技有限公司 Target point determination method, device, equipment and storage medium
KR20220155149A (en) * 2021-05-14 2022-11-22 부산대학교 산학협력단 Apparatus and method for transcranial magnetic field stimulus for focal areas

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106345062B (en) * 2016-09-20 2018-01-16 华东师范大学 A kind of cerebral magnetic stimulation coil localization method based on magnetic resonance imaging
CN109480841A (en) * 2017-09-13 2019-03-19 复旦大学 Abnormal brain area precise positioning and antidote based on functional mri
CN109589089A (en) * 2017-09-30 2019-04-09 复旦大学 Analysis method and application thereof based on multi-modal image data


Also Published As

  • CN117959605A — published 2024-05-03


Legal Events

  • PB01: Publication
  • SE01: Entry into force of request for substantive examination
  • GR01: Patent grant