CN114041773A - Apoplexy position classification method based on electrical impedance tomography measurement framework

Apoplexy position classification method based on electrical impedance tomography measurement framework

Info

Publication number
CN114041773A
Authority
CN
China
Prior art keywords
data
layer
convolution
block
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111284030.6A
Other languages
Chinese (zh)
Inventor
Shi Yanyan (施艳艳)
Tian Zhiwei (田志威)
Wang Meng (王萌)
Liu Zhenkun (刘镇琨)
Yang Ke (杨坷)
Li Yating (李亚婷)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Normal University
Original Assignee
Henan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Normal University
Priority to CN202111284030.6A
Publication of CN114041773A
Pending legal-status Critical Current

Links

Images

Classifications

    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
            • A61B 5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
              • A61B 5/053 Measuring electrical impedance or conductance of a portion of the body
                • A61B 5/0536 Impedance imaging, e.g. by tomography
            • A61B 5/0033 Features or image-related aspects of imaging apparatus classified in A61B 5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
              • A61B 5/004 Imaging apparatus adapted for image acquisition of a particular organ or body part
                • A61B 5/0042 Imaging apparatus adapted for image acquisition of the brain
            • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
              • A61B 5/7203 Signal processing for noise prevention, reduction or removal
              • A61B 5/7235 Details of waveform analysis
                • A61B 5/725 Waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
                • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
                  • A61B 5/7267 Classification involving training the classification device
          • A61B 2576/00 Medical imaging apparatus involving image processing or analysis
            • A61B 2576/02 Medical imaging apparatus specially adapted for a particular organ or body part
              • A61B 2576/026 Medical imaging apparatus specially adapted for the brain
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 Pattern recognition
            • G06F 18/20 Analysing
              • G06F 18/24 Classification techniques
                • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                  • G06F 18/2415 Classification based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
          • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
            • G06F 2218/12 Classification; Matching
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/045 Combinations of networks
                • G06N 3/047 Probabilistic or stochastic networks
                • G06N 3/048 Activation functions
              • G06N 3/08 Learning methods

Abstract

The invention discloses a stroke position classification method based on an electrical impedance tomography (EIT) measurement framework. A large number of patients with known lesions or health status are designated as the training-data patient group; safe current excitation in the opposite mode and voltage detection in the adjacent mode are applied, and the measured data are expanded. A jumper (skip-connection) neural network consisting of jumper block I, jumper block II, jumper block I, a global average pooling layer, and a fully connected layer is built; after the training parameters are set, the normalized data of the training-data patient group are fed into the network for training. Data from a patient to be predicted are then collected, a prediction neural network is built, and the normalized patient data are input into it to obtain the prediction result. The invention effectively improves prediction accuracy, helping physicians quickly judge the patient's current condition and improving the likelihood of a complete cure.

Description

Apoplexy position classification method based on electrical impedance tomography measurement framework
Technical Field
The invention belongs to the application of electrical tomography to brain imaging and detection, and particularly relates to a stroke position classification method based on an electrical impedance tomography measurement framework.
Background
Stroke is the second leading cause of death worldwide. Stroke is divided into two types: hemorrhagic and ischemic. Timely treatment greatly improves a patient's prospects, and accurately judging the type of stroke is crucial. For example, thrombolytic "clot-busting" drugs are a treatment for acute ischemic stroke; however, they must be administered within four and a half hours after symptom onset, and when given for hemorrhagic stroke they not only fail to help but worsen the patient's condition and may even be life-threatening. In clinical practice, computed tomography (CT) and magnetic resonance imaging (MRI) are the standard imaging methods. However, CT exposes the patient to ionizing radiation, MRI is very expensive, and both methods require a long acquisition and processing time.
Electrical impedance tomography (EIT) is an emerging visualization technique that has been successfully applied to monitoring industrial processes. It is also widely studied in medical imaging because it is safe, inexpensive, portable, and offers high temporal resolution. In brain EIT, a safe current is injected through electrodes attached to the scalp, the voltages on the remaining electrodes are measured, and the conductivity within the brain region is recovered from these voltages. Recent studies have proposed various classification methods based directly on EIT measurement data. Unlike image-based methods, this approach avoids solving the inverse problem required for image reconstruction, so the results are more reliable. For example: McDermott et al., "Brain haemorrhage detection using a SVM classifier with electrical impedance tomography measurement frames," PLOS ONE, vol. 13, no. 7, e0200469, 2018. These studies demonstrate the potential of classification based on EIT measurement data. It should be noted that the skull is a highly resistive medium and strongly affects EIT detection; however, the SVM-based bleeding detection above did not take the skull into account. The measured EIT voltages contain substantial information about the spatial distribution as well as the conductivity of the lesion. Therefore, a method is needed that quickly classifies both the type and the spatial location of a stroke, helping physicians better judge the patient's current condition so that the patient receives effective treatment in time.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a stroke position classification method based on an electrical impedance tomography measurement framework, which uses a neural network with jumper connections to predict and classify the stroke position from processed EIT measurement voltage data.
To solve the above technical problem, the invention adopts the following technical scheme: a stroke position classification method based on an electrical impedance tomography measurement framework, comprising the following specific steps:
Step one: divide the brain region. Construct a plane perpendicular to the line connecting the center of the forehead and the center of the occiput, and translate it from the forehead center toward the occiput center, stopping when it reaches the temples on both sides; the brain region swept by the plane is Λ1. Continue translating the plane toward the occiput center, stopping when it reaches the area behind both ears; the region swept in this stage is Λ2. The remaining unswept brain region is Λ3.
Step two: classify the patient's condition by type. The final classes are: healthy, Λ1 hemorrhage, Λ1 ischemia, Λ2 hemorrhage, Λ2 ischemia, Λ3 hemorrhage, and Λ3 ischemia, 7 cases in total.
Step three: designate a large number of patients with known lesions or health status as the training-data patient group. With electrode No. 1 of a 16-electrode EIT acquisition system at the center of the forehead, place the 16 electrodes counterclockwise at equal intervals on the same horizontal plane of the head of the first patient in the group.
Step four: apply safe current excitation in the opposite mode and voltage detection in the adjacent mode. First use electrodes No. 1 and No. 9 as the excitation pair and, in turn, electrodes No. 2-3, 3-4, 4-5, 5-6, 6-7, 7-8, 10-11, 11-12, 12-13, 13-14, 14-15, and 15-16 as detection pairs, obtaining 12 boundary measurement voltages stored as group 1. Then take electrodes No. 2 and No. 10 as the excitation pair with electrodes No. 3-4 as the first detection pair and acquire data in the same manner, and so on, until the last group of 12 boundary voltages is measured with electrodes No. 16 and No. 8 as the excitation pair. In total, 16 groups of boundary voltage measurement data are obtained, each containing 12 boundary measurement voltages.
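For illustration only, the excitation-measurement schedule of step four can be written out programmatically. The following Python sketch (the helper name and 0-based loop are illustrative, not part of the invention) generates the 16 frames, each with its opposite excitation pair and 12 adjacent detection pairs:

```python
def measurement_pattern(n=16):
    """Return, for each of the 16 frames, the opposite-mode excitation pair
    and the 12 adjacent detection pairs that do not touch an excitation
    electrode. Electrodes are numbered 1..16 counterclockwise."""
    frames = []
    for k in range(n):
        exc_a = k + 1                      # excitation electrode: 1, 2, ..., 16
        exc_b = (k + n // 2) % n + 1       # opposite electrode:   9, 10, ..., 8
        pairs = [(i + 1, (i + 1) % n + 1) for i in range(n)
                 if not {i + 1, (i + 1) % n + 1} & {exc_a, exc_b}]
        frames.append(((exc_a, exc_b), pairs))
    return frames

frames = measurement_pattern()
# frames[0] == ((1, 9), [(2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8),
#                        (10, 11), (11, 12), (12, 13), (13, 14), (14, 15), (15, 16)])
```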
Step five: data expansion. To improve the classification performance of the neural network and feed more information into it, the original 12 × 16 EIT voltage data are transformed. Take the first group, acquired with electrodes No. 1 and No. 9 as the excitation pair: linearly interpolate between the No. 15-16 and No. 2-3 voltages to obtain extended voltages No. 16-1 and No. 1-2, and between the No. 7-8 and No. 10-11 voltages to obtain extended voltages No. 8-9 and No. 9-10. The first group is thereby expanded to the 16 voltages No. 1-2, 2-3, 3-4, 4-5, 5-6, 6-7, 7-8, 8-9, 9-10, 10-11, 11-12, 12-13, 13-14, 14-15, 15-16, and 16-1. Then apply the analogous operation to the second group, acquired with electrodes No. 2 and No. 10 as the excitation pair, and so on, finally converting the EIT voltage data to size 16 × 16.
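The expansion of step five fills, for each excitation frame, the four adjacent pairs that touch an excitation electrode. A minimal Python sketch follows; the 1/3 and 2/3 interpolation spacing is an assumption, since the text specifies only linear interpolation between the neighboring measured voltages:

```python
import numpy as np

def expand_frame(measured, exc, n=16):
    """measured: {pair_index: voltage}, 12 entries, 0-based, where pair i
    joins electrodes i+1 and i+2 (modulo 16); exc: the two excitation
    electrodes (1-based). Returns all 16 pair voltages for the frame."""
    v = np.full(n, np.nan)
    for i, val in measured.items():
        v[i] = val
    for e in exc:
        g0, g1 = (e - 2) % n, (e - 1) % n            # the two pairs touching electrode e
        left, right = v[(g0 - 1) % n], v[(g1 + 1) % n]
        v[g0] = left + (right - left) / 3.0          # assumed 1/3, 2/3 spacing
        v[g1] = left + 2.0 * (right - left) / 3.0
    return v

# First frame (excitation 1-9): pairs 2-3 ... 7-8 and 10-11 ... 15-16 measured.
measured = {i: float(i) for i in list(range(1, 7)) + list(range(9, 15))}
row = expand_frame(measured, (1, 9))   # fills pairs 16-1, 1-2, 8-9 and 9-10
# Stacking the 16 expanded rows gives the 16 x 16 matrix fed to the network.
```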
Step six: judge whether data have been acquired for all patients in the training-data patient group; if so, proceed to step seven, otherwise return to step three to acquire data from the next patient.
Step seven: build the training neural network. The network consists, in order, of jumper block I, jumper block II, jumper block I, a global average pooling layer, and a fully connected layer. The structure of jumper block I is as follows:
(1) Convolution block one, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence. The convolutional filter size is 3 × 3 with stride 1, and the activation layer uses the ReLU function. The output of convolution block one is expressed as:
$$y_{m,n} = f\Big(\sum_{x}\sum_{y} W_{x,y}\, a_{x+m,\,y+n} + b\Big)$$

wherein

$$f(t) = \max(0, t)$$
where y is the output of convolution block one, a is its input, W is its weight matrix, b is the bias, x and y are the row and column indices of the input matrix, m and n are the row and column indices of the output matrix, f(t) is the ReLU function, and t is the input variable of the activation layer.
(2) Convolution block two, composed of a convolutional layer and a batch normalization layer connected in sequence. The filter size is 3 × 3 with stride 1.
(3) Convolution block three, composed of a convolutional layer with filter size 1 × 1 and stride 1; its input is the input a of convolution block one.
(4) A ReLU activation layer, which adds the output of convolution block two of jumper block I to the output of convolution block three and passes the sum through the ReLU function for activation.
The structure of jumper block II is as follows:
(1) Convolution block four, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence. The filter size is 3 × 3 with stride 2, and the activation layer uses the ReLU function.
(2) Convolution block two.
(3) Convolution block five, composed of a convolutional layer with filter size 1 × 1 and stride 2; its input is the input of convolution block four.
(4) A ReLU activation layer, which adds the output of convolution block two of jumper block II to the output of convolution block five and passes the sum through the ReLU function for activation.
The global average pooling layer compresses the output of the last jumper block I, converts it into a column vector, and feeds it into the fully connected layer. The fully connected layer contains 7 output neurons corresponding to the classes of step two, and its activation function is SoftMax, so the output of this layer is written as:
$$Y_i = \frac{e^{y_i}}{\sum_{j=1}^{7} e^{y_j}}$$
where Y is the output of the layer and y is the input of the activation function.
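For illustration, the jumper blocks and the overall network of step seven could be sketched as follows in PyTorch. The channel widths (16 and 32) are assumptions; the text fixes only the kernel sizes, strides, block ordering, and the 7-neuron output:

```python
import torch
import torch.nn as nn

class JumperBlock(nn.Module):
    """Jumper block I (stride=1) or II (stride=2): a 3x3 conv-BN-ReLU stage,
    a 3x3 conv-BN stage, a 1x1 conv shortcut, element-wise addition, ReLU."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),  # conv block one/four
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),      # conv block two
            nn.BatchNorm2d(out_ch))
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1, stride=stride)  # conv block three/five
        self.relu = nn.ReLU()

    def forward(self, a):
        return self.relu(self.main(a) + self.shortcut(a))           # add, then ReLU

class JumperNet(nn.Module):
    """Jumper block I -> jumper block II -> jumper block I -> GAP -> FC(7)."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            JumperBlock(1, 16, stride=1),    # jumper block I
            JumperBlock(16, 32, stride=2),   # jumper block II (16x16 -> 8x8)
            JumperBlock(32, 32, stride=1))   # jumper block I
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.fc = nn.Linear(32, n_classes)   # 7 neurons, one per class

    def forward(self, x):                    # x: (batch, 1, 16, 16)
        h = self.pool(self.features(x)).flatten(1)
        return self.fc(h)                    # raw scores; SoftMax applied outside
```

SoftMax is not applied inside the module because the cross-entropy loss used for training operates on the raw outputs; at prediction time it is applied explicitly.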
Step eight: set training parameters. Randomly assign the weights and bias parameters of each layer between 0 and 1, and set the initial learning rate to $10^{-4}$, decaying slowly to a minimum of $10^{-6}$ as the number of training rounds increases. Since the output has 7 classes, the mathematical form of the loss function is:
$$L = -\sum_{i=1}^{7} p_i \ln q_i$$
where p is the label value, q is the network output value, and i is the index of the output neuron. The optimizer uses a hybrid momentum weighted estimation method, i.e.
$$W \leftarrow W - lr \cdot \frac{SGD}{\sqrt{RMS} + o}$$

wherein

$$SGD \leftarrow \lambda_1 \cdot SGD + (1 - \lambda_1)\, g$$

$$RMS \leftarrow \lambda_2 \cdot RMS + (1 - \lambda_2)\, g^2$$
where lr is the learning rate, SGD is the intermediate variable of the general (first-order) momentum, RMS is the intermediate variable of the squared momentum, g is the gradient of the loss with respect to the weights, λ1 and λ2 are momentum hyperparameters, and o is a small constant $10^{-8}$ that prevents division by zero.
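In plain Python, the loss of step eight and one step of the update rule above can be sketched as follows; g denotes the gradient of the loss with respect to the weights, and the values of λ1 and λ2 are assumptions, since the text does not fix them:

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """p: one-hot label over the 7 classes; q: SoftMax output of the network."""
    return -np.sum(p * np.log(q + eps))

def hybrid_momentum_step(w, g, sgd, rms, lr=1e-4, lam1=0.9, lam2=0.999, o=1e-8):
    """One weight update; lam1, lam2 are the momentum hyperparameters."""
    sgd = lam1 * sgd + (1 - lam1) * g          # general (first-order) momentum
    rms = lam2 * rms + (1 - lam2) * g ** 2     # squared (second-order) momentum
    w = w - lr * sgd / (np.sqrt(rms) + o)      # update with the small constant o
    return w, sgd, rms
```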
Step nine: and (3) neural network training, namely correspondingly classifying the patient data according to images such as CT (computed tomography), MRI (magnetic resonance imaging) and the like of a training data patient group, setting labels, normalizing all acquired data and classification results and sending the normalized data and classification results into a neural network with jumper connection. And obtaining initial output through a neural network, calculating errors of the initial output through the loss function set in the step eight, and updating the weight parameters of each layer through an optimizer in a back propagation mode. Then the next training is performed. And completing one training round until all data are trained. And then inputting the data from the first group into the neural network for training, performing 100 rounds to finish the training of all the neural networks, and storing the weight and the bias data at the moment.
Step ten: and building a prediction neural network, wherein the structure of the prediction neural network is composed of a jumper block I, a jumper block II, a jumper block I, a global average pooling layer and a full-connection layer in sequence. The structure of the jumper block I is as follows:
(1) and the first convolution block consists of a convolution layer, a batch normalization layer and an activation layer which are sequentially connected. Wherein the size of the convolutional layer filter is 3 x 3, and the step size is 1. The activation layer is activated by a ReLU function.
(2) And the second convolution block consists of a convolution layer and a batch normalization layer which are connected in sequence. Wherein the size of the convolutional layer filter is 3 x 3, and the step size is 1.
(3) And a convolution block III which is composed of a convolution layer with the filter size of 1 x 1 and the step size of 1, and the input of the convolution block III is the input a of the convolution block I.
(4) And the ReLU activation layer adds the output of the convolution block two of the jumper block one and the output of the convolution block three and sends the added result to a ReLU function for activation.
The structure of the jumper block II is as follows:
(1) and the convolution block IV consists of a convolution layer, a batch normalization layer and an activation layer which are sequentially connected. Wherein the convolutional layer filter has a size of 3 x 3 and a step size of 2. The activation layer is activated by a ReLU function.
(2) And (5) rolling a second block.
(3) And a convolution block five, which is composed of a convolution layer with the filter size of 1 x 1 and the step length of 2, and the input of the convolution block five is the input of the convolution block four.
(4) And the ReLU activation layer adds the output of the convolution block four and the output of the convolution block five of the jumper block two and sends the added result to a ReLU function for activation.
And the global average pooling layer compresses and extracts the output of the last jumper block I, converts the output into a column vector and sends the column vector into the full-connection layer. And activating the outputs of the seven neurons of the full connection layer through a SoftMax function, wherein the seven outputs obtained finally are the probability of the current data corresponding to the seven conditions.
Step eleven: and (3) forecasting data acquisition, namely, arranging 16 electrodes of an EIT acquisition system with 16 electrodes on the same horizontal plane of the head of a patient to be detected (a patient in a non-training data patient group) with the forehead center as a No. 1 electrode at equal intervals anticlockwise. And carrying out safe current excitation of the opposite mode and voltage detection of the adjacent mode in the same manner as described in the fourth step. And performing data expansion in the same manner as described in the step five. And finally obtaining the data to be predicted.
Step twelve: and D, data prediction, namely setting the data stored in the step nine as the weight and the bias of the prediction neural network. And (3) normalizing the data to be predicted, and then sending the data to be predicted into a prediction neural network for prediction, so as to obtain the probability corresponding to 7 conditions, and selecting the condition with the maximum probability as a final prediction result.
Compared with the prior art, the invention has the following beneficial effects: measurement voltage data are obtained from a large number of patients using the electrical impedance tomography measurement framework and processed; the processed data are fed into the jumper-connected neural network for training; after training, the voltage data of the patient to be predicted are processed and input into the trained prediction network, yielding a prediction of the current lesion condition. The method can quickly diagnose the location and type of the current patient's lesion, greatly improving the patient's chance of a complete cure.
Drawings
FIG. 1 is a block diagram of a neural network training process of a stroke location classification method based on an electrical impedance tomography measurement framework according to the present invention.
FIG. 2 is a block diagram of a neural network prediction flow of a stroke location classification method based on an electrical impedance tomography measurement framework provided by the invention.
FIG. 3 shows the division of regions Λ1, Λ2, and Λ3, the electrode arrangement, and the measurement and excitation modes in an embodiment of the invention.
FIG. 4 is a diagram of the neural network structure constructed in the embodiment of the invention.
FIG. 5 shows the prediction results when noise with a signal-to-noise ratio of 40 dB is added to the data to be predicted.
FIG. 6 shows the prediction results when noise with a signal-to-noise ratio of 20 dB is added to the data to be predicted.
FIG. 7 shows the prediction results when noise with a signal-to-noise ratio of 10 dB is added to the data to be predicted.
In the figures: A is an electrode, B a measured voltage, C an equipotential line, D ground, E the excitation current, F the forehead direction, G the occiput direction, H region Λ1, I region Λ2, and J region Λ3.
Detailed Description
The stroke position classification method based on an electrical impedance tomography measurement framework is described in detail below with reference to the accompanying drawings and an embodiment.
A stroke position classification method based on an electrical impedance tomography measurement framework uses a neural network with jumper connections to predict and classify the stroke position from processed EIT measurement voltage data. The implementation flow is shown in FIG. 1 and FIG. 2; the specific implementation steps are as follows:
Step one: divide the brain region. Construct a plane perpendicular to the line connecting the center of the forehead and the center of the occiput, and translate it from the forehead center toward the occiput center, stopping when it reaches the temples on both sides; the brain region swept by the plane is Λ1. Continue translating the plane toward the occiput center, stopping when it reaches the area behind both ears; the region swept in this stage is Λ2. The remaining unswept brain region is Λ3.
Step two: classify the patient's condition by type. The final classes are: healthy, Λ1 hemorrhage, Λ1 ischemia, Λ2 hemorrhage, Λ2 ischemia, Λ3 hemorrhage, and Λ3 ischemia, 7 cases in total.
Step three: designate a large number of patients with known lesions or health status as the training-data patient group. With electrode No. 1 of a 16-electrode EIT acquisition system at the center of the forehead, place the 16 electrodes counterclockwise at equal intervals on the same horizontal plane of the head of the first patient in the group.
Step four: apply safe current excitation in the opposite mode and voltage detection in the adjacent mode. First use electrodes No. 1 and No. 9 as the excitation pair and, in turn, electrodes No. 2-3, 3-4, 4-5, 5-6, 6-7, 7-8, 10-11, 11-12, 12-13, 13-14, 14-15, and 15-16 as detection pairs, obtaining 12 boundary measurement voltages stored as group 1. Then take electrodes No. 2 and No. 10 as the excitation pair with electrodes No. 3-4 as the first detection pair and acquire data in the same manner, and so on, until the last group of 12 boundary voltages is measured with electrodes No. 16 and No. 8 as the excitation pair. In total, 16 groups of boundary voltage measurement data are obtained, each containing 12 boundary measurement voltages. FIG. 3 illustrates the brain partition, the electrode placement, and one excitation-measurement pattern in an embodiment of the invention.
Step five: data expansion. To improve the classification performance of the neural network and feed more information into it, the original 12 × 16 EIT voltage data are transformed. Take the first group, acquired with electrodes No. 1 and No. 9 as the excitation pair: linearly interpolate between the No. 15-16 and No. 2-3 voltages to obtain extended voltages No. 16-1 and No. 1-2, and between the No. 7-8 and No. 10-11 voltages to obtain extended voltages No. 8-9 and No. 9-10. The first group is thereby expanded to the 16 voltages No. 1-2, 2-3, 3-4, 4-5, 5-6, 6-7, 7-8, 8-9, 9-10, 10-11, 11-12, 12-13, 13-14, 14-15, 15-16, and 16-1. Then apply the analogous operation to the second group, acquired with electrodes No. 2 and No. 10 as the excitation pair, and so on, finally converting the EIT voltage data to size 16 × 16.
Step six: judge whether data have been acquired for all patients in the training-data patient group; if so, proceed to step seven, otherwise return to step three to acquire data from the next patient.
Step seven: build the training neural network. The network structure, shown in FIG. 4, consists, in order, of jumper block I, jumper block II, jumper block I, a global average pooling layer, and a fully connected layer. The structure of jumper block I is as follows:
(1) Convolution block one, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence. The convolutional filter size is 3 × 3 with stride 1, and the activation layer uses the ReLU function. The output of convolution block one can be expressed as:
$$y_{m,n} = f\Big(\sum_{x}\sum_{y} W_{x,y}\, a_{x+m,\,y+n} + b\Big)$$

wherein

$$f(t) = \max(0, t)$$
where y is the output of convolution block one, a is its input, W is its weight matrix, b is the bias, x and y are the row and column indices of the input matrix, m and n are the row and column indices of the output matrix, f(t) is the ReLU function, and t is the input variable of the activation layer.
(2) Convolution block two, composed of a convolutional layer and a batch normalization layer connected in sequence. The filter size is 3 × 3 with stride 1.
(3) Convolution block three, composed of a convolutional layer with filter size 1 × 1 and stride 1; its input is the input a of convolution block one.
(4) A ReLU activation layer, which adds the output of convolution block two of jumper block I to the output of convolution block three and passes the sum through the ReLU function for activation.
The structure of jumper block II is as follows:
(1) Convolution block four, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence. The filter size is 3 × 3 with stride 2, and the activation layer uses the ReLU function.
(2) Convolution block two.
(3) Convolution block five, composed of a convolutional layer with filter size 1 × 1 and stride 2; its input is the input of convolution block four.
(4) A ReLU activation layer, which adds the output of convolution block two of jumper block II to the output of convolution block five and passes the sum through the ReLU function for activation.
The global average pooling layer compresses the output of the last jumper block I, converts it into a column vector, and feeds it into the fully connected layer. The fully connected layer contains 7 output neurons corresponding to the classes of step two, and its activation function is SoftMax, so the output of this layer can be written as:
$$Y_i = \frac{e^{y_i}}{\sum_{j=1}^{7} e^{y_j}}$$
where Y is the output of the layer and y is the input to the activation function.
Step eight: set training parameters. Randomly assign the weights and bias parameters of each layer between 0 and 1, and set the initial learning rate to $10^{-4}$, decaying slowly to a minimum of $10^{-6}$ as the number of training rounds increases. Since the output has 7 classes, the mathematical form of the loss function is:
$$L = -\sum_{i=1}^{7} p_i \ln q_i$$
where p is the label value, q is the network output value, and i is the index of the output neuron. The optimizer uses a hybrid momentum weighted estimation method, i.e.
$$W \leftarrow W - lr \cdot \frac{SGD}{\sqrt{RMS} + o}$$

wherein

$$SGD \leftarrow \lambda_1 \cdot SGD + (1 - \lambda_1)\, g$$

$$RMS \leftarrow \lambda_2 \cdot RMS + (1 - \lambda_2)\, g^2$$
where lr is the learning rate, SGD is the intermediate variable of the general (first-order) momentum, RMS is the intermediate variable of the squared momentum, g is the gradient of the loss with respect to the weights, λ1 and λ2 are momentum hyperparameters, and o is a small constant $10^{-8}$ that prevents division by zero.
Step nine: neural network training. Classify the patient data according to the CT, MRI, and similar images of the training-data patient group and set the labels; normalize all acquired data and, together with the classification labels, feed them into the jumper-connected neural network. The network produces an initial output, whose error is computed with the loss function set in step eight; the optimizer then updates the weight parameters of each layer by back-propagation, and the next training step follows immediately. One round is completed when all data have been trained. Training then restarts from the first group of data; after 100 rounds in total the network training is complete, and the weights and biases at that moment are stored.
Step ten: build the prediction neural network. The network structure, shown in FIG. 4, consists, in order, of jumper block I, jumper block II, jumper block I, a global average pooling layer, and a fully connected layer. The structure of jumper block I is as follows:
(1) Convolution block one, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence. The filter size is 3 × 3 with stride 1, and the activation layer uses the ReLU function.
(2) Convolution block two, composed of a convolutional layer and a batch normalization layer connected in sequence. The filter size is 3 × 3 with stride 1.
(3) Convolution block three, composed of a convolutional layer with filter size 1 × 1 and stride 1; its input is the input a of convolution block one.
(4) A ReLU activation layer, which adds the output of convolution block two of jumper block I to the output of convolution block three and passes the sum through the ReLU function for activation.
The structure of jumper block II is as follows:
(1) Convolution block four, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence. The filter size is 3 × 3 with stride 2, and the activation layer uses the ReLU function.
(2) Convolution block two.
(3) Convolution block five, composed of a convolutional layer with filter size 1 × 1 and stride 2; its input is the input of convolution block four.
(4) A ReLU activation layer, which adds the output of convolution block two of jumper block II to the output of convolution block five and passes the sum through the ReLU function for activation.
The global average pooling layer compresses the output of the last jumper block I, converts it into a column vector, and feeds it into the fully connected layer. The outputs of the seven neurons of the fully connected layer are activated by the SoftMax function; the seven resulting outputs are the probabilities that the current data correspond to the seven cases.
Step eleven: prediction data acquisition. With electrode No. 1 of the 16-electrode EIT acquisition system at the center of the forehead, place the 16 electrodes counterclockwise at equal intervals on the same horizontal plane of the head of the patient to be examined (a patient not in the training-data patient group). Perform safe current excitation in the opposite mode and voltage detection in the adjacent mode as described in step four, and expand the data as described in step five, finally obtaining the data to be predicted.
Step twelve: data prediction. Set the weights and biases of the prediction neural network to the data stored in step nine. Normalize the data to be predicted and feed them into the prediction network to obtain the probabilities of the 7 cases; select the case with the highest probability as the final prediction result.
FIGS. 5-7 show the prediction results of different neural networks when the signal-to-noise ratio of the data to be predicted is 40 dB, 20 dB, and 10 dB. This example was analytically modeled using COMSOL Multiphysics 5.4 combined with MATLAB R2016b, with operating parameters set according to real human tissue: brain parenchyma conductivity 0.15 S/m, skull conductivity 0.013 S/m, and scalp conductivity 0.44 S/m. The hemorrhage conductivity was set to 0.8 S/m and the ischemia conductivity to 0.8 S/m. For training, 2500 groups of hemorrhage data at different positions, 2500 groups of ischemia data at different positions, and 1000 groups of healthy data were fed into the neural network; the data to be predicted comprise 500 groups of hemorrhage at different positions, 500 groups of ischemia at different positions, and 500 groups of healthy data. In the figures, sensitivity is the proportion of disease data (hemorrhage and ischemia, excluding healthy data) correctly identified (both location and type correct); specificity is the proportion for which hemorrhage or ischemia is correctly recognized (type only); deviation is the proportion of healthy data correctly identified; and accuracy is the proportion of all data to be tested that are correctly identified (both location and type correct). The network construction, training, and prediction were all performed in Python on the same computer. FIGS. 5-7 show that the prediction results of all networks degrade as noise increases, but the prediction accuracy of the invention remains the highest, demonstrating better noise robustness and suitability for different measurement environments.
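The four figures of merit used in FIGS. 5-7 can be computed as in the following sketch; the class ordering (0 = healthy, odd labels = hemorrhage, even labels = ischemia, as listed in step two) is assumed for illustration:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Labels 0..6: 0 = healthy; 1,3,5 = Λ1/Λ2/Λ3 hemorrhage; 2,4,6 = ischemia."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    disease = y_true != 0
    # collapse the region information, keeping only the hemorrhage/ischemia type
    t_true = np.where(y_true == 0, 0, (y_true - 1) % 2 + 1)
    t_pred = np.where(y_pred == 0, 0, (y_pred - 1) % 2 + 1)
    sensitivity = float(np.mean(y_pred[disease] == y_true[disease]))  # location and type
    specificity = float(np.mean(t_pred[disease] == t_true[disease]))  # type only
    deviation = float(np.mean(y_pred[~disease] == 0))                 # healthy recognized
    accuracy = float(np.mean(y_pred == y_true))                       # all data correct
    return sensitivity, specificity, deviation, accuracy
```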
The above description presents only exemplary embodiments of the invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (1)

1. A stroke position classification method based on an electrical impedance tomography measurement framework, characterized by comprising the following specific steps:
step one: dividing the brain region: constructing a plane perpendicular to the line connecting the forehead center and the occiput center, translating the plane from the forehead center toward the occiput center and stopping when it reaches the temples on both sides, the brain region swept by the plane being Λ1; continuing to translate the plane toward the occiput center and stopping when it reaches the area behind both ears, the brain region swept in this stage being Λ2; the remaining unswept brain region being Λ3;
step two: classifying the patient condition by type, the final classes being: healthy, Λ1 hemorrhage, Λ1 ischemia, Λ2 hemorrhage, Λ2 ischemia, Λ3 hemorrhage, and Λ3 ischemia, 7 cases in total;
step three: designating a large number of patients with known lesions or health status as the training-data patient group, placing electrode No. 1 of a 16-electrode EIT acquisition system at the forehead center, and arranging the 16 electrodes counterclockwise at equal intervals on the same horizontal plane of the head of the first patient in the training-data patient group;
step four: adopting safe current excitation in the opposite mode and voltage detection in the adjacent mode, namely first taking electrodes No. 1 and No. 9 as the excitation pair and, in turn, electrodes No. 2-3, 3-4, 4-5, 5-6, 6-7, 7-8, 10-11, 11-12, 12-13, 13-14, 14-15, and 15-16 as detection pairs, obtaining 12 boundary measurement voltages stored as group 1; then taking electrodes No. 2 and No. 10 as the excitation pair with electrodes No. 3-4 as the first detection pair and acquiring data in the same manner, until the last group of 12 boundary measurement voltages is obtained with electrodes No. 16 and No. 8 as the excitation pair, 16 groups of boundary voltage measurement data being obtained in total, each group containing 12 boundary measurement voltages;
step five: data expansion: in order to improve the classification performance of the neural network and feed more data information into it, transforming the original 12 × 16 EIT voltage data; taking the first group of data acquired with electrodes No. 1 and No. 9 as the excitation pair, linearly interpolating the No. 15-16 and No. 2-3 voltages to obtain the No. 16-1 and No. 1-2 extended voltages, and linearly interpolating the No. 7-8 and No. 10-11 voltages to obtain the No. 8-9 and No. 9-10 extended voltages, the first group thereby being expanded to the 16 voltages No. 1-2, 2-3, 3-4, 4-5, 5-6, 6-7, 7-8, 8-9, 9-10, 10-11, 11-12, 12-13, 13-14, 14-15, 15-16, and 16-1; then taking the second group of data acquired with electrodes No. 2 and No. 10 as the excitation pair, performing the analogous operation, and so on, finally converting the EIT voltage data to size 16 × 16;
step six: judging whether data have been acquired for all patients in the training-data patient group, proceeding to step seven if so, and otherwise returning to step three to acquire data from the next patient;
step seven: building the training neural network, the structure of which consists, in order, of jumper block I, jumper block II, jumper block I, a global average pooling layer, and a fully connected layer, wherein the structure of jumper block I is:
(1) convolution block one, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence, wherein the convolutional filter size is 3 × 3 with stride 1 and the activation layer uses the ReLU function, the output after convolution block one being expressed as:
$$y_{m,n} = f\Big(\sum_{x}\sum_{y} W_{x,y}\, a_{x+m,\,y+n} + b\Big)$$

wherein

$$f(t) = \max(0, t)$$
where y is the output of convolution block one, a is the input of convolution block one, W is its weight matrix, b is the bias, x and y are the row and column indices of the input matrix, m and n are the row and column indices of the output matrix, f(t) is the ReLU function, and t is the input variable of the activation layer;
(2) convolution block two, composed of a convolutional layer and a batch normalization layer connected in sequence, wherein the convolutional filter size is 3 × 3 with stride 1;
(3) convolution block three, composed of a convolutional layer with filter size 1 × 1 and stride 1, the input of which is the input a of convolution block one;
(4) a ReLU activation layer, which adds the output of convolution block two of jumper block I to the output of convolution block three and passes the sum through the ReLU function for activation;
the structure of jumper block II is:
(1) convolution block four, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence, wherein the convolutional filter size is 3 × 3 with stride 2 and the activation layer uses the ReLU function;
(2) convolution block two;
(3) convolution block five, composed of a convolutional layer with filter size 1 × 1 and stride 2, the input of which is the input of convolution block four;
(4) a ReLU activation layer, which adds the output of convolution block two of jumper block II to the output of convolution block five and passes the sum through the ReLU function for activation;
the global average pooling layer compresses the output of the last jumper block I, converts it into a column vector, and feeds it into the fully connected layer; the fully connected layer contains 7 output neurons corresponding to the classes of step two, and its activation function is SoftMax, so the output of this layer is written as:
$$Y_i = \frac{e^{y_i}}{\sum_{j=1}^{7} e^{y_j}}$$
where Y is the output of the layer and y is the input of the activation function;
step eight: setting the training parameters: the weight and bias parameters of each layer are randomly assigned between 0 and 1, and the initial learning rate is set to $10^{-4}$, decaying slowly to a minimum of $10^{-6}$ as training progresses; since the output has 7 classes, the mathematical form of the loss function is:
$$L = -\sum_{i=1}^{7} p_i \ln q_i$$
where p is the label value, q is the network output value, and i is the index of the output neuron; the optimizer adopts a hybrid momentum weighted estimation method, namely
$$W \leftarrow W - lr \cdot \frac{SGD}{\sqrt{RMS} + o}$$

wherein

$$SGD \leftarrow \lambda_1 \cdot SGD + (1 - \lambda_1)\, g$$

$$RMS \leftarrow \lambda_2 \cdot RMS + (1 - \lambda_2)\, g^2$$
where lr is the learning rate, SGD is the intermediate variable of the general momentum, RMS is the intermediate variable of the squared momentum, g is the gradient of the loss with respect to the weights, λ1 and λ2 are momentum hyperparameters, and o is a small constant $10^{-8}$;
Step nine: performing neural network training, performing corresponding classification on patient data according to CT and MRI images of a training data patient group, setting labels, normalizing all acquired data and classification results, sending the normalized data and the classification results into a neural network with jumper connection, obtaining initial output through the neural network, calculating errors of the initial output through the loss function set in the step eight, updating weight parameters of each layer through an optimizer in a back propagation mode, immediately performing next training until all data are trained, namely completing one training round, then inputting the training of the neural network from the first group of data again, performing 100 rounds in total, completing the training of all the neural networks, and storing the weight and bias data at the moment;
step ten: building the prediction neural network, the structure of which consists, in order, of jumper block I, jumper block II, jumper block I, a global average pooling layer, and a fully connected layer, wherein the structure of jumper block I is:
(1) convolution block one, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence, wherein the convolutional filter size is 3 × 3 with stride 1 and the activation layer uses the ReLU function;
(2) convolution block two, composed of a convolutional layer and a batch normalization layer connected in sequence, wherein the convolutional filter size is 3 × 3 with stride 1;
(3) convolution block three, composed of a convolutional layer with filter size 1 × 1 and stride 1, the input of which is the input a of convolution block one;
(4) a ReLU activation layer, which adds the output of convolution block two of jumper block I to the output of convolution block three and passes the sum through the ReLU function for activation;
the structure of jumper block II is:
(1) convolution block four, composed of a convolutional layer, a batch normalization layer, and an activation layer connected in sequence, wherein the convolutional filter size is 3 × 3 with stride 2 and the activation layer uses the ReLU function;
(2) convolution block two;
(3) convolution block five, composed of a convolutional layer with filter size 1 × 1 and stride 2, the input of which is the input of convolution block four;
(4) a ReLU activation layer, which adds the output of convolution block two of jumper block II to the output of convolution block five and passes the sum through the ReLU function for activation;
the global average pooling layer compresses the output of the last jumper block I, converts it into a column vector, and feeds it into the fully connected layer; the outputs of the seven neurons of the fully connected layer are activated by the SoftMax function, and the seven resulting outputs are the probabilities that the current data correspond to the seven cases;
step eleven: prediction data acquisition: placing electrode No. 1 of the 16-electrode EIT acquisition system at the forehead center and arranging the 16 electrodes counterclockwise at equal intervals on the same horizontal plane of the head of the patient to be examined, performing safe current excitation in the opposite mode and voltage detection in the adjacent mode in the same manner as in step four, and expanding the data in the same manner as in step five, finally obtaining the data to be predicted;
step twelve: data prediction: setting the weights and biases of the prediction neural network to the data stored in step nine, feeding the data to be predicted into the prediction neural network for prediction so as to obtain the probabilities corresponding to the 7 cases, and selecting the case with the highest probability as the final prediction result.
CN202111284030.6A 2021-11-01 2021-11-01 Apoplexy position classification method based on electrical impedance tomography measurement framework Pending CN114041773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111284030.6A CN114041773A (en) 2021-11-01 2021-11-01 Apoplexy position classification method based on electrical impedance tomography measurement framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111284030.6A CN114041773A (en) 2021-11-01 2021-11-01 Apoplexy position classification method based on electrical impedance tomography measurement framework

Publications (1)

Publication Number Publication Date
CN114041773A true CN114041773A (en) 2022-02-15

Family

ID=80206683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111284030.6A Pending CN114041773A (en) 2021-11-01 2021-11-01 Apoplexy position classification method based on electrical impedance tomography measurement framework

Country Status (1)

Country Link
CN (1) CN114041773A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115444392A (en) * 2022-08-31 2022-12-09 河南师范大学 Nonlinear stroke analysis method based on electrical impedance tomography
CN115444392B (en) * 2022-08-31 2024-05-14 河南师范大学 Nonlinear cerebral apoplexy analysis method based on electrical impedance tomography
CN115481681A (en) * 2022-09-09 2022-12-16 武汉中数医疗科技有限公司 Artificial intelligence-based breast sampling data processing method
CN115481681B (en) * 2022-09-09 2024-02-06 武汉中数医疗科技有限公司 Mammary gland sampling data processing method based on artificial intelligence

Similar Documents

Publication Publication Date Title
JP7276915B2 (en) Method and System for Individualized Prediction of Psychiatric Disorders Based on Monkey-Human Species Transfer of Brain Function Maps
CN114041773A (en) Apoplexy position classification method based on electrical impedance tomography measurement framework
WO2023178916A1 (en) Brain atlas individualized method and system based on magnetic resonance and twin graph neural network
CN107330949A (en) A kind of artifact correction method and system
CN111329469A (en) Arrhythmia prediction method
Shao et al. SPECTnet: a deep learning neural network for SPECT image reconstruction
CN102282587B (en) For the treatment of the system and method for image
CN114663355A (en) Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage
CN114463493A (en) Transcranial magnetic stimulation electric field rapid imaging method and model based on coding and decoding structure
Shi et al. Residual convolutional neural network-based stroke classification with electrical impedance tomography
Wang et al. Deep transfer learning-based multi-modal digital twins for enhancement and diagnostic analysis of brain mri image
CN109646000B (en) Node electrical impedance imaging method based on local subdivision
CN109935321B (en) Risk prediction system for converting depression patient into bipolar affective disorder based on functional nuclear magnetic resonance image data
Gao et al. EIT-CDAE: A 2-D electrical impedance tomography image reconstruction method based on auto encoder technique
Baskar et al. An Accurate Prediction and Diagnosis of Alzheimer’s Disease using Deep Learning
CN116869504A (en) Data compensation method for cerebral ischemia conductivity distribution reconstruction
JP7394133B2 (en) System and method for diagnosing cardiac ischemia and coronary artery disease
Chen et al. Influence of Hyperparameter on the Untrue Prior Detection in Discrete Transformation-based EIT Algorithm
Ko et al. U-Net-based approach for automatic lung segmentation in electrical impedance tomography
CN111951228B (en) Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model
Kim et al. Deep Network-Based Feature Selection for Imaging Genetics: Application to Identifying Biomarkers for Parkinson's Disease
TWI780396B (en) Evaluation method and evaluation system of suicidal ideation based on multi-feature MRI and artificial intelligence
TWM595486U (en) Evaluation system of suicidal idea based on multi-feature magnetic resonance imaging and artificial intelligence
CN115251889B (en) Method for describing characteristics of dynamic connection network of functional magnetic resonance image
Dar et al. Effect of training epoch number on patient data memorization in unconditional latent diffusion models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination