CN114663355A - Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage - Google Patents

Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage

Info

Publication number
CN114663355A
CN114663355A (application number CN202210192628.0A)
Authority
CN
China
Prior art keywords
layer
voltage values
neural network
voltage
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210192628.0A
Other languages
Chinese (zh)
Inventor
施艳艳
武跃辉
王萌
高振
李亚婷
杨坷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Normal University
Original Assignee
Henan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Normal University
Priority to CN202210192628.0A
Publication of CN114663355A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06T 5/00 Image enhancement or restoration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10104 Positron emission tomography [PET]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a hybrid neural network method for reconstructing a cerebral hemorrhage conductivity distribution image. A neural network that combines a Convolutional Neural Network (CNN) with a Transformer network is used to reconstruct the conductivity distribution image of cerebral hemorrhage; it directly constructs the nonlinear mapping between the boundary voltage measurements and the cerebral hemorrhage conductivity distribution, and automatically extracts prior information from the training data set and encodes it into the neural network. Compared with current linear image reconstruction algorithms, the method requires neither computation of a sensitivity matrix nor selection of hyper-parameters during image reconstruction, so the image reconstruction quality is good.

Description

Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage
Technical Field
The invention belongs to the technical field of bioelectrical impedance tomography, and particularly relates to a hybrid neural network method for reconstructing a cerebral hemorrhage conductivity distribution image.
Background
Cerebral hemorrhage is caused by rupture of a cerebral artery and can lead to irreversible damage to nervous tissue. The global incidence of cerebral hemorrhage is 60-80 per 100,000 people and the mortality rate is about 40%, the highest among acute cerebrovascular diseases. Rapid diagnosis and aggressive treatment are therefore very important for reducing patient mortality and morbidity.
At present, Computed Tomography (CT), Computed Tomography Angiography (CTA) and Magnetic Resonance Imaging (MRI) are the most widely used craniocerebral imaging techniques in the diagnosis and treatment of patients with cerebral hemorrhage. Although these techniques provide accurate, high-resolution images, they still have drawbacks: CT and CTA expose the patient to radiation, while MRI is time-consuming and costly. In addition, the equipment is unsuitable for on-site diagnosis and treatment of patients in the early stage of the disease and for long-term clinical monitoring, which makes it difficult for clinical staff to follow the progression of the patient's condition in real time and may delay treatment. Therefore, a real-time image reconstruction method for cerebral hemorrhage is urgently needed in clinical practice.
Electrical Impedance Tomography (EIT) is a very promising imaging technique: safe current excitation is applied to electrodes attached to the surface of an object, the voltages on the remaining electrodes are measured, and the internal conductivity distribution of the object is reconstructed from the measurement data using a reconstruction algorithm. Compared with technologies such as CT, EIT is radiation-free, inexpensive, fast and portable, and has therefore attracted wide attention in biomedical imaging. However, the EIT image reconstruction problem is inherently ill-posed, and the quality of the reconstructed image is susceptible to noise and modeling errors. In addition, in craniocerebral EIT the existing image reconstruction algorithms usually need to compute a sensitivity matrix, and because the low conductivity of the skull allows relatively little current to enter the cranium, the intracranial sensitivity is extremely low, which degrades the quality of the reconstructed image. Reconstructing accurate and clear craniocerebral conductivity distribution images with EIT is therefore difficult, and a new imaging method is urgently needed.
Disclosure of Invention
The invention solves this technical problem by providing a hybrid neural network method for reconstructing a cerebral hemorrhage conductivity distribution image. Several nonlinear processing modules are used to construct a hybrid neural network that can encode a long sequence of voltage measurements and build the complex relationship between the input (voltage measurements) and the output (cerebral hemorrhage conductivity distribution). The invention also provides a method for constructing a training data set, so that voltage measurements can be fed directly into the trained network to reconstruct images without computing a sensitivity matrix or selecting hyper-parameters. The proposed hybrid neural network method reconstructs cerebral conductivity distribution images rapidly and with good reconstructed image quality.
The invention adopts the following technical scheme for solving the technical problems: a hybrid neural network method for reconstructing a cerebral hemorrhage conductivity distribution image is characterized by comprising the following specific steps:
step S1: assuming that the human body lies horizontal to the ground with the face upwards, a scanning plane perpendicular to the ground is established, and the human cranium is scanned with spiral computed tomography to obtain a craniocerebral detection plane containing the center point of the cerebral hemorrhage;
step S2: the shape and structure of the cranium are determined from the computed tomography image of the craniocerebral detection plane, a craniocerebral model is constructed in a computer, and the electrical impedance information of the different tissues of the human cranium is fused into the corresponding tissue structures;
step S3: an electrical impedance tomography system with 16 electrodes is adopted; electrode No. 1 is placed at the highest point of the cranium in the detection plane, and the 16 electrodes are then attached counterclockwise at equal intervals along the closed curve where the detection plane intersects the scalp surface, the plane region enclosed by this closed curve being the detection plane; using opposite current excitation and adjacent voltage measurement, safe current excitation is first applied to electrode No. 1 while electrode No. 9 is grounded, and voltages are measured on the 12 adjacent electrode pairs 2-3, 3-4, ..., 6-7, 7-8 and 10-11, 11-12, ..., 14-15, 15-16; the 12 measured voltages form the first group of measurement data; electrodes 2-16 are then excited in turn in the same way, the electrode opposite the excitation electrode is grounded, and voltages are measured on the remaining electrode pairs, yielding 16 groups of measurement data of 12 voltage values each; after every electrode has been excited, 192 voltage values are obtained in total;
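As a concrete illustration of this measurement pattern (not part of the patent text; the 1-based electrode numbering and the wrap-around of pair 16-1 are the only assumptions), a short sketch that enumerates the 12 measurement pairs per excitation and the 192 values per frame could look like this:

```python
# Sketch of the opposite-current / adjacent-voltage pattern of step S3: for each excitation
# electrode e, the opposite electrode e+8 is grounded and voltages are read on the 12
# adjacent pairs that touch neither the excitation nor the grounded electrode.
N_ELECTRODES = 16

def measurement_pairs(exc):                        # exc: 1-based excitation electrode number
    gnd = (exc + 8 - 1) % N_ELECTRODES + 1         # opposite (grounded) electrode
    pairs = []
    for i in range(1, N_ELECTRODES + 1):
        j = i % N_ELECTRODES + 1                   # adjacent pair (i, j), wrapping 16 -> 1
        if exc in (i, j) or gnd in (i, j):
            continue                               # skip pairs involving excitation or ground
        pairs.append((i, j))
    return pairs                                   # 12 pairs per excitation

protocol = [measurement_pairs(e) for e in range(1, N_ELECTRODES + 1)]
assert all(len(p) == 12 for p in protocol)         # 16 excitations x 12 pairs = 192 voltages
```

For excitation at electrode 1 this reproduces exactly the pairs 2-3, ..., 7-8 and 10-11, ..., 15-16 listed above.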
step S4: the craniocerebral conductivity distribution Δσ and the corresponding voltage measurements ΔU of different patients, or of the same patient at different stages of cerebral hemorrhage, are acquired with the above electrical impedance tomography procedure; each measured craniocerebral conductivity distribution Δσ and its corresponding voltage measurements ΔU form a sample S, and multiple samples are measured to form the training data set D;
step S5: for the voltage measurement ΔU in a sample, which contains 16 groups of measurement data, data enhancement is performed on each group; for each group, the voltage values measured on the 7 electrodes counterclockwise and the 7 electrodes clockwise from the excitation electrode are summed to obtain enhanced voltage values U_skip[i], i = 1, 2, ..., 5, defined as follows:
U_skip[1]: sums of 2 consecutive adjacent voltage values, giving 10 voltage values in total;
U_skip[2]: sums of 3 consecutive adjacent voltage values, giving 8 voltage values in total;
U_skip[3]: sums of 4 consecutive adjacent voltage values, giving 6 voltage values in total;
U_skip[4]: sums of 5 consecutive adjacent voltage values, giving 4 voltage values in total;
U_skip[5]: sums of 6 consecutive adjacent voltage values, giving 2 voltage values in total;
after data enhancement each group yields 30 enhanced voltage values, so that together with the 12 original measurements the voltage measurement ΔU in each sample is enhanced to 672 voltage values, and the data-enhanced training data set D is used to train the network;
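A minimal numpy sketch of one possible reading of this enhancement (the split of each 12-value group into the two 6-value sides of the excitation electrode is an assumption, not stated explicitly) shows how the counts 10, 8, 6, 4, 2 and the total of 672 arise:

```python
import numpy as np

# Sketch of the "skip" data enhancement of step S5: the 12 adjacent-pair voltages of one
# excitation are split into the 6 values on the counter-clockwise side and the 6 on the
# clockwise side of the excitation electrode, and sums over sliding windows of 2..6
# consecutive values are appended to the original measurements.
def enhance_group(u12):
    u12 = np.asarray(u12, dtype=float)
    halves = (u12[:6], u12[6:])                    # two sides of the excitation electrode
    enhanced = []
    for k in range(2, 7):                          # U_skip[1]..U_skip[5]  <->  window 2..6
        for half in halves:
            for start in range(6 - k + 1):
                enhanced.append(half[start:start + k].sum())
    return np.concatenate([u12, enhanced])         # 12 original + 30 enhanced = 42 values

sample = np.random.rand(16, 12)                    # 16 excitations x 12 measurements
delta_u = np.concatenate([enhance_group(g) for g in sample])
assert delta_u.size == 672                         # matches the enhanced sample size above
```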
step S6: a hybrid neural network is constructed, which mainly comprises a CNN (Convolutional Neural Network) module, a Transformer module and an MLP (Multilayer Perceptron) module;
step S601: during forward propagation, the sequence of voltage measurements ΔU ∈ R^672 is taken as the input of the hybrid neural network;
step S602: the CNN module extracts the features of the voltage measurement sequence and converts low-dimensional information into high-dimensional information; the CNN module consists of three CNN units, each comprising a convolution layer (Conv1d), a batch normalization (BN) layer and a ReLU (Rectified Linear Unit) activation function; the convolution layers in the CNN units are all one-dimensional convolutions with a kernel size of 5, and the output sizes of the three CNN units are [128,1,224], [512,1,74] and [768,1,24], respectively, where the numbers in brackets denote the output dimension, the number of channels and the sequence length;
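A minimal PyTorch sketch of such a CNN feature extractor follows; the channel counts are taken from the stated output sizes (read here as channels, batch, length), while the strides and paddings are assumptions chosen so that a length-672 input reproduces the stated sequence lengths 224, 74 and 24:

```python
import torch
import torch.nn as nn

# Sketch of the CNN module of step S602: three Conv1d + BatchNorm + ReLU units with
# kernel size 5; stride 3 and the paddings below are assumptions that yield the
# sequence lengths 672 -> 224 -> 74 -> 24.
class CNNModule(nn.Module):
    def __init__(self):
        super().__init__()
        def unit(c_in, c_out, padding):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=5, stride=3, padding=padding),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
            )
        self.units = nn.Sequential(unit(1, 128, 1), unit(128, 512, 0), unit(512, 768, 0))

    def forward(self, x):               # x: (batch, 1, 672) enhanced voltage sequence
        return self.units(x)            # -> (batch, 768, 24)

feats = CNNModule()(torch.randn(2, 1, 672))
print(feats.shape)                      # torch.Size([2, 768, 24])
```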
step S603: the output of the CNN module is concatenated (spliced) with the Class token, and the Position embedding is then added to the result of the concatenation; both the Class token and the Position embedding are learnable parameters;
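A minimal sketch of this token-preparation step, assuming a token dimension of 768 and a CNN output length of 24 (so that 25 tokens remain after the Class token is attached):

```python
import torch
import torch.nn as nn

# Sketch of step S603: attach the learnable Class token to the CNN features and add the
# learnable Position embedding; shapes follow the CNN output assumed above.
class TokenEmbedding(nn.Module):
    def __init__(self, dim=768, seq_len=24):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))            # learnable Class token
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len + 1, dim))  # learnable Position embedding

    def forward(self, feats):                    # feats: (batch, 768, 24) from the CNN module
        tokens = feats.permute(0, 2, 1)          # -> (batch, 24, 768)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) # splice the Class token -> (batch, 25, 768)
        return tokens + self.pos_embed           # embed the Position information

tokens = TokenEmbedding()(torch.randn(2, 768, 24))   # -> (2, 25, 768)
```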
step S604: the Transformer module mainly comprises a multi-head self-attention (MSA) layer and an MLP layer; to improve the training speed and enhance the robustness of the model, layer normalization is applied before the input of the MSA layer and of the MLP layer and is computed as

μ^k = (1/H) Σ_{i=1}^{H} a_i^k
σ^k = sqrt( (1/H) Σ_{i=1}^{H} (a_i^k - μ^k)^2 )
ā_i^k = α (a_i^k - μ^k) / sqrt((σ^k)^2 + ε) + β

where a_i^k is the i-th unit in the k-th layer; H is the number of units in the k-th layer; α and β are learnable affine transformation parameters; μ^k and σ^k denote the mean and standard deviation of the units in the k-th layer; ā_i^k is the i-th normalized unit in the k-th layer; and ε is set to 0.00001 to avoid a zero denominator;
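Written out explicitly, this normalization could be sketched as follows (torch.nn.LayerNorm implements the same computation in practice):

```python
import torch

# Sketch of the layer normalization of step S604 with eps = 1e-5; alpha and beta are the
# learnable affine parameters applied per unit.
def layer_norm(a, alpha, beta, eps=1e-5):
    # a: (..., H) activations of one layer; alpha, beta: shape (H,)
    mu = a.mean(dim=-1, keepdim=True)                               # mean of the H units
    sigma = a.var(dim=-1, unbiased=False, keepdim=True).sqrt()      # standard deviation
    return alpha * (a - mu) / torch.sqrt(sigma ** 2 + eps) + beta

x = torch.randn(25, 768)
out = layer_norm(x, torch.ones(768), torch.zeros(768))
```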
step S605: in the MSA layer of the Transformer module, the input x is mapped to Q, K and V by the learnable parameter matrix P_qkv according to

[Q, K, V] = P_qkv x

where Q, K and V are 768 × 768 matrices;
Q, K and V are each divided into h groups, and the output of the MSA layer is computed as

SA_h = softmax(Q_h K_h^T / sqrt(d_h)) V_h
MSA(x) = Concatenate(SA_1, ..., SA_h)

where Q_h, K_h and V_h denote the h-th groups of Q, K and V, d_h denotes the dimension of each group, T denotes the matrix transpose, softmax denotes the normalized exponential function, and Concatenate denotes the splicing function;
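A minimal PyTorch sketch of such an MSA layer is given below; the head count h = 12 (so that d_h = 64), the bias-free linear projection used for P_qkv and the final output projection are assumptions rather than details taken from the text:

```python
import torch
import torch.nn as nn

# Sketch of the MSA layer of step S605: the input is projected to Q, K, V, split into h
# heads, attended with softmax(Q_h K_h^T / sqrt(d_h)) V_h, and the heads are concatenated.
class MSA(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.heads, self.d_h = heads, dim // heads
        self.p_qkv = nn.Linear(dim, 3 * dim, bias=False)    # learnable P_qkv
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                   # x: (batch, 25, 768)
        b, n, d = x.shape
        q, k, v = self.p_qkv(x).chunk(3, dim=-1)
        # split Q, K, V into h groups (heads): (batch, heads, n, d_h)
        q, k, v = (t.view(b, n, self.heads, self.d_h).transpose(1, 2) for t in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_h ** 0.5, dim=-1)
        sa = attn @ v                                       # SA_h per head
        out = sa.transpose(1, 2).reshape(b, n, d)           # Concatenate the h groups
        return self.proj(out)

y = MSA()(torch.randn(2, 25, 768))                          # -> (2, 25, 768)
```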
step S606: the MLP layer in the Transformer module consists of a linear layer, a ReLU activation function and a Dropout layer, and performs information extraction and fusion on the output of the MSA layer;
step S607: after the Transformer module, the Class token is extracted from the data stream and mapped to the output of the network by the MLP module, which consists of a linear layer, a Dropout layer, a linear layer and a Sigmoid activation function connected in sequence;
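The two MLP blocks might be sketched as follows; the Dropout probability and the output size M (number of image pixels) are assumptions not given in the text, and the layer order follows the description above:

```python
import torch.nn as nn

# Sketch of the MLP layer inside the Transformer module (step S606) and of the MLP output
# module that maps the Class token to the conductivity sequence (step S607).
def transformer_mlp(dim=768, p=0.1):
    # Linear -> ReLU -> Dropout, applied to the MSA output
    return nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Dropout(p))

def output_mlp(dim=768, m_pixels=1024, p=0.1):
    # Linear -> Dropout -> Linear -> Sigmoid, mapping the Class token to M pixel values
    return nn.Sequential(nn.Linear(dim, dim), nn.Dropout(p),
                         nn.Linear(dim, m_pixels), nn.Sigmoid())

head = output_mlp()
```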
step S608: the craniocerebral conductivity distribution Δσ̂ ∈ R^M is the output of the hybrid neural network, where M denotes the number of pixels of the reconstructed image;
step S7: the hybrid neural network is trained with a loss function L(θ) that penalizes the error between the network output Δσ̂_i and the true craniocerebral conductivity distribution Δσ_i and adds a regularization term on the network parameters θ weighted by the regularization parameter λ; training is performed with the Adam (Adaptive Moment Estimation) optimizer, with the learning rate and the regularization parameter set to 0.0001 and 0.00001, respectively;
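A minimal training-step sketch is given below under the assumption that the loss is the mean squared error between Δσ̂_i and Δσ_i plus an l2 penalty λ on the weights (realized here through Adam's weight_decay); HybridNet and the data loader are placeholders for the assembled network and the enhanced data set:

```python
import torch
import torch.nn as nn

# Sketch of step S7: Adam with lr = 0.0001 and weight_decay = 0.00001 standing in for the
# regularization parameter lambda; `model` maps (batch, 1, 672) voltages to (batch, M) pixels.
def train(model, loader, epochs=100, lr=1e-4, lam=1e-5):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=lam)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for delta_u, delta_sigma in loader:        # (batch, 1, 672), (batch, M)
            optimizer.zero_grad()
            loss = mse(model(delta_u), delta_sigma)
            loss.backward()
            optimizer.step()
    return model
```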
step S8: the voltage measurements of a cerebral hemorrhage patient are obtained with the above EIT measurement procedure and input into the trained hybrid neural network; the craniocerebral conductivity distribution sequence is predicted by forward propagation and, combined with the position information of the reconstructed image pixels, the craniocerebral conductivity distribution image is reconstructed.
Compared with the prior art, the invention has the following advantages and beneficial effects. A data set containing voltage measurements and craniocerebral conductivity distributions is constructed with electrical impedance tomography, and data enhancement is applied to the data set for neural network training, which improves the robustness of the network and the training speed. By exploiting the feature extraction capability of the CNN and the long-sequence encoding capability of the Transformer network, a hybrid neural network for reconstructing craniocerebral conductivity distribution images is constructed that learns long-sequence information with only a few network layers. Compared with traditional linear image reconstruction algorithms, the method does not need to compute a sensitivity matrix, and once the network is trained no hyper-parameters need to be selected when reconstructing a conductivity distribution image, which improves both the reconstruction speed and the reconstruction accuracy. The results show that the reconstructed cerebral hemorrhage images have a clean background and good image reconstruction quality.
Drawings
FIG. 1 is a block flow diagram of the present invention;
FIG. 2 is a schematic diagram of electrical impedance tomography of the cranium;
FIG. 3 is a schematic diagram of a constructed training data set;
FIG. 4 is a block diagram of the proposed hybrid neural network;
FIG. 5 is a block diagram of the Transformer network;
fig. 6 is a diagram of a brain hemorrhage image reconstruction result.
Detailed Description
The invention will be further explained with reference to the drawings. To improve the image reconstruction quality of craniocerebral EIT, the invention provides a neural network method that combines a Convolutional Neural Network (CNN) with a Transformer network for reconstructing the cerebral hemorrhage conductivity distribution image; it directly constructs the nonlinear mapping between the boundary voltage measurements and the cerebral hemorrhage conductivity distribution, and automatically extracts prior information from the training data set and encodes it into the neural network. Compared with current linear image reconstruction algorithms, the method requires neither computation of a sensitivity matrix nor selection of hyper-parameters during image reconstruction, so the image reconstruction quality is good.
Referring to fig. 1, the hybrid neural network method for reconstructing a cerebral hemorrhage conductivity distribution image of the present invention mainly includes constructing a training data set, enhancing the data set, training the hybrid neural network, and reconstructing a craniocerebral conductivity distribution, and the present invention is described in detail with reference to fig. 1:
step S1: assuming that a human body is horizontal to the ground, the face of the human body faces upwards, a scanning plane vertical to the ground is established, the human cranium is scanned by utilizing spiral computed tomography to obtain a cranium detection plane containing a cerebral hemorrhage position central point, the shape structure of the cranium is determined, and the electrical impedance information of different tissues of the human cranium is fused into different tissue structures to form a three-dimensional cranium model.
Step S2: as shown in fig. 2, in the 16-electrode EIT system for craniocerebral, a No. 1 electrode is placed at the highest point of the craniocerebral in a detection plane, and then 16 electrodes are attached to a closed curve which is formed by intersecting the detection plane and the surface of the scalp in an equidistant surrounding manner counterclockwise, wherein the area of the plane surrounded by the closed curve is the detection plane. In a relative current excitation and adjacent voltage measurement mode, firstly, safe current excitation is applied to the No. 1 electrode, meanwhile, the No. 9 electrode is grounded, voltage values are measured on 12 electrode pairs (2-3, 3-4 … … 6-7, 7-8; 10-11, 11-12 … … 14-15, 15-16), and 12 voltage values are measured in total to serve as a first set of measurement data. According to the same method, 2-16 electrodes are excited in sequence, the electrode opposite to the excited electrode is grounded, voltage values are measured on the other electrode pairs, 16 groups of measurement data are obtained, and each group of measurement data comprises 12 voltage measurement values. After each electrode was excited in a traversal, a total of 192 voltage values were obtained. Through the method, the craniocerebral conductivity distribution delta sigma and the corresponding voltage measurement value delta U of different patients or the same patient in different time periods of cerebral hemorrhage are obtained, the craniocerebral conductivity distribution delta sigma measured each time and the corresponding voltage measurement value delta U form a sample S, and a plurality of groups of samples are measured to form a training data set D.
Step S3: for the voltage measurement Δ U in the above one sample, which includes 16 sets of measurement data, data enhancement is performed on each set of measurement data, respectively. For each set of measurement data, adding the voltage values measured at the 7 electrodes in the counterclockwise direction and the 7 electrodes in the clockwise direction of the excitation electrode to obtain an enhanced voltage value Uskip[i](i=1,2,…,5)。Uskip[i]Is represented as follows:
Uskip[1]means that 10 voltage values can be obtained in total by adding the continuously measured adjacent 2 voltage values;
Uskip[2]means that a total of 8 voltage values can be obtained by adding up consecutive measured adjacent 3 voltage values;
Uskip[3]means that 6 voltage values can be obtained in total by adding up adjacent 4 voltage values measured in succession;
Uskip[4]means that the continuously measured adjacent 5 voltage values are added, and a total of 4 voltage values can be obtained;
Uskip[5]it is shown that a total of 2 voltage values can be obtained by adding up the consecutive measured 6 adjacent voltage values.
After each group of data is subjected to data enhancement, 30 voltage values can be obtained, and the voltage measurement value delta U in each sample is enhanced to 672 voltage values. The resulting training data set is shown in FIG. 3, where each sample contains
Figure BDA0003524914200000061
And
Figure BDA0003524914200000062
and M is the pixel number of the reconstructed image.
Step S4: the structural diagram of the constructed hybrid neural network is shown in fig. 4, and the hybrid neural network mainly comprises a CNN module, a Transformer module and an MLP module:
(1) The sequence of voltage measurements ΔU ∈ R^672 is used as the input of the hybrid neural network.
(2) The CNN module extracts the features of the voltage measurement sequence and converts low-dimensional information into high-dimensional information. The CNN module is composed of three CNN units, each comprising a convolution layer, a batch normalization layer and a ReLU activation function. The convolution layers in the CNN units are all one-dimensional convolutions with a kernel size of 5. The output sizes of the three CNN units are [128,1,224], [512,1,74] and [768,1,24], respectively, where the numbers in parentheses denote the output dimension, the number of channels and the sequence length.
(3) The output of the CNN module is concatenated (spliced) with the Class token, and the Position embedding is then added to the result of the concatenation. Both the Class token and the Position embedding are learnable parameters.
(4) As shown in fig. 5, the Transformer module is mainly composed of an MSA layer and an MLP layer. Layer normalization is applied before the input of the MSA layer and of the MLP layer and is computed as

μ^k = (1/H) Σ_{i=1}^{H} a_i^k
σ^k = sqrt( (1/H) Σ_{i=1}^{H} (a_i^k - μ^k)^2 )
ā_i^k = α (a_i^k - μ^k) / sqrt((σ^k)^2 + ε) + β

where a_i^k is the i-th unit in the k-th layer; H is the number of units in the k-th layer; α and β are learnable affine transformation parameters; μ^k and σ^k denote the mean and standard deviation of the units in the k-th layer; ā_i^k is the i-th normalized unit in the k-th layer; and ε is set to 0.00001 to avoid a zero denominator.
(5) In the MSA layer of the Transformer module, the input x is mapped to Q, K and V by the learnable parameter matrix P_qkv according to

[Q, K, V] = P_qkv x

where Q, K and V are each 768 × 768 matrices.
Q, K and V are each divided into h groups, and the output of the MSA layer is computed as

SA_h = softmax(Q_h K_h^T / sqrt(d_h)) V_h
MSA(x) = Concatenate(SA_1, ..., SA_h)

where Q_h, K_h and V_h denote the h-th groups of Q, K and V, d_h denotes the dimension of each group, T denotes the matrix transpose, softmax denotes the normalized exponential function, and Concatenate denotes the splicing function.
(6) As shown in fig. 5, the MLP layer in the Transformer module performs information extraction and fusion on the output of the MSA layer; it is composed of a linear layer, a ReLU activation function and a Dropout layer.
(7) After the Transformer module, the Class token is extracted from the data stream and then mapped to the output of the network using the MLP module. The MLP module is formed by sequentially connecting a linear layer, a Dropout layer, a linear layer and a Sigmoid activation function.
(8) The craniocerebral conductivity distribution Δσ̂ ∈ R^M is the output of the hybrid neural network.
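A compact end-to-end sketch of how these modules could be assembled is shown below; the encoder depth, head count, Dropout probability and output size M are assumptions, and PyTorch's nn.TransformerEncoderLayer with pre-normalization is used in place of the hand-written MSA and MLP blocks purely for brevity:

```python
import torch
import torch.nn as nn

# Sketch of the full hybrid network: CNN feature extractor + Class token / Position
# embedding + Transformer encoder + MLP head mapping the Class token to M pixel values.
class HybridNet(nn.Module):
    def __init__(self, m_pixels=1024, dim=768, depth=6, heads=12):
        super().__init__()
        def cnn(c_in, c_out, pad):
            return nn.Sequential(nn.Conv1d(c_in, c_out, 5, stride=3, padding=pad),
                                 nn.BatchNorm1d(c_out), nn.ReLU())
        self.cnn = nn.Sequential(cnn(1, 128, 1), cnn(128, 512, 0), cnn(512, 768, 0))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 25, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.Dropout(0.1),
                                  nn.Linear(dim, m_pixels), nn.Sigmoid())

    def forward(self, delta_u):                       # delta_u: (batch, 1, 672)
        tokens = self.cnn(delta_u).permute(0, 2, 1)   # -> (batch, 24, 768)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        cls_out = self.encoder(tokens)[:, 0]          # extract the Class token
        return self.head(cls_out)                     # (batch, M) conductivity sequence

img = HybridNet()(torch.randn(2, 1, 672))             # -> (2, 1024)
```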
Step S5: the hybrid neural network is trained on the training data set with a loss function L(θ) that penalizes the error between the network output Δσ̂_i and the true craniocerebral conductivity distribution Δσ_i and adds a regularization term on the network parameters θ weighted by the regularization parameter λ. Training is performed with the Adam (Adaptive Moment Estimation) optimizer, with the learning rate and the regularization parameter set to 0.0001 and 0.00001, respectively.
Step S6: the voltage measurements of other cerebral hemorrhage patients are obtained and input into the trained hybrid neural network; the craniocerebral conductivity distribution sequence is predicted and, combined with the position information of the reconstructed image pixels, the craniocerebral conductivity distribution image is reconstructed.
Fig. 6 shows the cerebral hemorrhage image reconstruction results of different algorithms, comparing reconstructions of hemorrhages of different sizes, positions and shapes. The images reconstructed by the proposed method have a clean background without artifacts, and the reconstructed hemorrhage position, shape and size match the real model; the images reconstructed by the traditional Tikhonov algorithm contain more artifacts and the reconstructed target is inaccurate, while the images reconstructed by the CNN (convolutional neural network) and the FCNN (fully connected neural network) show some artifacts and deformation of the hemorrhage region.
Meanwhile, to analyze the reconstructed cerebral hemorrhage images quantitatively, the images are compared using the Correlation Coefficient (CC) and the Root Mean Square Error (RMSE); a CC closer to 1 and an RMSE closer to 0 indicate a better image. A test data set containing 200 samples was constructed, and the mean CC and mean RMSE were calculated to verify the image reconstruction performance of the proposed method. The results in Table 1 show that both the CC and the RMSE of the images reconstructed by the proposed method are the best, which verifies the superiority of the proposed method.
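For reference, the two metrics could be computed as in the following sketch (not the authors' evaluation code):

```python
import numpy as np

# Correlation coefficient (closer to 1 is better) and root mean square error (closer to 0
# is better) between a reconstructed conductivity image and the ground truth.
def cc(pred, true):
    return np.corrcoef(pred.ravel(), true.ravel())[0, 1]

def rmse(pred, true):
    return np.sqrt(np.mean((pred.ravel() - true.ravel()) ** 2))

pred, true = np.random.rand(32, 32), np.random.rand(32, 32)
print(cc(pred, true), rmse(pred, true))
```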
TABLE 1 Average CC and average RMSE (the table is reproduced as an image in the original publication)
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (1)

1. A hybrid neural network method for reconstructing a cerebral hemorrhage conductivity distribution image is characterized by comprising the following specific steps:
step S1: assuming that the human body lies horizontal to the ground with the face upwards, a scanning plane perpendicular to the ground is established, and the human cranium is scanned with spiral computed tomography to obtain a craniocerebral detection plane containing the center point of the cerebral hemorrhage;
step S2: the shape and structure of the cranium are determined from the computed tomography image of the craniocerebral detection plane, a craniocerebral model is constructed in a computer, and the electrical impedance information of the different tissues of the human cranium is fused into the corresponding tissue structures;
step S3: an electrical impedance tomography system with 16 electrodes is adopted; electrode No. 1 is placed at the highest point of the cranium in the detection plane, and the 16 electrodes are then attached counterclockwise at equal intervals along the closed curve where the detection plane intersects the scalp surface, the plane region enclosed by this closed curve being the detection plane; using opposite current excitation and adjacent voltage measurement, safe current excitation is first applied to electrode No. 1 while electrode No. 9 is grounded, and voltages are measured on the 12 adjacent electrode pairs 2-3, 3-4, ..., 6-7, 7-8 and 10-11, 11-12, ..., 14-15, 15-16; the 12 measured voltages form the first group of measurement data; electrodes 2-16 are then excited in turn in the same way, the electrode opposite the excitation electrode is grounded, and voltages are measured on the remaining electrode pairs, yielding 16 groups of measurement data of 12 voltage values each; after every electrode has been excited, 192 voltage values are obtained in total;
step S4: the craniocerebral conductivity distribution Δσ and the corresponding voltage measurements ΔU of different patients, or of the same patient at different stages of cerebral hemorrhage, are acquired with the above electrical impedance tomography procedure; each measured craniocerebral conductivity distribution Δσ and its corresponding voltage measurements ΔU form a sample S, and multiple samples are measured to form the training data set D;
step S5: for the voltage measurement ΔU in a sample, which contains 16 groups of measurement data, data enhancement is performed on each group; for each group, the voltage values measured on the 7 electrodes counterclockwise and the 7 electrodes clockwise from the excitation electrode are summed to obtain enhanced voltage values U_skip[i], i = 1, 2, ..., 5, defined as follows:
U_skip[1]: sums of 2 consecutive adjacent voltage values, giving 10 voltage values in total;
U_skip[2]: sums of 3 consecutive adjacent voltage values, giving 8 voltage values in total;
U_skip[3]: sums of 4 consecutive adjacent voltage values, giving 6 voltage values in total;
U_skip[4]: sums of 5 consecutive adjacent voltage values, giving 4 voltage values in total;
U_skip[5]: sums of 6 consecutive adjacent voltage values, giving 2 voltage values in total;
after data enhancement each group yields 30 enhanced voltage values, so that together with the 12 original measurements the voltage measurement ΔU in each sample is enhanced to 672 voltage values, and the data-enhanced training data set D is used to train the network;
step S6: a hybrid neural network is constructed, which mainly comprises a CNN (Convolutional Neural Network) module, a Transformer module and an MLP (Multilayer Perceptron) module;
step S601: during forward propagation, the sequence of voltage measurements ΔU ∈ R^672 is taken as the input of the hybrid neural network;
step S602: the CNN module extracts the features of the voltage measurement sequence and converts low-dimensional information into high-dimensional information; the CNN module consists of three CNN units, each comprising a convolution layer (Conv1d), a batch normalization (BN) layer and a ReLU (Rectified Linear Unit) activation function; the convolution layers in the CNN units are all one-dimensional convolutions with a kernel size of 5, and the output sizes of the three CNN units are [128,1,224], [512,1,74] and [768,1,24], respectively, where the numbers in brackets denote the output dimension, the number of channels and the sequence length;
step S603: the output of the CNN module is concatenated (spliced) with the Class token, and the Position embedding is then added to the result of the concatenation; both the Class token and the Position embedding are learnable parameters;
step S604: the Transformer module mainly comprises a multi-head self-attention (MSA) layer and an MLP layer; to improve the training speed and enhance the robustness of the model, layer normalization is applied before the input of the MSA layer and of the MLP layer and is computed as

μ^k = (1/H) Σ_{i=1}^{H} a_i^k
σ^k = sqrt( (1/H) Σ_{i=1}^{H} (a_i^k - μ^k)^2 )
ā_i^k = α (a_i^k - μ^k) / sqrt((σ^k)^2 + ε) + β

where a_i^k is the i-th unit in the k-th layer; H is the number of units in the k-th layer; α and β are learnable affine transformation parameters; μ^k and σ^k denote the mean and standard deviation of the units in the k-th layer; ā_i^k is the i-th normalized unit in the k-th layer; and ε is set to 0.00001 to avoid a zero denominator;
step S605: in the MSA layer of the Transformer module, the input x is mapped to Q, K and V by the learnable parameter matrix P_qkv according to

[Q, K, V] = P_qkv x

where Q, K and V are 768 × 768 matrices;
Q, K and V are each divided into h groups, and the output of the MSA layer is computed as

SA_h = softmax(Q_h K_h^T / sqrt(d_h)) V_h
MSA(x) = Concatenate(SA_1, ..., SA_h)

where Q_h, K_h and V_h denote the h-th groups of Q, K and V, d_h denotes the dimension of each group, T denotes the matrix transpose, softmax denotes the normalized exponential function, and Concatenate denotes the splicing function;
step S606: the MLP layer in the Transformer module consists of a linear layer, a ReLU activation function and a Dropout layer, and performs information extraction and fusion on the output of the MSA layer;
step S607: after the Transformer module, the Class token is extracted from the data stream and mapped to the output of the network by the MLP module, which consists of a linear layer, a Dropout layer, a linear layer and a Sigmoid activation function connected in sequence;
step S608: the craniocerebral conductivity distribution Δσ̂ ∈ R^M is the output of the hybrid neural network, where M denotes the number of pixels of the reconstructed image;
step S7: the hybrid neural network is trained with a loss function L(θ) that penalizes the error between the network output Δσ̂_i and the true craniocerebral conductivity distribution Δσ_i and adds a regularization term on the network parameters θ weighted by the regularization parameter λ; training is performed with the Adam (Adaptive Moment Estimation) optimizer, with the learning rate and the regularization parameter set to 0.0001 and 0.00001, respectively;
step S8: the voltage measurements of a cerebral hemorrhage patient are obtained with the above EIT measurement procedure and input into the trained hybrid neural network; the craniocerebral conductivity distribution sequence is predicted by forward propagation and, combined with the position information of the reconstructed image pixels, the craniocerebral conductivity distribution image is reconstructed.
CN202210192628.0A 2022-02-28 2022-02-28 Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage Pending CN114663355A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210192628.0A CN114663355A (en) 2022-02-28 2022-02-28 Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210192628.0A CN114663355A (en) 2022-02-28 2022-02-28 Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage

Publications (1)

Publication Number Publication Date
CN114663355A true CN114663355A (en) 2022-06-24

Family

ID=82027630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210192628.0A Pending CN114663355A (en) 2022-02-28 2022-02-28 Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage

Country Status (1)

Country Link
CN (1) CN114663355A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524123A (en) * 2023-04-20 2023-08-01 深圳市元甪科技有限公司 Three-dimensional electrical impedance tomography image reconstruction method and related equipment
CN116524123B (en) * 2023-04-20 2024-02-13 深圳市元甪科技有限公司 Three-dimensional electrical impedance tomography image reconstruction method and related equipment
CN117274413A (en) * 2023-09-01 2023-12-22 南京航空航天大学 EIT-based conductivity image reconstruction method, system and equipment
CN117274413B (en) * 2023-09-01 2024-04-05 南京航空航天大学 EIT-based conductivity image reconstruction method, system and equipment
CN117011673A (en) * 2023-10-07 2023-11-07 之江实验室 Electrical impedance tomography image reconstruction method and device based on noise diffusion learning
CN117011673B (en) * 2023-10-07 2024-03-26 之江实验室 Electrical impedance tomography image reconstruction method and device based on noise diffusion learning

Similar Documents

Publication Publication Date Title
CN114663355A (en) Hybrid neural network method for reconstructing conductivity distribution image of cerebral hemorrhage
CN112200306B (en) Electrical impedance imaging method based on deep learning
CN109584323A (en) The information constrained coeliac disease electrical impedance images method for reconstructing of ultrasonic reflection
CN108629816A (en) The method for carrying out thin layer MR image reconstruction based on deep learning
Kauppinen et al. Sensitivity distribution visualizations of impedance tomography measurement strategies
CN111311703B (en) Electrical impedance tomography image reconstruction method based on deep learning
CN111968222B (en) Three-dimensional ultrasonic reconstruction method for human tissue in non-static state
Chen et al. Hybrid learning-based cell aggregate imaging with miniature electrical impedance tomography
Zhang et al. V-shaped dense denoising convolutional neural network for electrical impedance tomography
CN110720915A (en) Brain electrical impedance tomography method based on GAN
Shao et al. SPECTnet: a deep learning neural network for SPECT image reconstruction
US20230024401A1 (en) Implicit Neural Representation Learning with Prior Embedding for Sparsely Sampled Image Reconstruction and Other Inverse Problems
CN115496720A (en) Gastrointestinal cancer pathological image segmentation method based on ViT mechanism model and related equipment
Shi et al. Intracerebral Hemorrhage Imaging based on Hybrid Deep Learning with Electrical Impedance Tomography
CN114549682A (en) Optimization method for electrical impedance lung imaging image
CN114041773A (en) Apoplexy position classification method based on electrical impedance tomography measurement framework
Gao et al. EIT-CDAE: A 2-D electrical impedance tomography image reconstruction method based on auto encoder technique
CN116869504A (en) Data compensation method for cerebral ischemia conductivity distribution reconstruction
Rajeev et al. A review on magnetic resonance spectroscopy for clinical diagnosis of brain tumour using deep learning
CN111798452A (en) Carotid artery handheld ultrasonic image segmentation method, system and device
CN115736877A (en) Focusing type cardiac electrical impedance imaging method
CN113838105A (en) Diffusion microcirculation model driving parameter estimation method, device and medium based on deep learning
Liu et al. Pool-UNet: Ischemic Stroke Segmentation from CT Perfusion Scans Using Poolformer UNet
CN113066145B (en) Deep learning-based rapid whole-body diffusion weighted imaging method and related equipment
Shi et al. Densely Connected Convolutional Neural Network-Based Invalid Data Compensation for Brain Electrical Impedance Tomography

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination