CN115618284A - Multi-center fetal heart monitoring intelligent interpretation method based on unsupervised domain adaptation - Google Patents

Multi-center fetal heart monitoring intelligent interpretation method based on unsupervised domain adaptation

Info

Publication number
CN115618284A
CN115618284A (application CN202211080268.1A)
Authority
CN
China
Prior art keywords
domain
fetal heart
layer
signal
heart rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211080268.1A
Other languages
Chinese (zh)
Inventor
魏航
陈莉
费悦
陈沁群
全斌
李丽
罗晓牧
杨星宇
刘立军
林伙旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Sunray Medical Apparatus Co ltd
Original Assignee
Guangzhou Sunray Medical Apparatus Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Sunray Medical Apparatus Co ltd filed Critical Guangzhou Sunray Medical Apparatus Co ltd
Priority to CN202211080268.1A priority Critical patent/CN115618284A/en
Publication of CN115618284A publication Critical patent/CN115618284A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

The invention discloses a multi-center fetal heart monitoring intelligent interpretation method based on unsupervised domain adaptation. Features are extracted from the preprocessed fetal heart rate, uterine contraction and fetal movement signals, and the resulting "shared features" are fed into the unsupervised domain-adaptation algorithm based on a domain-adversarial neural network (DANN) designed by the invention: a label classifier built on a bidirectional gated recurrent unit produces the classification result, a convolutional neural network and a deep neural network serve as the feature extractor and the domain discriminator respectively, and a gradient reversal layer inverts the output gradient of the domain discriminator during back-propagation, so that the model can interpret multi-center fetal monitoring signals more accurately. Compared with a deep learning model without domain adaptation, the proposed DANN algorithm markedly improves the generalization of the source-domain deep learning model on the target domain; compared with current mainstream DA algorithms, the DANN algorithm of the invention achieves better overall performance.

Description

Multi-center fetal heart monitoring intelligent interpretation method based on unsupervised domain adaptation
Technical Field
The invention relates to a domain-adaptive deep learning method, in particular to a multi-center-oriented domain-adaptive intelligent interpretation method for fetal heart monitoring, used to realize intelligent cross-domain interpretation and classification of clinical multi-center CTG signals.
Background
Prenatal fetal monitoring is an effective measure for evaluating fetal health and development. The cardiotocogram (CTG) is the main tool for monitoring the health of a fetus in the womb. However, rule-based CTG interpretation has high sensitivity but low specificity, which may lead doctors to over-intervene with unnecessary measures such as caesarean section; intelligent classification of CTGs by machine learning to evaluate fetal health is therefore becoming a research focus.
Machine learning applications usually require a large amount of high-quality labeled data to train a model to good effect. In clinical practice, however, ensuring accurate case interpretation requires multiple specialist physicians to label each case repeatedly before the final label is determined, so the labor and financial cost of labeling samples is extremely high, labels for clinical cases are difficult to obtain, and large numbers of unlabeled medical data samples remain.
Multi-center clinical research can overcome the limitations of single-center studies and yield more data of higher clinical value. However, the prenatal CTG signal data sets obtained by multi-center research exhibit large inter-domain differences, caused by differences in the fetal monitoring instruments, hospitals, regions and operators that acquire the CTG signals. Traditional machine learning for intelligent CTG interpretation requires the training and test sets to be independently and identically distributed; since collected CTG signals usually come from pregnant women in a single hospital or region, a model trained on such data is severely limited and cannot generalize efficiently to new data sets.
In short, traditional machine learning methods for intelligent CTG interpretation lack generalization across multi-center clinical data; at the same time, because medical case labels are difficult to obtain, large amounts of unlabeled data go unused by existing methods, and interpretation accuracy remains low. How to exploit completely unlabeled data to improve the classification performance of a CTG signal interpretation model has therefore become a technical problem in the field.
Disclosure of Invention
To overcome the shortcomings of intelligent CTG interpretation in clinical multi-center fetal monitoring, the invention provides an intelligent interpretation method for fetal heart monitoring based on unsupervised domain adaptation.
The invention adopts the following technical scheme:
a multi-center fetal heart monitoring intelligent interpretation method based on unsupervised field self-adaptation comprises the following steps:
S1: acquire raw CTG signal data, including fetal heart rate, uterine contraction and fetal movement signals, from the source domain and the target domain.
S2: take the preprocessed fetal heart rate, uterine contraction and fetal movement signals, each of length d, as a multi-modal fused input and feed it into the feature extractor. The feature extractor comprises an embedding layer, a splicing layer, three convolutional layers and a batch normalization layer.
First, each signal point is one-hot encoded by the embedding layer and converted into a weighted two-dimensional matrix (d × m); the two-dimensional matrices of the three signals are then concatenated by the splicing layer into a (d × 3m) matrix, where m is the output dimension of the embedding layer. The concatenated data is fed into a convolution structure group consisting of three convolutional layers of (d × c) structure and a batch normalization layer, realizing automatic extraction of the CTG signal features, where c is the filter size of the convolutional layers.
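As a rough illustration of the shapes involved in this fusion step (the weights below are assumptions, not the patent's trained embedding; d = 900 and m = 4 follow the embodiment), the embedding and concatenation can be sketched as:

```python
import numpy as np

d, m = 900, 4  # signal length and embedding output dimension (m = 4 in the embodiment)
rng = np.random.default_rng(0)

# Three preprocessed 1-D signals of length d (synthetic placeholder values)
fhr = rng.normal(size=d)          # fetal heart rate (baseline-subtracted)
uc = rng.normal(size=d)           # uterine contraction pressure
fm = rng.integers(0, 2, size=d)   # fetal movement (binary)

def embed(signal, m):
    """Toy stand-in for the embedding layer: map each scalar sample to an
    m-dimensional vector, giving a (d x m) matrix."""
    w = np.linspace(0.25, 1.0, m)           # illustrative "learned" weights
    return signal[:, None] * w[None, :]     # shape (d, m)

# Concatenate the three (d x m) matrices along the feature axis -> (d x 3m)
fused = np.concatenate([embed(fhr, m), embed(uc, m), embed(fm, m)], axis=1)
```

The (d × 3m) array `fused` is what the convolution structure group would consume.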
S3: input the "shared features" output by the feature extractor into a label classifier and a domain discriminator respectively. The label classifier comprises a bidirectional gated recurrent unit (BiGRU) layer and a fully connected layer; the domain discriminator comprises fully connected layers, a dropout layer and a global average pooling layer.
In the label classifier, the shared features pass through the bidirectional gated recurrent unit layer: the (d × 3m) matrix is fed into each of the k gated recurrent units in both the forward and backward directions, and their results are computed and concatenated into a one-dimensional output vector of length 2k.
This one-dimensional vector is passed through a fully connected layer, the outputs of the two groups of gated recurrent units are compressed by the softmax function into the probabilities that the sample is normal or abnormal, and the class label corresponding to the higher probability is selected as the classification result;
in the domain discriminator, the shared features are first fed into a fully connected layer, which maps the feature representation to the sample label space;
a dropout layer is introduced to reduce overfitting by weakening the co-adaptation between hidden-layer nodes;
the dropout output is then fed into a fully connected layer for task learning, after which a global average pooling layer compresses its dimensionality;
finally, a fully connected layer classifies the data, a softmax function compresses the output into the probability of the domain the sample comes from, and the domain label corresponding to the higher probability is taken as the domain discrimination result.
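The final softmax-and-argmax decision used by both heads can be sketched as follows; the logits are hypothetical values, not outputs of the patent's model:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: subtract the max before exponentiating."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical final-layer outputs (logits) of the domain discriminator
logits = np.array([1.2, -0.4])           # [source-domain score, target-domain score]
probs = softmax(logits)                  # probabilities summing to 1
domain = ["source", "target"][int(np.argmax(probs))]
```

The label classifier works identically, with "normal"/"abnormal" in place of the two domain labels.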
As a preferred scheme of the multi-center-oriented domain-adaptive fetal heart monitoring intelligent interpretation method: the preprocessing of step S1 comprises processing the missing and abnormal values of the fetal heart rate, uterine contraction and fetal movement signals, and normalizing the fetal heart rate signal after its missing and abnormal values have been processed.
Preferably, the fetal heart rate normalization comprises calculating a baseline of the fetal heart rate signal and subtracting it from the signal processed for missing and abnormal values to obtain a standard fetal heart rate signal.
Preferably, the preprocessing of step S1 further comprises, before normalization, sliding-window segmentation of the fetal heart rate signal processed for missing and abnormal values, obtaining fetal heart rate segments of signal length not less than p.
Preferably, the sliding window segmentation process includes synchronously sliding window segmentation of the preprocessed fetal heart rate signal, the preprocessed uterine contraction signal and the preprocessed fetal movement signal.
Preferably, in step S3 a gradient reversal layer is introduced between the feature extractor and the domain discriminator, so that the gradient of the domain classification loss is automatically inverted before being back-propagated into the parameters of the feature extractor, realizing the adversarial loss and enabling the feature extractor to extract features that are both "domain-invariant" and "discriminable".
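A minimal framework-free sketch of the gradient reversal layer's behavior — identity in the forward pass, negated and scaled gradient in the backward pass; the lambda value is illustrative:

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer sketch: forward is the identity; backward
    multiplies the incoming gradient by -lambda, which is what turns the
    domain classification loss into an adversarial loss for the extractor."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # sign flip during back-propagation

grl = GradientReversal(lam=0.5)
x = np.array([1.0, 2.0, 3.0])
y = grl.forward(x)                          # identical to x
g = grl.backward(np.array([0.1, -0.2, 0.3]))  # negated, scaled gradient
```

In an autograd framework this would be implemented as a custom function with these two rules; the numpy class above only demonstrates the arithmetic.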
The invention has the following beneficial effects:
Firstly, the invention not only preprocesses the data according to the characteristics of the fetal heart rate, uterine contraction pressure and fetal movement signals, but also enhances the preprocessed signals so that their lengths are unified, further improving the model's classification and interpretation performance.
Secondly, the label classifier feeds the shared features output by the feature extractor into a bidirectional gated recurrent unit, which takes less time and discriminates better than other deep learning models.
Thirdly, the invention uses the adversarial idea to make the signal features extracted by the feature extractor simultaneously domain-invariant and label-discriminable, reducing the influence of inter-domain differences on model performance; the label information of the source domain and the features shared by the source and target domains are fully exploited to realize cross-domain interpretation by the intelligent CTG interpretation model, classifying the unlabeled mobile-terminal data.
Fourthly, compared with the mainstream domestic and foreign domain adaptation algorithms it does not adopt, the proposed algorithm achieves the best classification effect and is superior in handling the differences between clinical multi-center CTG signal data sets; it can effectively reduce the workload of medical staff, reduce fetal mortality and the caesarean section rate, avoid unnecessary medical intervention, and safeguard normal fetal growth and development.
Drawings
FIG. 1 is a schematic diagram of the overall structure of the unsupervised domain adaptation algorithm based on the domain-adversarial neural network of the present invention
Detailed Description
To make the object, technical scheme and beneficial effects of the present invention clearer, the multi-center-oriented domain-adaptive fetal heart monitoring intelligent interpretation method of the present invention is further described below with reference to the accompanying drawing and specific embodiments.
The invention uses two signal data sets, from a hospital central station and from home mobile terminals, as the source domain and target domain respectively. The central-station instrument recording the signals is a multi-bed wireless-probe fetal monitoring workstation SRF618A Pro, the mobile terminal is a remote fetal monitor SRF618B1, and both record at 1.25 Hz; the data were collected in 2016–2018 and 2018–2020 respectively. After interpretation and screening, the central station provides 16,355 CTG signal cases, comprising 11,998 normal, 4,326 suspicious and 31 abnormal cases; the mobile terminal provides 3,351 CTG signal cases, comprising 2,886 normal, 440 suspicious and 25 abnormal cases.
Example 1
The invention provides a multi-center fetal heart monitoring intelligent interpretation method based on unsupervised domain adaptation, comprising the following steps:
S1: acquire CTG signal data containing fetal heart rate, uterine contraction pressure and fetal movement signals for the source and target domains from the hospital central station and the mobile terminals respectively.
S2: carrying out data preprocessing on the signals to form a fusion multi-signal data set;
the preprocessing comprises interpolation or deletion processing of the fetal heart rate signal, the uterine contraction pressure signal and the fetal movement signal;
the preprocessing further comprises the step of carrying out standardization processing on the fetal heart rate signals subjected to interpolation or deletion processing;
the preprocessing further comprises the step of carrying out sliding window segmentation on the fetal heart rate signals subjected to interpolation or deletion processing before carrying out standardization processing on the fetal heart rate signals subjected to interpolation or deletion processing to obtain a fetal heart rate signal segment with the signal length p; wherein p is 900;
the normalization process comprises calculating a baseline of the fetal heart rate signal and subtracting it from the interpolated or deletion-processed fetal heart rate signal to obtain a standard fetal heart rate signal;
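The normalization D(t) = S(t) − B(t) can be sketched as below; the baseline estimator (a simple median over the whole segment) is an assumption, since the patent does not specify how B(t) is computed:

```python
import numpy as np

def normalize_fhr(s, baseline):
    """D(t) = S(t) - B(t): subtract the estimated baseline from the
    preprocessed fetal heart rate signal."""
    return s - baseline

# Synthetic example: FHR around 140 bpm with small fluctuations, p = 900 samples
s = 140.0 + np.sin(np.linspace(0.0, 6.28, 900))
baseline = float(np.median(s))        # illustrative baseline estimate
d_sig = normalize_fhr(s, baseline)    # standard (baseline-centred) signal
```

After subtraction the signal is centred near zero, so accelerations and decelerations appear as positive and negative excursions.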
s3: performing segmentation processing on the fused multi-signal data set, wherein the segmentation processing comprises synchronous sliding window segmentation on a preprocessed fetal heart rate signal, a preprocessed uterine contraction pressure signal and a preprocessed fetal movement signal to obtain a signal segment with the signal length of p; wherein p is 900.
S4: input the deep-learning data set into the feature extractor of a pre-trained unsupervised domain-adaptation model based on a domain-adversarial neural network; referring to FIG. 1, the model comprises the feature extractor, a label classifier and a domain discriminator. The feature extractor comprises an embedding layer, a splicing layer, three convolutional layers and a batch normalization layer; the label classifier comprises a BiGRU layer and a fully connected layer; the domain discriminator comprises fully connected layers, a dropout layer and a global average pooling layer;
S5: in the feature extractor, the one-dimensional fetal heart rate, uterine contraction pressure and fetal movement signal segments of length p are each input into the embedding layer, yielding (d × m) two-dimensional matrices, where m is the output dimension of the embedding layer and m is 4;
the (d × m) matrices of the fetal heart rate, uterine contraction pressure and fetal movement segments are fed into the splicing layer and concatenated into a (d × 3m) matrix;
the (d × 3m) matrix is then passed through the 3 convolution structure groups to obtain the shared features; the output dimensions of the three ReLU-activated convolutional layers are 128, 256 and 512 respectively.
S6: and inputting the shared characteristics into a label classifier and a domain discriminator respectively.
In the label classifier, the extracted shared features are input into a tanh-activated bidirectional gated recurrent unit layer of 2k units; the (d × 3m) matrix is fed into each forward and each backward gated recurrent unit for calculation and concatenation. k is the number of GRU units in one direction and 2k the number in the bidirectional layer, where k is 256 and 2k is 512;
the one-dimensional output vector of length 2k is input to a fully connected layer of n units, and the output is compressed by the softmax function into the probabilities that the sample is normal or abnormal, where n is 2;
and comparing the probabilities of the samples of the normal class and the abnormal class, and selecting the class label corresponding to the larger probability as a classification judgment result.
In the domain discriminator, the extracted shared features are first input into a fully connected layer of g units, which maps the feature representation to the sample label space with output dimension q1, where g is 2 and q1 is 256;
introducing a dropout layer structure, and reducing the overfitting phenomenon of the model by weakening the interaction between hidden layer nodes;
the dropout output is then fed into a fully connected layer of h units for task learning with output dimension q2, after which the global average pooling layer compresses the dimensionality, where h is 2 and q2 is 256;
finally, the data are classified in a fully connected layer of i units with output dimension q3, and the softmax function compresses the output into the probability of the domain the sample comes from, where i is 2 and q3 is 2;
the probabilities that the sample comes from the source domain and from the target domain are compared, and the domain label corresponding to the higher probability is selected as the domain discrimination result.
S7: the gradient of the domain classification loss passes through a gradient reversal layer between the domain discriminator and the feature extractor and is automatically inverted before reaching the parameters of the feature extractor, thereby realizing the adversarial loss.
Missing values and outliers of the raw fetal heart rate, uterine contraction pressure and fetal movement signals can be handled by interpolation or deletion, as is well known in the art. For the fetal heart rate signal: if abnormal values lie at the head or tail of the signal, they are deleted directly; if the fetal heart rate is below 40 bpm, interpolation is generally carried out from the signal data at both ends, and if the surrounding signal shows no abrupt rise or fall, interpolation is not considered; isolated fetal heart rate values are first replaced with the value "NaN" and then interpolated according to the situation. After the missing and abnormal values are processed, the fetal heart rate signal is normalized using D(t) = S(t) − B(t), 1 ≤ t ≤ K, where S(t) is the preprocessed fetal heart rate signal, B(t) is the baseline extracted from it, and K is the length of the preprocessed signal.
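A hedged sketch of the NaN-and-interpolate step for low or missing values (the 40 bpm threshold comes from the text; the treatment of head/tail values, which the patent deletes rather than interpolates, is simplified here):

```python
import numpy as np

def clean_fhr(fhr, low=40.0):
    """Sketch of missing/outlier handling: mark values below `low` bpm
    (including zero dropouts) as NaN, then linearly interpolate each gap
    from the valid samples at both ends."""
    x = fhr.astype(float).copy()
    x[x < low] = np.nan                    # replace outliers with NaN
    idx = np.arange(len(x))
    ok = ~np.isnan(x)
    x[~ok] = np.interp(idx[~ok], idx[ok], x[ok])  # linear interpolation
    return x

raw = np.array([140.0, 142.0, 0.0, 144.0, 30.0, 146.0])  # two dropouts/outliers
cleaned = clean_fhr(raw)
```

Both invalid samples are replaced by the midpoint of their valid neighbours.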
For the uterine contraction pressure signal: because the fetal heart rate and uterine contraction pressure signals are consistent, interpolation or deletion is applied to the uterine contraction signal synchronously when the fetal heart rate signal is preprocessed. However, since the uterine contraction signal differs substantially from the fetal heart rate, the processing flow is adjusted as follows: let the uterine contraction signal be U = (u₁, u₂, u₃, …, uₙ), where the first value of an abnormal-value or missing segment is denoted u_start; take the contraction signal segment u_(start−75 : start) and use its median u_med to update u_start. The remaining segments are preprocessed according to the above procedure.
For the fetal movement signal: it is extracted point by point from the uterine contraction signal and is consistent with it, so it is interpolated or deleted synchronously. The fetal movement signal takes only two values: 36864 indicates the presence of fetal movement and 0 its absence.
Because the fetal heart rate, uterine contraction and fetal movement signal lengths change in the same way during the earlier preprocessing, ending up in the range of 10 min to 60 min, the invention applies sliding-window processing to the preprocessed signals. When the signal length L_f < 15 min, the signal is deleted; when 15 min < L_f < 18 min, a 15-min segment is intercepted from the tail as the first part; when 18 min < L_f < 20 min, the window slides forward 3 min from the tail and a 15-min segment is intercepted as the second part; when L_f = 20 min, a 15-min segment is intercepted from the head as the third part.
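One possible reading of these windowing rules, in minutes (the boundary cases such as L_f being exactly 15 or 18 min are ambiguous in the text, and whether the windows accumulate across the rules is an assumption):

```python
def segment_rules(length_min):
    """Sketch of the sliding-window rules described above.
    Returns a list of (start, end) 15-min windows in minutes,
    or an empty list when the signal is discarded."""
    L = length_min
    if L < 15:
        return []                                # too short: delete
    if L < 18:
        return [(L - 15, L)]                     # one window from the tail
    if L < 20:
        return [(L - 15, L), (L - 18, L - 3)]    # second window slid forward 3 min
    # L >= 20: additionally a window from the head
    return [(L - 15, L), (L - 18, L - 3), (0, 15)]
```

Each returned window would then be cut from all three synchronized signals.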
The feature extractor of the invention comprises an embedding layer, a splicing layer, three convolutional layers and a batch normalization layer. The embedding layer one-hot encodes each signal point of the one-dimensional fetal heart rate, uterine contraction pressure and fetal movement segments of length d, converting each signal into a (d × 4) two-dimensional matrix. The splicing layer then concatenates the embedding outputs into a (d × 12) two-dimensional signal, which enters three convolution structure groups, each composed of a (d × c) convolutional layer and a batch normalization layer, where c is the filter size of the convolutional layer and determines the dimension of the output space. The output dimensions of the three convolutional layers are 128, 256 and 512 respectively.
In the bidirectional gated recurrent unit layer, the label classifier feeds the (d × 12) matrix output by the splicing layer into each forward and backward gated recurrent unit for calculation and concatenation, with the following formulas:

z_t = σ(W^(z) x_t + U^(z) h_{t−1})

r_t = σ(W^(r) x_t + U^(r) h_{t−1})

h̃_t = tanh(W x_t + U (r_t ⊙ h_{t−1}))

h_t = z_t ⊙ h_{t−1} + (1 − z_t) ⊙ h̃_t
where z_t denotes the update gate of the gated recurrent unit and r_t the reset gate. The one-dimensional output vector of length 2k formed from the gated recurrent units is then passed through a fully connected layer of n units; the i-th unit computes

y_i = w_i^T [h_k ; h′_k] + b_i

where h_k and h′_k denote the outputs of the forward and backward gated recurrent units respectively. Finally, the softmax function compresses the outputs of the two groups of units into the probabilities that the sample is normal or abnormal:

P(normal) = e^{y_1} / (e^{y_1} + e^{y_2})

P(abnormal) = e^{y_2} / (e^{y_1} + e^{y_2})
The class label corresponding to the higher probability is selected as the classification result.
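A single GRU update following the equations above can be sketched in plain numpy (bias terms omitted; the tiny dimensions and random weights are purely illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    """One gated-recurrent-unit update (bias terms omitted for brevity)."""
    z = sigmoid(Wz @ x_t + Uz @ h_prev)             # update gate z_t
    r = sigmoid(Wr @ x_t + Ur @ h_prev)             # reset gate r_t
    h_tilde = np.tanh(W @ x_t + U @ (r * h_prev))   # candidate state
    return z * h_prev + (1.0 - z) * h_tilde         # new hidden state h_t

# Tiny example: input dimension 3, hidden dimension 2, random weights
rng = np.random.default_rng(1)
Wz, Wr, W = [rng.normal(scale=0.1, size=(2, 3)) for _ in range(3)]
Uz, Ur, U = [rng.normal(scale=0.1, size=(2, 2)) for _ in range(3)]
h = gru_step(np.ones(3), np.zeros(2), Wz, Uz, Wr, Ur, W, U)
```

A bidirectional layer runs this recurrence once forward and once backward over the sequence and concatenates the two final states, giving the length-2k vector described above.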
In the domain adaptation algorithm based on the domain-adversarial neural network, a gradient reversal layer is introduced between the feature extractor and the domain discriminator, so that the gradient of the domain classification loss is automatically inverted before being back-propagated into the parameters of the feature extractor, realizing a GAN-like adversarial loss. In the standard domain-adversarial formulation the model loss is

E(θ_f, θ_y, θ_d) = Σ_{i=1}^{n} L_y^i(θ_f, θ_y) − λ Σ_{i=1}^{N} L_d^i(θ_f, θ_d)

(θ̂_f, θ̂_y) = argmin_{θ_f, θ_y} E(θ_f, θ_y, θ̂_d),   θ̂_d = argmax_{θ_d} E(θ̂_f, θ̂_y, θ_d)

where L_y and L_d are the label-classification and domain-classification losses. The parameter λ changes dynamically during training; in the standard schedule

λ_p = 2 / (1 + exp(−γ · p)) − 1

where p is the training progress and γ a constant.
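The λ ramp can be sketched as below, assuming the standard DANN schedule of Ganin et al. (the patent renders its expression only as an image, so the exact form and the usual choice γ = 10 are assumptions):

```python
import math

def grl_lambda(p, gamma=10.0):
    """Gradient-reversal coefficient ramping from 0 to (nearly) 1 as the
    training progress p goes from 0 to 1, which suppresses noisy domain
    gradients early in training."""
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```

At p = 0 the domain loss contributes nothing; by the end of training it is weighted almost fully.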
verification example 1
To verify the classification performance advantage of the BiGRU model in the label classifier, the invention selects seven other deep learning models — a recurrent neural network (RNN), a long short-term memory network (LSTM), a gated recurrent unit (GRU), a bidirectional recurrent neural network (BiRNN), a bidirectional long short-term memory network (BiLSTM), a convolutional neural network (CNN) and a deep neural network (DNN) — each used as the deep learning classifier to intelligently classify the source-domain CTG signal data; the interpretation performance of each model is tested and compared with that of BiGRU, with results shown in Table 1.
Each deep learning model learns the same training set, the same validation set is used to verify and output the convergence curve of the learning process, and a unified test performs the final classification, taking the label with the maximum output probability as the result, divided into normal and abnormal classes. Verification example 1 compares the models' accuracy, precision, sensitivity, specificity, F1 value, Kappa coefficient, MCC coefficient, AUC value and time. Accuracy is one of the most common model evaluation metrics in machine learning. Precision represents the proportion of correctly predicted positive samples among all samples predicted positive. Sensitivity represents the proportion of actual positive samples that are correctly predicted positive. Specificity represents the proportion of actual negative samples that are correctly predicted negative. For precision, sensitivity and specificity, the positive class is the abnormal class of the invention and the negative class is the normal class.
The calculation formula of the accuracy is as follows:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
The calculation formula of precision is:
Precision = TP / (TP + FP)
the sensitivity is calculated as:
Sensitivity = TP / (TP + FN)
The calculation formula of specificity is:
Specificity = TN / (TN + FP)
where TP is the number of positive samples predicted positive, TN the number of negative samples predicted negative, FN the number of positive samples predicted negative, and FP the number of negative samples predicted positive.
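The four metrics follow directly from these confusion-matrix counts; a small sketch with made-up counts:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, sensitivity and specificity from
    confusion-matrix counts (positive = abnormal class here)."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    sens = tp / (tp + fn)   # recall / true positive rate
    spec = tn / (tn + fp)   # true negative rate
    return acc, prec, sens, spec

# Hypothetical counts, not results from the patent's experiments
acc, prec, sens, spec = metrics(tp=40, tn=50, fp=10, fn=0)
```

With these counts the sketch yields accuracy 0.9, precision 0.8, sensitivity 1.0 and specificity ≈ 0.833.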
The F1 value is a comprehensive metric that considers both precision and recall; when the data are imbalanced, F1 is more representative than accuracy or sensitivity and reflects the overall performance of the model. The Kappa coefficient and the Matthews correlation coefficient (MCC) measure the consistency of the model's judgments and the quality of the classifier.
The formula for the calculation of the F1 value is:
F1 = 2 × Precision × Sensitivity / (Precision + Sensitivity)
Kappa coefficient:

Kappa = (p_0 − p_c) / (1 − p_c)

where p_0 is the observed accuracy and p_c = p_true + p_false, calculated as:

p_true = (TP + FN)(TP + FP) / N²

p_false = (TN + FP)(TN + FN) / N²

with N = TP + TN + FP + FN the total number of samples.
matthews correlation coefficient:
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
In addition, verification example 1 introduces the receiver operating characteristic (ROC) curve to evaluate model performance; to quantify the ROC, the area under the curve is defined as the AUC value, which lies in the range [0, 1]. The larger a model's AUC value, the better its classification performance. The abscissa and ordinate of the ROC are the false positive rate (FPR) and true positive rate (TPR) respectively, calculated as:
FPR = FP / (FP + TN)
TPR = TP / (TP + FN)
The model is optimal when FPR = 0 and TPR = 1. Therefore, the closer the ROC curve is to the upper left corner of the coordinate axes, the better the classification performance; the closer the ROC curve is to the diagonal, the worse the model performs, approaching random guessing.
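A minimal illustration (not from the patent) of tracing the ROC curve from predicted abnormal-class probabilities and computing the AUC value by the trapezoidal rule:

```python
def roc_auc(labels, scores):
    """Trace ROC points (FPR, TPR) for a binary problem and return the
    area under the curve (AUC) via the trapezoidal rule.
    labels: 1 = abnormal (positive class), 0 = normal (negative class)
    scores: predicted probability of the abnormal class"""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    # Sweep a decision threshold over every distinct score, highest first.
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(labels, scores) if s >= thr and y == 1)
        fp = sum(1 for y, s in zip(labels, scores) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    points.append((1.0, 1.0))
    # Trapezoidal integration of TPR over FPR gives the AUC in [0, 1].
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc

# Perfectly separable scores give AUC = 1; pure ties give AUC = 0.5.
_, auc_perfect = roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
_, auc_chance = roc_auc([0, 1], [0.5, 0.5])
```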
TABLE 1 comparison of discrimination abilities of respective deep learning models
[Table 1 data are reproduced as an image in the original publication.]
Remarking: time refers to the time required for each iteration of the model, in units: s
The results in Table 1 show that, compared with the other deep learning models, Example 1 achieves the highest accuracy, precision, specificity, Kappa and MCC values. Compared with DNN, Example 1 improves almost every evaluation index except running time, and it outperforms the recurrent neural network models RNN, LSTM, GRU, BiRNN and BiLSTM in both classification performance and efficiency.
Verification example 2
To verify the rationality of the domain adaptation algorithm based on the domain-adversarial neural network, the BiGRU-based classification model trained on the central-station data is applied directly to the mobile-terminal data, i.e. without applying any domain adaptation method, and the classification performance indexes of the model before and after domain adaptation are observed. The comparison results are shown in Table 2.
Table 2 comparison of DANN with BiGRU results without Domain Adaptation
[Table 2 data are reproduced as images in the original publication.]
The results in Table 2 show that, with the same deep learning classifier, applying the domain adaptation method clearly improves the classification performance of the model in the target domain: the accuracy reaches 70.26%, the sensitivity 90.23%, the average F1 value 78.19%, and the AUC value 75.21%. These experimental results show that the domain adaptation algorithm based on the domain-adversarial neural network designed in this work is scientific and reasonable, and achieves the goal of reducing the domain difference between the central-station and mobile-terminal data sets.
Verification example 3
To verify the effectiveness and scientific validity of the proposed domain-adversarial neural network based domain adaptation algorithm, the invention applies current mainstream models to perform domain adaptation between the central-station and mobile-terminal data sets and compares the performance of the different domain adaptation algorithms. The comparative analysis results are shown in Table 3.
TABLE 3 Experimental results of cross-domain intelligent interpretation of CTG signals by different domain adaptation algorithms
Model      Accuracy (%)  Precision (%)  Sensitivity (%)  F1 value (%)  AUC
DAN        44.17         63.41          13.03            21.62         0.5780
DSAN       59.59         59.70          97.26            73.99         0.5226
MRAN       55.29         59.96          62.76            53.78         0.5105
Example 1  70.26         68.99          90.23            78.19         0.7521
As can be seen from Table 3, compared with the current mainstream domain adaptation methods, the DANN algorithm proposed herein achieves the highest accuracy, F1 and AUC values on the mobile-terminal data set: relative to the DSAN model, the baseline with the highest accuracy, the accuracy improves by 10.67%; the precision improves by 5.58%, the F1 value by 4.2%, and the AUC value by 0.1741, reaching 0.7521. This further indicates the advantage of the proposed clinical multi-center oriented intelligent fetal monitoring algorithm for cross-domain intelligent interpretation of CTG signals.
In conclusion, the experimental results of different models with different hyper-parameter combinations were compared to obtain the optimal domain adaptation algorithm model for multi-center intelligent fetal monitoring. Compared with a BiGRU classifier trained only on source-domain data without a domain adaptation algorithm, the proposed domain adaptation algorithm markedly improves the classification performance of the model on the mobile terminal. Compared with the current mainstream domain adaptation algorithms DAN, DSAN and MRAN, the DANN model designed here performs better and shows a clear advantage on almost all indexes.
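For illustration only (the patent does not disclose source code), the gradient reversal layer at the heart of the DANN can be sketched framework-agnostically: it is the identity in the forward pass and multiplies incoming gradients by −λ in the backward pass. All names below are hypothetical.

```python
class GradientReversal:
    """Identity in the forward pass; scales the incoming gradient by
    -lam in the backward pass, so the shared feature extractor is
    pushed to *maximize* the domain discriminator's loss."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        return x  # features flow to the domain discriminator unchanged

    def backward(self, grad):
        # Sign flip: the discriminator's gradient becomes an adversarial
        # training signal for the layers below the reversal point.
        return [-self.lam * g for g in grad]

grl = GradientReversal(lam=0.5)
out = grl.forward([1.0, 2.0])
back = grl.backward([0.2, -0.4])
```

In practice this would be implemented as a custom autograd operation in the chosen deep learning framework; the sketch only shows the forward/backward contract.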
The above description covers only preferred embodiments of the present invention; the invention is not limited to these embodiments, and those skilled in the art may make equivalent modifications or replacements within the scope of the inventive concept.

Claims (5)

1. A multi-center oriented domain-adaptive fetal heart monitoring intelligent interpretation method, comprising the following steps:
s1: the method comprises the steps of taking preprocessed fetal heart rate, uterine contraction and fetal movement signals with the lengths of d as multi-mode fusion input signals, carrying out unique hot coding on each signal point through an embedding layer, converting the signals into a weighted two-dimensional matrix (d x m), and splicing the two-dimensional matrices of the three signals through a splicing layer to obtain a two-dimensional matrix (d x 3 m) vector. Where m is the output dimension of the embedding layer.
S2: inputting the processed data into a convolution structure group consisting of three convolutional layers with (d × c) structures and a batch normalization layer to automatically extract CTG signal features, where c is the filter size of the convolutional layer and determines the dimension of the output space.
S3: inputting the shared features obtained after feature extraction into a label classifier and a domain discriminator respectively, and introducing a gradient reversal layer between the feature extractor and the domain discriminator to realize the adversarial loss; the label classifier comprises a bidirectional gated recurrent unit, a fully connected layer and a softmax function; the domain discriminator comprises fully connected layers, a dropout layer and a global average pooling layer;
in the label classifier, the shared features pass through a bidirectional gated recurrent unit layer: the (d × 3m) two-dimensional matrix vector is input into each of k gated recurrent units in the forward and reverse directions for calculation and concatenation, producing a one-dimensional output vector of length 2k;
the one-dimensional output vector is then calculated through a fully connected layer, the outputs of the two groups of bidirectional gated recurrent units are compressed by a softmax function to obtain the probabilities that the sample is normal or abnormal, and the class label with the higher probability is selected as the classification result;
in the domain discriminator, the shared features are first input into a fully connected layer, which maps the feature representation to the sample label space;
a dropout layer is introduced to reduce overfitting of the model by weakening the interaction between hidden-layer nodes;
the output of the dropout layer is then input into a fully connected layer for task learning, after which its dimension is compressed by a global average pooling layer;
finally, the data enter a fully connected layer for classification, the output is compressed by a softmax function to obtain the probability of the domain from which the sample comes, and the domain label with the higher probability is taken as the domain identification result.
2. The multi-center oriented domain-adaptive fetal heart monitoring intelligent interpretation method according to claim 1, wherein the preprocessing in step S1 comprises processing missing values and abnormal values of the fetal heart rate, uterine contraction and fetal movement signals, and standardizing the fetal heart rate signal after the missing-value and abnormal-value processing.
3. The multi-center oriented domain-adaptive fetal heart monitoring intelligent interpretation method according to claim 2, wherein the fetal heart rate standardization comprises calculating a baseline value of the fetal heart rate signal and subtracting the baseline value from the fetal heart rate signal after missing-value and abnormal-value processing to obtain a standard fetal heart rate signal.
4. The multi-center oriented domain-adaptive fetal heart monitoring intelligent interpretation method according to claim 3, wherein the preprocessing in step S1 further comprises, before the standardization, performing sliding-window segmentation on the fetal heart rate signal after missing-value and abnormal-value processing to obtain fetal heart rate signal segments with a signal length of not less than p.
5. The multi-center oriented domain-adaptive fetal heart monitoring intelligent interpretation method according to claim 4, wherein the sliding-window segmentation comprises synchronously performing sliding-window segmentation on the preprocessed fetal heart rate, uterine contraction and fetal movement signals.
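As a hedged illustration of the synchronous sliding-window segmentation described in claims 4 and 5 (function and parameter names are hypothetical, not from the patent):

```python
def sliding_windows(fhr, uc, fm, window, step):
    """Synchronously segment the fetal heart rate (fhr), uterine
    contraction (uc) and fetal movement (fm) signals with a sliding
    window, so every modality is cut at the same sample positions.
    Tail segments shorter than `window` are dropped, giving each
    returned segment a fixed length."""
    assert len(fhr) == len(uc) == len(fm), "signals must be aligned"
    segments = []
    for start in range(0, len(fhr) - window + 1, step):
        end = start + window
        segments.append((fhr[start:end], uc[start:end], fm[start:end]))
    return segments

# Hypothetical toy signals: 10 samples each, window of 4, step of 3.
segs = sliding_windows(list(range(10)), list(range(10, 20)),
                       list(range(20, 30)), window=4, step=3)
```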
CN202211080268.1A 2022-09-05 2022-09-05 Multi-center fetal heart monitoring intelligent interpretation method based on unsupervised field self-adaption Pending CN115618284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211080268.1A CN115618284A (en) 2022-09-05 2022-09-05 Multi-center fetal heart monitoring intelligent interpretation method based on unsupervised field self-adaption


Publications (1)

Publication Number Publication Date
CN115618284A true CN115618284A (en) 2023-01-17

Family

ID=84859106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211080268.1A Pending CN115618284A (en) 2022-09-05 2022-09-05 Multi-center fetal heart monitoring intelligent interpretation method based on unsupervised field self-adaption

Country Status (1)

Country Link
CN (1) CN115618284A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination