CN111584072A - Neural network model training method suitable for small samples - Google Patents

Neural network model training method suitable for small samples

Info

Publication number
CN111584072A
Authority
CN
China
Prior art keywords
neural network
training
model
network
mathematical
Prior art date
Legal status
Granted
Application number
CN202010397594.XA
Other languages
Chinese (zh)
Other versions
CN111584072B (en
Inventor
王烁
李文江
Current Assignee
Suzhou Maikang Medical Technology Co ltd
Original Assignee
Suzhou Maikang Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Maikang Medical Technology Co ltd filed Critical Suzhou Maikang Medical Technology Co ltd
Priority to CN202010397594.XA priority Critical patent/CN111584072B/en
Publication of CN111584072A publication Critical patent/CN111584072A/en
Application granted granted Critical
Publication of CN111584072B publication Critical patent/CN111584072B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention provides a neural network model training method suitable for small samples. By selecting a suitable mathematical equation, physical quantities related to the medical diagnosis task are obtained and used, through multi-task learning, to provide additional guidance and regularization that assist the training of the neural network model, greatly reducing the required sample size. Moreover, the hidden-space features involved in the invention are closely related to the physiological process, so the reliability of the diagnostic result is greatly improved. The method thus achieves high-accuracy model training with small samples.

Description

Neural network model training method suitable for small samples
Technical Field
The invention relates to a method for training a neural network model for medical diagnosis by combining mathematical equations under small-sample conditions, and belongs to the fields of medicine and artificial intelligence.
Background
With the continuous development of machine learning methods, modern artificial intelligence methods represented by deep neural network algorithms can be applied to automatic medical diagnosis. Machine learning algorithms for medical diagnosis generally adopt a supervised learning framework: signals acquired by medical devices and the corresponding diagnostic results are provided as the input and output of a model, and the model parameters are then continuously adjusted to optimize an objective function. Since the number of parameters of a deep neural network often exceeds millions, a correspondingly large number of samples is required for training, which is difficult and expensive to obtain in the medical field. Insufficient sample size easily causes the model to overfit and reduces the accuracy of the diagnostic results. In addition, the black-box nature of deep neural networks makes the models lack interpretability, which affects the reliability of medical diagnostic results.
Mathematical equations are often used to characterize physiological processes in the human body; by inverting them from the measurement signals of medical devices, relevant physical quantities can be obtained that reflect physiological states and provide references for medical diagnosis. Compared with data-driven machine learning algorithms, mathematical equations are derived from theory and can be solved directly without sample-based training. Their disadvantage, however, is that they contain a series of assumptions and simplifications and therefore deviate from the true conditions in the human body.
Addressing these difficulties of deep neural networks in medical diagnosis, the invention combines mathematical equations related to the diagnostic result to provide additional supervision and regularization for the neural network through multi-task learning, greatly reducing the required sample size. In addition, the latent-variable space involved in the invention is closely associated with the physiological process, which improves the reliability of the diagnostic results.
Disclosure of Invention
The invention aims to obtain a neural network model that does not require a large training sample set and can process and diagnose signals acquired by medical devices. To this end, the invention provides a neural network model training method suitable for small samples, which obtains physical quantities related to the medical diagnosis task by selecting a suitable mathematical equation and provides additional guidance and regularization for the model through multi-task learning.
The technical scheme adopted by the invention is as follows:
A neural network model training method suitable for small samples obtains physical quantities related to the medical diagnosis task by selecting a suitable mathematical equation and provides additional guidance and regularization, through multi-task learning, to train the neural network model. The method specifically comprises the following steps:
(1) Collecting training samples: the training sample set comprises N medical-signal samples X = {X_1, X_2, …, X_N} and the corresponding diagnostic results y = {y_1, y_2, …, y_N}.
(2) Selecting a mathematical equation and obtaining the corresponding physical quantities: a mathematical equation describing the physiological process is selected, together with m physical quantities q = [q^(1), q^(2), …, q^(k), …, q^(m)] associated with the diagnosis in step (1), where q^(k) is the k-th physical quantity. For each training sample X_i, the corresponding physical quantities q_i are obtained by solving the mathematical equation, yielding the physical-quantity label set for the samples.
(3) Constructing a neural network training model: the neural network training model comprises a signal coding network E for coding medical signals into hidden spatial features, a diagnosis analysis network F for decoding the hidden spatial features into diagnosis results, and a mathematical decoding network M for decoding the hidden spatial features into physical quantities;
(4) constructing an objective function L: the objective function L comprises a main task and a mathematical auxiliary task:
L = D_1(y, F∘E(X)) + β · D_2(q, M∘E(X))
where D_1(·,·) and D_2(·,·) are metric functions that compare the deviation of the model's predictions from the true values: D_1(·,·) compares the diagnostic results and D_2(·,·) compares the physical quantities, and β is a hyper-parameter of the model.
(5) Training the deep neural network model: using the sample training set from step (1) and the physical quantities calculated in step (2), the parameters of the neural network training model are continuously updated by gradient back-propagation until the objective function L converges, completing the training.
Further, the medical signal is either a measurement obtained directly by a medical device or a result obtained by further processing the measurement signal.
Further, the signal encoding network E is a time-series neural network or a convolutional neural network.
Further, the diagnosis network F adopts a fully-connected neural network with a plurality of layers.
Further, the mathematical decoding network M adopts a fully-connected neural network of several layers.
The invention has the following beneficial effects: the training method obtains physical quantities related to the medical diagnosis task by selecting a suitable mathematical equation and provides additional supervision for model training, greatly reducing the required sample size. At the same time, the model is guided to generate a hidden space with physical meaning; the features of this hidden space are closely related to the physiological process, greatly improving the reliability of the model and its diagnostic results.
Drawings
FIG. 1 is a schematic diagram of the structure of the method of the present invention.
FIG. 2 is a flow chart of the method of the present invention applied to the model training in the diagnosis of cerebrovascular diseases.
FIG. 3 is a flow chart of the method of the present invention applied to the model training in the assessment of carotid atherosclerotic plaque stability.
Detailed Description
The invention provides a neural network model training method suitable for small samples, which obtains physical quantities related to the medical diagnosis task by selecting a suitable mathematical equation and provides additional guidance and regularization, through multi-task learning, to train the neural network model, greatly reducing the required sample size. Moreover, the hidden-space features involved in the invention are closely related to the physiological process, so the reliability of the diagnostic result is greatly improved. Specifically, the method comprises the following steps:
(1) Collecting training samples: the training sample set comprises N medical-signal samples X = {X_1, X_2, …, X_N} and the corresponding diagnostic results y = {y_1, y_2, …, y_N}. A medical-signal sample may be a measurement obtained directly by a medical device (for example, a blood flow velocity waveform obtained by Doppler ultrasound or a medical image obtained by a magnetic resonance scanner) or a result obtained by further processing the measurement signal (for example, an image segmentation labeled by a radiologist). The diagnostic result may take the form of categories, such as malignant/benign or high-risk/low-risk, corresponding to a classification problem in machine learning, or of a continuous variable, such as a risk score, corresponding to a regression problem in machine learning.
(2) Selecting a mathematical equation and obtaining the corresponding physical quantities: based on the relevant theory, a mathematical equation describing the physiological process is selected, together with m physical quantities q = [q^(1), q^(2), …, q^(k), …, q^(m)] related to the diagnostic result, where q^(k) is the k-th physical quantity. For each training sample X_i, the corresponding physical quantities q_i are obtained by solving the mathematical equation, yielding the physical-quantity label set for the samples.
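To make the data layout of steps (1) and (2) concrete, the following is a minimal sketch assuming PyTorch and NumPy; the class name SmallSampleDataset and the array shapes are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch only: the patent does not prescribe a framework or file format.
import numpy as np
import torch
from torch.utils.data import Dataset


class SmallSampleDataset(Dataset):
    """Pairs each medical signal X_i with its diagnosis y_i and the
    physical-quantity labels q_i obtained by solving the chosen equation."""

    def __init__(self, signals, diagnoses, physical_quantities):
        # signals: (N, ...) array, diagnoses: (N,) array, physical_quantities: (N, m) array
        assert len(signals) == len(diagnoses) == len(physical_quantities)
        self.X = torch.as_tensor(np.asarray(signals), dtype=torch.float32)
        self.y = torch.as_tensor(np.asarray(diagnoses), dtype=torch.float32)
        self.q = torch.as_tensor(np.asarray(physical_quantities), dtype=torch.float32)

    def __len__(self):
        return len(self.X)

    def __getitem__(self, i):
        return self.X[i], self.y[i], self.q[i]
```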
(3) Constructing the neural network training model and designing the network structure: as shown in the structural schematic diagram in fig. 1, the neural network training model comprises a signal encoding network E for encoding the medical signal into hidden-space features, a diagnostic analysis network F for decoding the hidden-space features into diagnostic results, and a mathematical decoding network M for decoding the hidden-space features into physical quantities.
Signal encoding network E: the input of this network is the medical signal X and the output is a hidden-space representation z of the signal. The function of the signal encoding network is to convert the high-dimensional medical signal X into a low-dimensional hidden space, i.e. z = E(X; w_E), where w_E denotes the parameters of the encoding network. Different types of neural networks can be selected for different signal forms, such as a time-series (recurrent) neural network for a time series or a convolutional neural network for a medical image.
Diagnostic analysis network F: the input of this network is the hidden-space representation z of the signal and the output is the diagnostic result. The diagnostic analysis network predicts the diagnostic result from the hidden-space representation of the signal, i.e. ŷ = F(z; w_F), where w_F denotes the parameters of the diagnostic analysis network. The diagnostic analysis network F may employ several fully connected layers.
Mathematical decoding network M: the input of this network is the hidden-space representation z of the signal and the output is the specified physical quantities. The mathematical decoding network predicts the physical quantities from the hidden-space representation of the signal, providing additional supervision for model training and guiding the model to form a hidden space with physical meaning, i.e. q̂ = M(z; w_M), where w_M denotes the parameters of the mathematical decoding network. The mathematical decoding network M may employ several fully connected layers.
The main task of the neural network training model is to directly predict the corresponding diagnostic result from the medical signal, and the auxiliary task is to predict the relevant physical quantities in the mathematical equations.
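The three sub-networks and their composition can be sketched as follows, again assuming PyTorch; the helper mlp, the class name MultiTaskModel and the hidden-layer widths are illustrative choices, since the patent fixes only the roles of E, F and M at this point.

```python
# Minimal sketch of the three sub-networks E, F and M, assuming PyTorch.
import torch
import torch.nn as nn


def mlp(sizes, out_activation=None):
    """Small helper building a fully connected network with ReLU hidden layers."""
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers = layers[:-1]                                 # drop the trailing ReLU
    if out_activation is not None:
        layers.append(out_activation)
    return nn.Sequential(*layers)


class MultiTaskModel(nn.Module):
    def __init__(self, signal_dim, latent_dim, n_quantities):
        super().__init__()
        self.E = mlp([signal_dim, 64, latent_dim])       # signal -> hidden space z
        self.F = mlp([latent_dim, 64, 1], nn.Sigmoid())  # z -> diagnosis
        self.M = mlp([latent_dim, 64, n_quantities])     # z -> physical quantities

    def forward(self, x):
        z = self.E(x)
        return self.F(z), self.M(z)
```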
(4) Constructing an objective function L: the objective function L comprises a main task and a mathematical auxiliary task:
L = D_1(y, F∘E(X)) + β · D_2(q, M∘E(X))
where D_1(·,·) and D_2(·,·) are metric functions that compare the deviation of the model's predictions from the true values: D_1(·,·) compares the diagnostic results and D_2(·,·) compares the physical quantities. For a classification problem the metric function may be the cross-entropy function, and for a regression problem the mean squared error. β is a hyper-parameter of the model with a default value of 1, balancing the weights of the main task and the auxiliary task in the objective function. F∘E(X) denotes the predicted diagnostic result and M∘E(X) denotes the predicted physical quantities.
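Under the choices suggested above (cross entropy for D_1 on a classification task, mean squared error for D_2, β defaulting to 1), the objective can be sketched as follows; the function name objective is illustrative.

```python
# Sketch of L = D1(y, F∘E(X)) + β·D2(q, M∘E(X)); assumes PyTorch and the
# hypothetical MultiTaskModel sketched above, whose forward pass returns
# (F(E(x)), M(E(x))).
import torch.nn as nn

bce = nn.BCELoss()   # D1: binary cross entropy for a Sigmoid diagnosis output
mse = nn.MSELoss()   # D2: mean squared error for the physical quantities


def objective(y_pred, y_true, q_pred, q_true, beta=1.0):
    """Multi-task loss combining the main diagnostic task and the auxiliary task."""
    return bce(y_pred, y_true) + beta * mse(q_pred, q_true)
```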
(5) Training the deep neural network model: using the sample training set from step (1) and the physical quantities calculated for the auxiliary task in step (2), the parameters of the neural network are continuously updated by gradient back-propagation until the objective function L converges to a specified criterion, completing the training.
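A training loop matching step (5) might look as follows; it reuses the hypothetical dataset, model and objective sketched above, and the batch size, optimizer, learning rate and convergence tolerance are assumptions the patent does not specify.

```python
# Training-loop sketch for step (5); builds on the hypothetical SmallSampleDataset,
# MultiTaskModel and objective() sketched above.
import torch
from torch.utils.data import DataLoader


def train(model, dataset, beta=1.0, lr=1e-3, max_epochs=500, tol=1e-5):
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    prev_loss = float("inf")
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for x, y, q in loader:
            y_pred, q_pred = model(x)
            loss = objective(y_pred.squeeze(-1), y, q_pred, q, beta)
            optimizer.zero_grad()
            loss.backward()                          # gradient back-propagation
            optimizer.step()
            epoch_loss += loss.item()
        if abs(prev_loss - epoch_loss) < tol:        # stand-in for "until L converges"
            break
        prev_loss = epoch_loss
    return model
```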
After training is completed, the parameters of each sub-network are saved. The signal encoding network E and the diagnostic analysis network F are then connected to form the diagnostic model; that is, for a new sample X_new, the method of the invention gives the predicted diagnosis ŷ_new = F∘E(X_new).
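Connecting E and F for inference on a new sample can be sketched as below, reusing the hypothetical MultiTaskModel from the earlier sketch.

```python
# Inference sketch: the trained encoder E and diagnostic head F are chained into the
# deployed diagnostic model, i.e. y_new = F(E(X_new)).
import torch


@torch.no_grad()
def diagnose(model, x_new):
    z = model.E(x_new)    # hidden-space representation of the new sample
    return model.F(z)     # predicted diagnosis
```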
Preferably, the signal encoding network E is a time-series neural network or a convolutional neural network; the diagnosis network F adopts a plurality of layers of fully-connected neural networks; the mathematical decoding network M adopts a fully-connected neural network with a plurality of layers.
To facilitate an understanding of the method of the present invention, two preferred embodiments of the present invention will now be described, and the embodiments will be further described with reference to fig. 2 and 3. The method of the present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the following description, specific details of some embodiments of the method of the invention are set forth. However, it will be apparent to those skilled in the art that various changes, rearrangements, and substitutions can be made without departing from the scope of the invention.
Example 1 a cerebrovascular disease diagnostic model based on carotid artery ultrasound measurement of blood flow velocity was trained:
the diagnosis tasks of the diagnosis model of the embodiment are as follows: the Doppler ultrasonic blood flow velocity waveform obtained by measuring the carotid artery can distinguish healthy people from cerebral infarction patients. We collected average blood flow velocity waveforms of 30 healthy persons and 30 cerebral infarction patients, of which 20 healthy persons and 20 patients were data as a diagnostic model training data set. The remaining 10 healthy and 10 patient data were used as a test set to evaluate the performance of the diagnostic model trained by the training method of the present invention. The main task of the neural network training model in this embodiment is to give an average blood flow velocity waveform in one cardiac cycle as an input, and output a diagnosis result: health/cerebral infarction. The mathematical equation selected in this example is a reduced-order fluid dynamics equation for intracranial vascular circulation, which is described for cerebral vessels using the three-element Westerhof model:
dV/dt = q_in - q_out
p_e = p + Z_c · q_in
q_out = (p - p_v) / R
where V is the volume of the cerebral arteries, p_e is the carotid blood pressure, p is the cerebral blood pressure, p_v is the venous pressure, R is the peripheral cerebrovascular resistance, Z_c is the characteristic impedance of the cerebral vessels, q_in is the blood flow entering the cerebral vessels, and q_out is the blood flow leaving the cerebral arteries into the veins. The parameters in the equations can be obtained by the analysis method proposed in Chinese invention patent CN 1044899A. Since the literature reports that cerebral infarction causes abnormal changes in peripheral vascular resistance and impedance, the peripheral cerebrovascular resistance R and the cerebrovascular characteristic impedance Z_c are selected as the physical quantities in this embodiment. For each training sample X_i, the corresponding physical quantities R_i and Z_c,i are obtained by solving the equations, yielding the physical-quantity label set for the samples. Given the form of the measurement signal and the diagnostic task, the relevant modules in this embodiment are implemented as follows (see fig. 2):
a) Signal encoding network E: the input of this network is the time series of blood flow velocities over one cardiac cycle, sampled at 128 points per cardiac cycle and denoted X = [v_1, v_2, …, v_128], where v_i is the blood flow velocity at the i-th instant. The network is a 5-layer fully connected neural network in which the input and output layers have 128 and 8 neurons respectively, the intermediate layers have 64, 32 and 16 neurons in turn, and the activation function is the commonly used ReLU. The output of the network is an 8-dimensional hidden-space representation z = [z_1, z_2, …, z_8].
b) Diagnostic analysis network F: the input of this network is the 8-dimensional hidden-space representation z. The network is a 4-layer fully connected neural network in which the input and output layers have 8 and 1 neurons respectively, the two intermediate layers have 64 neurons each, and the activation function is the commonly used Sigmoid. The output is the diagnostic result, healthy/cerebral infarction, which can also be encoded as "1" and "0" respectively for computer processing.
c) Mathematical decoding network M: the input of this network is the hidden-space representation z of the signal, and the output consists of the peripheral cerebrovascular resistance R and the cerebrovascular characteristic impedance Z_c solved from the blood flow velocity waveform. The network is a 4-layer fully connected neural network in which the input and output layers have 8 and 2 neurons respectively, the two intermediate layers have 64 neurons each, and the activation function is the commonly used Sigmoid; the outputs are the corresponding R and Z_c. A code sketch of these three networks is given below.
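The three networks of this embodiment, as described in items a) to c), can be sketched as follows, assuming PyTorch; the placement of the final activations is an assumption, since the text names the activation functions but not their exact positions.

```python
# Sketch of the Example 1 architectures: a 5-layer fully connected encoder
# 128-64-32-16-8 with ReLU, and 4-layer heads 8-64-64-1 (diagnosis) and
# 8-64-64-2 (R and Z_c) with Sigmoid activations.
import torch.nn as nn

E = nn.Sequential(                  # blood-flow-velocity waveform -> 8-D hidden space z
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 8),
)
F = nn.Sequential(                  # z -> probability of healthy vs cerebral infarction
    nn.Linear(8, 64), nn.Sigmoid(),
    nn.Linear(64, 64), nn.Sigmoid(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
M = nn.Sequential(                  # z -> [R, Z_c] recovered from the hidden space
    nn.Linear(8, 64), nn.Sigmoid(),
    nn.Linear(64, 64), nn.Sigmoid(),
    nn.Linear(64, 2), nn.Sigmoid(),
)
```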
The objective function in this embodiment is:
L = BCE(y, F∘E(X)) + MSE([R, Z_c], M∘E(X))
where BCE is the common binary cross entropy function and MSE is the mean squared error function. After the model is trained with the 40 training cases as input and output, the signal encoding network E and the diagnostic analysis network F are connected to form the diagnostic model; on the test-set data, the classification accuracy of the model reaches 80%.
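A test-set evaluation matching the reported accuracy figure could be sketched as follows; the variable names test_X and test_y and the 0.5 decision threshold are assumptions.

```python
# Accuracy sketch for the 20 held-out waveforms; assumes the trained E and F from
# this example, and that "healthy" is encoded as 1 and "cerebral infarction" as 0
# as suggested in item b).
import torch


@torch.no_grad()
def test_accuracy(E, F, test_X, test_y, threshold=0.5):
    probs = F(E(test_X)).squeeze(-1)          # predicted probability per test waveform
    preds = (probs > threshold).float()       # hard 0/1 decision
    return (preds == test_y).float().mean().item()
```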
Example 2 a magnetic resonance image-based atherosclerotic plaque stability assessment model was trained:
the diagnosis tasks of the stability assessment model in the embodiment are as follows: tomographic scanning of atherosclerotic plaques by magnetic resonance imaging distinguishes plaques causing stroke symptoms from plaques not causing bisection symptoms. Thus 15 patients with stroke due to unilateral carotid atherosclerosis were collected and therefore contained 30 magnetic resonance images of atherosclerotic plaque (15 on the left and right) with 15 each of symptomatic (stroke causing) and asymptomatic plaques. The image data of 10 symptomatic plaque and 10 asymptomatic plaque are used as the training data set of the stability assessment model, and the rest 5 symptomatic plaque and 5 asymptomatic plaque are used as the test set for evaluating the performance of the stability assessment model trained by the training method of the invention. In this embodiment, the main task of the neural network training model is to determine the plaque stability by taking the magnetic resonance image of the atherosclerotic plaque as an input: symptomatic/asymptomatic.
The mathematical equations chosen in this example are the biomechanical equations describing the mechanics of the vessel wall, namely the equilibrium of the structural stress,

∇ · σ = 0,

together with the constitutive relation of the vessel wall material, which links the stress to the deformation. Here σ is the structural stress tensor in the vessel wall, F is the deformation gradient tensor, and the stiffness is derived from the constitutive relation of the vessel wall material. In this example, a Neo-Hookean hyperelastic material model is chosen to describe the vessel structure. Given the contour of the vessel wall, the distribution of structural stress within the wall is solved by the finite element method; in particular, the maximum equivalent stress σ_m over the entire wall is obtained. According to the literature, the stability of an atherosclerotic plaque is related to σ_m, so σ_m is selected as the physical quantity in this embodiment. For each training sample X_i, the corresponding physical quantity σ_m,i is obtained by solving the equations, yielding the physical-quantity label set for the samples. Given the form of the measurement signal and the diagnostic task, the relevant modules in this embodiment are implemented as follows (see fig. 3):
a) Signal encoding network E: the input of this network is a cross-sectional label map I of the atherosclerotic plaque annotated by a radiologist, in which the vessel wall has value 1 and the lumen and the region outside the wall have value 0; the image is 256 x 256 pixels with a pixel resolution of 1 mm. The network is a 6-layer deep convolutional network with 3x3 convolution kernels, each convolution layer followed by a ReLU activation and a 2x2 pooling layer; the last layer is a fully connected layer with 8 output neurons. A code sketch of this encoder follows item c) below.
b) Diagnostic analysis network F: the input of this network is the 8-dimensional hidden-space representation z. The network is a 4-layer fully connected neural network in which the input and output layers have 8 and 1 neurons respectively, the two intermediate layers have 64 neurons each, and the activation function is the commonly used Sigmoid. The output is the probability y that the plaque causes symptoms.
c) Mathematical decoding network M: the input of this network is the hidden-space representation z of the signal, and the output is the maximum structural stress σ_m obtained from the structural analysis of the plaque. The network is a 4-layer fully connected neural network in which the input and output layers have 8 and 1 neurons respectively, the two intermediate layers have 64 neurons each, and the activation function is the commonly used Sigmoid; the output is the corresponding maximum structural stress σ_m.
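The convolutional encoder of item a) can be sketched as follows; the channel counts per convolution layer are not given in the text and are illustrative assumptions (the diagnostic and decoding heads follow the same fully connected pattern as in Example 1).

```python
# Sketch of the Example 2 encoder: six 3x3 convolution layers, each followed by
# ReLU and 2x2 max pooling, then a fully connected layer mapping to an 8-D hidden space.
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

E = nn.Sequential(
    conv_block(1, 8),            # 256x256 binary wall mask -> 128x128
    conv_block(8, 16),           # -> 64x64
    conv_block(16, 32),          # -> 32x32
    conv_block(32, 32),          # -> 16x16
    conv_block(32, 64),          # -> 8x8
    conv_block(64, 64),          # -> 4x4
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 8),    # fully connected output layer: 8 hidden features
)
```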
The objective function in this embodiment is:
L = BCE(y, F∘E(X)) + MSE(σ_m, M∘E(X))
where BCE is the common binary cross entropy function and MSE is the mean squared error function. After the model is trained with the 30 training cases as input and output, the signal encoding network E and the diagnostic analysis network F are connected to form the diagnostic model; on the test-set data, the classification accuracy of the model reaches 70%.

Claims (5)

1. A neural network model training method suitable for small samples is characterized in that physical quantities related to medical diagnosis tasks are obtained by selecting a proper mathematical equation, and extra guidance and regularization are provided in a multi-task learning mode to train the neural network model, and the method specifically comprises the following steps:
(1) Collecting training samples: the training sample set comprises N medical-signal samples X = {X_1, X_2, …, X_N} and the corresponding diagnostic results y = {y_1, y_2, …, y_N}.
(2) Selecting a mathematical equation and obtaining the corresponding physical quantities: a mathematical equation describing the physiological process is selected, together with m physical quantities q = [q^(1), q^(2), …, q^(k), …, q^(m)] associated with the diagnosis in step (1), where q^(k) is the k-th physical quantity. For each training sample X_i, the corresponding physical quantities q_i are obtained by solving the mathematical equation, yielding the physical-quantity label set for the samples.
(3) Constructing a neural network training model: the neural network training model comprises a signal coding network E for coding medical signals into hidden spatial features, a diagnosis analysis network F for decoding the hidden spatial features into diagnosis results, and a mathematical decoding network M for decoding the hidden spatial features into physical quantities;
(4) constructing an objective function L: the objective function L comprises a main task and a mathematical auxiliary task:
L = D_1(y, F∘E(X)) + β · D_2(q, M∘E(X))
where D_1(·,·) and D_2(·,·) are metric functions that compare the deviation of the model's predictions from the true values: D_1(·,·) compares the diagnostic results and D_2(·,·) compares the physical quantities, and β is a hyper-parameter of the model.
(5) Training the deep neural network model: using the sample training set from step (1) and the physical quantities calculated in step (2), the parameters of the neural network training model are continuously updated by gradient back-propagation until the objective function L converges, completing the training.
2. The neural network model training method of claim 1, wherein the medical signal is either a measurement obtained directly by a medical device or a result obtained by further processing the measurement signal.
3. The neural network model training method according to claim 1, wherein the signal encoding network E is a time-series neural network or a convolutional neural network.
4. The neural network model training method of claim 1, wherein the diagnostic network F employs a number of layers of fully-connected neural networks.
5. The neural network model training method of claim 1, wherein the mathematical decoding network M employs a fully-connected neural network with several layers.
CN202010397594.XA 2020-05-12 2020-05-12 Neural network model training method suitable for small samples Active CN111584072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010397594.XA CN111584072B (en) 2020-05-12 2020-05-12 Neural network model training method suitable for small samples

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010397594.XA CN111584072B (en) 2020-05-12 2020-05-12 Neural network model training method suitable for small samples

Publications (2)

Publication Number Publication Date
CN111584072A true CN111584072A (en) 2020-08-25
CN111584072B CN111584072B (en) 2023-09-19

Family

ID=72112684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010397594.XA Active CN111584072B (en) 2020-05-12 2020-05-12 Neural network model training method suitable for small samples

Country Status (1)

Country Link
CN (1) CN111584072B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180158552A1 (en) * 2016-12-01 2018-06-07 University Of Southern California Interpretable deep learning framework for mining and predictive modeling of health care data
CN109119156A (en) * 2018-07-09 2019-01-01 河南艾玛医疗科技有限公司 A kind of medical diagnosis system based on BP neural network
CN109300121A (en) * 2018-09-13 2019-02-01 华南理工大学 A kind of construction method of cardiovascular disease diagnosis model, system and the diagnostic model

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112652165A (en) * 2020-12-11 2021-04-13 北京百度网讯科技有限公司 Model training and road condition prediction method, device, equipment, medium and program product
CN112652165B (en) * 2020-12-11 2022-05-31 北京百度网讯科技有限公司 Model training and road condition prediction method, device, equipment, medium and program product
CN113436187A (en) * 2021-07-23 2021-09-24 沈阳东软智能医疗科技研究院有限公司 Processing method, device, medium and electronic equipment of brain CT angiography image
WO2023202231A1 (en) * 2022-04-20 2023-10-26 北京华睿博视医学影像技术有限公司 Image reconstruction method and apparatus, and electronic device and storage medium

Also Published As

Publication number Publication date
CN111584072B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
Acharya et al. Deep neural network for respiratory sound classification in wearable devices enabled by patient specific model tuning
CN111584072B (en) Neural network model training method suitable for small samples
CN110619322A (en) Multi-lead electrocardio abnormal signal identification method and system based on multi-flow convolution cyclic neural network
CN109119156A (en) A kind of medical diagnosis system based on BP neural network
CN109512423A (en) A kind of myocardial ischemia Risk Stratification Methods based on determining study and deep learning
CN113143230B (en) Peripheral arterial blood pressure waveform reconstruction system
CN110558960A (en) continuous blood pressure non-invasive monitoring method based on PTT and MIV-GA-SVR
CN111370120B (en) Heart diastole dysfunction detection method based on heart sound signals
Parpievna et al. Application of Artificial Neural Networks for Analysis of Pathologies in Blood Vessels
CN114820573A (en) Atrial fibrillation auxiliary analysis method based on semi-supervised learning
CN113509186B (en) ECG classification system and method based on deep convolutional neural network
CN117017310A (en) Acoustic-electric dual-mode congenital heart disease prediction device based on knowledge distillation
CN106355578A (en) Ultrasonic carotid artery far end recognizing device and method based on convolutional neural network
Wen et al. Fine-Grained and Multiple Classification for Alzheimer's Disease With Wavelet Convolution Unit Network
CN116369877A (en) Noninvasive blood pressure estimation method based on photoelectric volume pulse wave
JP2023104885A (en) Electrocardiographic heart rate multi-type prediction method based on graph convolution
Shao et al. Predicting cardiovascular and cerebrovascular events based on instantaneous high-order singular entropy and deep belief network
Joloudari et al. A survey of applications of artificial intelligence for myocardial infarction disease diagnosis
Edupuganti et al. Classification of Heart Diseases using Fusion Based Learning Approach
Martinez et al. Strategic attention learning for modality translation
Ebrahimkhani et al. A deep learning approach to using wearable Seismocardiography (SCG) for diagnosing aortic valve stenosis and predicting aortic hemodynamics obtained by 4D flow MRI
Bahloul et al. Spectrogram Image-based Machine Learning Model for Carotid-to-Femoral Pulse Wave Velocity Estimation Using PPG Signal
CN117426754B (en) PNN-LVQ-based feature weight self-adaptive pulse wave classification method
CN117393153B (en) Shock real-time risk early warning and monitoring method and system based on medical internet of things time sequence data and deep learning algorithm
Markuleva et al. The neuronet method for pulse wave analysis by hydro-cuff technology at cardiovascular system diagnosis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant