CN116934675A - Plateau pulmonary edema prediction method based on multi-feature fusion and contrast learning - Google Patents
Plateau pulmonary edema prediction method based on multi-feature fusion and contrast learning
- Publication number
- CN116934675A (application CN202310433513.0A)
- Authority
- CN
- China
- Prior art keywords
- samples
- features
- feature
- learning
- pulmonary edema
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/765—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Molecular Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Databases & Information Systems (AREA)
- Quality & Reliability (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Probability & Statistics with Applications (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a predictive diagnosis method for altitude pulmonary edema, comprising the following steps: S1, a full-automatic focus segmentation algorithm based on Dense-U-Net; S2, constructing a two-branch feature learning network to learn focus and whole-lung features; S3, constructing sample pairs; S4, contrast feature learning; S5, plateau pulmonary edema prediction. The convolutional neural network model based on multi-feature fusion and contrast learning analyzes CT images of plateau pulmonary edema fully automatically, and thereby realizes prediction of plateau pulmonary edema. Compared with classification by a single convolutional neural network, the method uses multi-feature fusion and feature contrast learning: during training, two samples are randomly selected to construct a sample pair, and a weight-sharing network extracts the whole-lung features and focus features of both samples; the distance between the features of the two samples is measured in a feature contrast learning manner, so that if the two samples belong to the same class, their whole-lung features and focus features are close, and if they belong to different classes, the corresponding features are far apart. By combining the feature distance with the cross entropy loss function, the deep features of same-class samples become similar while those of different-class samples differ markedly, the features are optimized and fused, and the classification precision is improved.
Description
Technical Field
The invention relates to a medical technology, in particular to a plateau pulmonary edema prediction method based on multi-feature fusion and contrast learning.
Background
In high-altitude areas, the thin air supplies the body with insufficient oxygen, readily causing altitude sickness, including altitude pulmonary edema. Altitude pulmonary edema is a serious pulmonary disease with high morbidity and mortality. Traditional diagnosis of altitude pulmonary edema typically relies on manual observation and analysis of lung CT images, which is laborious and of limited accuracy. In recent years, with the development of deep learning, automatic analysis and diagnosis of lung images with convolutional neural networks has become a new research direction.
However, automatic analysis and prediction of lung images with a conventional convolutional neural network still has limitations, such as restricted feature extraction and large differences between cases. Therefore, a predictive diagnosis method for altitude pulmonary edema based on multi-feature fusion and contrast learning is proposed. The method adopts a Dense-U-Net model for full-automatic focus segmentation and builds a two-branch feature learning network to learn focus and whole-lung features. During training, two samples are randomly selected to construct a sample pair, whole-lung features and focus features are extracted from both samples by a weight-sharing network, and the distance between the features of the two samples is measured in a contrast learning manner. By combining the feature distance with the cross entropy loss function, the method makes the deep features of same-class samples similar and the deep features of different-class samples clearly different, optimizing the features and improving the classification precision. Compared with the traditional convolutional neural network method, the method achieves better diagnosis precision and a higher degree of automation.
Disclosure of Invention
The invention discloses a predictive diagnosis method for altitude pulmonary edema based on multi-feature fusion and contrast learning. The method uses the convolutional neural network model to carry out full-automatic analysis and classification on the CT image of the altitude pulmonary edema, and can realize the prediction of the altitude pulmonary edema.
Specifically, the method comprises the following steps:
s1, a full-automatic focus segmentation algorithm based on Dense-U-Net is used for automatically segmenting focus areas in CT images of plateau pulmonary edema;
s2, constructing a two-branch feature learning network for learning focus features and whole-lung features respectively, with classification accuracy improved through multi-feature fusion;
s3, constructing sample pairs: two samples are randomly selected to form a sample pair for training the network in feature learning and contrast learning;
s4, contrast feature learning: whole-lung features and focus features are extracted from the two samples of a sample pair, and the distance between the corresponding features is measured; if the two samples carry the same label, their whole-lung features and focus features are respectively close; if they carry different labels, the corresponding features are respectively far apart;
s5, plateau pulmonary edema prediction
After the model is trained, one branch of the twin network in step S4 is taken out to realize prediction of unknown plateau pulmonary edema CT images.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a schematic diagram of a full-automatic lung segmentation algorithm based on Dense-U-Net;
FIG. 2 is a flow chart of multi-feature fusion and contrast feature learning.
Detailed Description
The present invention will be further described with reference to the accompanying drawings, and it should be noted that, while the present embodiment provides a detailed implementation and a specific operation process on the premise of the present technical solution, the protection scope of the present invention is not limited to the present embodiment.
The invention comprises the following steps:
s1, a full-automatic focus segmentation algorithm based on Dense-U-Net. Focus segmentation is carried out on the CT image of the plateau pulmonary edema by using a Dense-U-Net algorithm, and focus areas in the image are automatically segmented.
S2, constructing a two-branch feature learning network, learning focus and whole-lung features, and designing a feature fusion algorithm. The whole-lung features and focus features output by Dense-U-Net are input into the two branches of the feature learning network respectively, and the feature representations of the two branches are learned. When the features are fused, in order to remove redundant information and enhance the key parts, an attention mechanism is used to weight the importance of each feature.
Denote the whole-lung feature as f₁ and the focus feature as f₂. The weighted results of the two features after the attention mechanism can be expressed as:
f′₁ = f₁ · softmax(Conv(ReLU(BN(Conv(f₁)))))
f′₂ = f₂ · softmax(Conv(ReLU(BN(Conv(f₂)))))
where Conv denotes a convolution operation, BN a batch normalization operation, ReLU the activation function, and softmax the softmax function that normalizes the attention coefficients. The attention coefficient is thus softmax(Conv(ReLU(BN(Conv(f))))), where f may be f₁ or f₂.
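As a framework-free sketch of this weighting step (the names `softmax` and `attention_weight` are illustrative, and `score_fn` stands in for the Conv-BN-ReLU-Conv scoring stack, which is abstracted here as a per-channel scoring function):

```python
import math

def softmax(scores):
    """Normalize a list of scores into attention coefficients summing to 1."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weight(feature_channels, score_fn):
    """Scale each channel of a feature by its softmax attention coefficient,
    mirroring f' = f * softmax(score(f)) above."""
    coeffs = softmax([score_fn(c) for c in feature_channels])
    return [c * a for c, a in zip(feature_channels, coeffs)]

# Example: identity scoring -- larger activations receive larger weights
f1 = [1.0, 2.0, 3.0]
f1_weighted = attention_weight(f1, lambda c: c)
```

In the patent's setting the scoring function is learned, so the attention coefficients adapt to which channels carry diagnostically relevant information.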
The weighted features are spliced to obtain the fused feature f_fuse:

f_fuse = α·f′₁ ⊕ (1 − α)·f′₂ ∈ R^(h×w×c)

where ⊕ denotes the tensor splicing (concatenation) operation, α is the weight of the whole-lung feature, and 1 − α is the weight of the focus feature; h, w and c denote the height, width and number of channels of the spliced feature map. The key part after feature fusion can be controlled by adjusting the splicing proportion.
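A minimal sketch of this fusion step (the function name and the flat-list representation of the feature maps are illustrative simplifications of the tensor operation):

```python
def fuse_features(f1_weighted, f2_weighted, alpha=0.5):
    """Concatenate attention-weighted whole-lung and focus features.

    alpha scales the whole-lung part and (1 - alpha) the focus part,
    mirroring the splicing proportion described above.
    """
    lung_part = [alpha * v for v in f1_weighted]
    lesion_part = [(1.0 - alpha) * v for v in f2_weighted]
    return lung_part + lesion_part   # splicing along the channel dimension

# alpha = 0.25 emphasizes the focus branch over the whole-lung branch
fused = fuse_features([1.0, 2.0], [4.0, 6.0], alpha=0.25)
```

The resulting vector has the combined channel count of both branches, so the downstream classifier sees both views of the scan at once.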
S3, constructing a sample pair. In the training process, two samples are randomly selected to construct a sample pair so as to perform feature contrast learning.
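The pair-construction step can be sketched as follows; `build_pair` is a hypothetical helper, and the pair label convention (1 for same class, -1 for different) follows the claims:

```python
import random

def build_pair(dataset, rng=random):
    """Randomly draw two distinct samples and label the pair.

    dataset: list of (features, class_label) tuples.
    Returns ((a, b), pair_label), where pair_label is 1 when the two
    class labels match and -1 otherwise.
    """
    a, b = rng.sample(dataset, 2)
    pair_label = 1 if a[1] == b[1] else -1
    return (a, b), pair_label

data = [([0.1], 0), ([0.2], 0), ([0.9], 1)]
pair, label = build_pair(data, random.Random(0))
```

Random pairing lets every training step compare two cases directly, which is what drives the contrast learning in step S4.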
S4, contrast feature learning. Whole-lung features and focus features are extracted from the two samples in a sample pair by a weight-sharing network, and the distance between the corresponding features is measured in a contrast learning manner: if the two samples carry the same label, their whole-lung features and focus features are respectively close; if they carry different labels, the corresponding features are respectively far apart. The model is optimized through the contrast feature loss function, which can be expressed as:

L(Y, A, B) = Y · D_w(A, B)² + (1 − Y) · max(m − D_w(A, B), 0)²

where Y is a binary variable indicating whether the categories of the two samples are the same (Y = 1 when they are), w is a weight parameter of the distance metric, A and B are the feature vectors of the two samples, D_w is the distance metric function, and m is a margin parameter.
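A sketch of a standard contrastive loss of this form, with a Euclidean distance standing in for the learned metric D_w (both function names are illustrative):

```python
def contrastive_loss(y, dist, margin=1.0):
    """Contrastive loss for one sample pair.

    y: 1 if the two samples share a label, 0 otherwise
    dist: distance D(A, B) between the pair's feature vectors
    margin: minimum separation m enforced between unlike pairs
    """
    if y == 1:
        return dist ** 2                       # pull same-class features together
    return max(margin - dist, 0.0) ** 2        # push unlike features past the margin

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

Unlike pairs that are already farther apart than the margin contribute zero loss, so the gradient concentrates on pairs the model still confuses.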
S5, predicting plateau pulmonary edema. Through the feature fusion algorithm, redundant features are removed and important features are screened out, improving the correlation between the features and the final task. By combining the feature distance with the cross entropy loss function, the deep features of same-class samples become similar while those of different-class samples differ markedly, the features are optimized, and the classification precision is improved. Predictive diagnosis of plateau pulmonary edema is made by outputting a class prediction for each sample.
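The combined training objective described above can be sketched as follows; the trade-off weight `lam` is an assumption, since the patent does not specify how the two loss terms are balanced:

```python
import math

def cross_entropy(probs, true_class):
    """Cross-entropy loss for one sample's predicted class probabilities."""
    return -math.log(probs[true_class])

def total_loss(ce_a, ce_b, contrast, lam=0.5):
    """Combine the two samples' cross-entropy terms with the contrastive
    feature-distance term; lam is an assumed trade-off weight."""
    return ce_a + ce_b + lam * contrast
```

The cross-entropy terms supervise the per-sample class predictions, while the contrastive term shapes the feature space so that same-class samples cluster together.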
After the model is trained, one branch of the twin network in step S4 is taken out as the plateau pulmonary edema prediction model. To predict an unknown plateau pulmonary edema CT image, its whole-lung image and focus image (obtained in step S1) are input into the model to obtain the prediction result for the sample.
Finally, it should be noted that the above embodiments are only for illustrating, not limiting, the technical scheme of the invention. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that the technical scheme may be modified or equivalently replaced without departing from the spirit and scope of the technical scheme of the invention.
Claims (1)
1. A plateau pulmonary edema prediction method based on multi-feature fusion and contrast learning, characterized by comprising the following steps:
s1, constructing a full-automatic focus segmentation network based on Dense-U-Net: a U-shaped full convolution network (Dense-U-Net) based on DenseNet extracts features by downsampling, upsamples to generate segmentation results, and fully automatically segments focal regions from CT images.
S2, constructing a feature learning network: extracting the characteristics of a lung CT image and a focus image by using a three-dimensional DenseNet121 as a characteristic extraction network;
s3, sample pair construction: in a training set, randomly extracting two samples each time to construct a sample pair, if the two samples have the same label, marking the label of the sample pair as 1, and if the two samples have different labels, marking the label of the sample pair as-1;
S4, contrast feature learning: two twin-network branches sharing weights are constructed based on the feature extraction network of step S2 and are used to extract the various features of the two samples of the sample pair of step S3; a contrast feature loss function is then proposed that measures the distances between the various features of the two samples, so that feature distances between same-class samples are small and feature distances between different-class samples are large; combined with the cross entropy loss function, contrast feature learning of the model is realized.
S5, predicting plateau pulmonary edema: after the model is trained, one branch of the twin network in step S4 is taken out to realize prediction of unknown plateau pulmonary edema CT images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310433513.0A CN116934675A (en) | 2023-04-21 | 2023-04-21 | Plateau pulmonary edema prediction method based on multi-feature fusion and contrast learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310433513.0A CN116934675A (en) | 2023-04-21 | 2023-04-21 | Plateau pulmonary edema prediction method based on multi-feature fusion and contrast learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116934675A true CN116934675A (en) | 2023-10-24 |
Family
ID=88376444
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310433513.0A Pending CN116934675A (en) | 2023-04-21 | 2023-04-21 | Plateau pulmonary edema prediction method based on multi-feature fusion and contrast learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116934675A (en) |
- 2023-04-21: application CN202310433513.0A filed (CN); publication CN116934675A, status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |