CN114820491A - Multi-modal stroke lesion segmentation method and system based on small sample learning - Google Patents

Multi-modal stroke lesion segmentation method and system based on small sample learning

Info

Publication number
CN114820491A
Authority
CN
China
Prior art keywords
image
modal
sample
segmentation
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210403148.4A
Other languages
Chinese (zh)
Inventor
马祥园
张会凌
陈盈嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shantou University
Original Assignee
Shantou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou University filed Critical Shantou University
Priority to CN202210403148.4A priority Critical patent/CN114820491A/en
Publication of CN114820491A publication Critical patent/CN114820491A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-modal stroke lesion segmentation method based on small sample learning, which comprises the following steps: acquiring an original brain training sample, wherein the original brain training sample comprises multi-modal CT medical images and annotated images, including CT, CBF, CBV, MTT, TMax and an ischemic stroke lesion label; preprocessing the multi-modal medical images and expanding the small-sample image dataset through image augmentation such as image deformation, image scaling and generative adversarial generation; registering the augmented multi-modal images, with the CT image as the reference image, and performing pixel-level fusion on the registered multi-modal images; and passing the fused multi-modal image data into a segmentation network constructed based on a Transformer for image segmentation. The invention also discloses a system using the method. The invention not only mitigates the impact of insufficient data samples on medical image segmentation tasks, but also obtains more lesion image information through multi-modal image fusion, improving the accuracy of image segmentation.

Description

Multi-modal stroke lesion segmentation method and system based on small sample learning
Technical Field
The invention relates to the field of CT examination of the human brain, and in particular to a method and system for segmenting ischemic stroke lesions from multi-modal CT images, based on a Transformer deep learning network, small sample learning and multi-modal analysis.
Background
Ischemic stroke is one of the most common causes of death and disability worldwide. It is caused by occlusion of cerebral arteries, resulting in hypoxia and ultimately death of the affected brain tissue. Brain imaging plays a crucial role in the diagnosis and treatment decisions of ischemic stroke. However, the detection and assessment of stroke lesions requires considerable radiologist time. The following reviews recent results in semantic segmentation of medical images.
In order to improve the diagnosis speed and accuracy of ischemic stroke, a reliable automatic lesion segmentation method is urgently needed. In recent years, convolutional neural networks have made great progress in medical image analysis, and existing medical segmentation methods mainly rely on fully convolutional neural networks with U-shaped structures. The typical U-shaped network, U-Net, consists of a symmetric encoder-decoder with skip connections. U-Net has enjoyed great success in various medical imaging tasks, and following this technical route, 3D U-Net, Res-UNet and others have been extended for image segmentation in various medical imaging modalities.
Most current work uses neural network architectures that are variants or improvements of FCN or U-Net. For example, Lucas et al. propose an extension of U-Net, a multiscale U-Net using additional skip connections, which improves the propagation of information across resolutions; Islam and Ren et al. propose PixelNet, which is based on VGG-16 and alleviates the problem of class imbalance; Wang et al. propose an attention-based approach to merge DWI images, which maps the original spatial CTA image with the CTP image to obtain better pseudo-DWI synthesis quality. These FCNN-based successes in lesion segmentation demonstrate that CNNs have strong learning and discrimination ability, but they still cannot fully meet the strict requirements of medical applications on segmentation accuracy. Due to the inherent locality of the convolution operation, explicit global and long-range semantic information interaction is difficult to learn with CNN-based methods. Some studies have attempted to solve this problem using deeper convolutional layers, self-attention mechanisms and image pyramids, but these approaches still have limitations when modeling long-range dependencies.
Due to the limitations of CNN methods, more work has shifted from pure CNNs to combinations of Transformers and CNNs. For example, UTNet adds a self-attention mechanism to a convolutional neural network and, through this hybrid architecture, exploits the inductive bias of convolutions to avoid large-scale pre-training, ultimately improving the accuracy of medical image segmentation; MCTrans inserts a Transformer into a U-Net-like network by introducing a new learnable proxy embedding, and evaluates performance on segmentation datasets. Both methods are based on a CNN backbone with an added self-attention mechanism; they improve medical image prediction, but the degree of improvement is not large.
Since adding a Transformer to a CNN backbone brings only modest improvement, Hu et al. propose a U-Net-shaped medical image segmentation network based on a pure Transformer, in which the encoder, bottleneck and decoder are all built from Swin Transformer blocks; its performance is superior to TransUNet, UTNet and others. The present invention therefore improves on this network architecture for ischemic stroke.
A CT image is a cross-sectional image of human tissue; its density resolution is superior to that of an X-ray image, and organ structures can be clearly displayed. To observe the condition of lesion tissue, perfusion imaging derives modality parameters such as cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT) and time to peak (TMax) from changes in the concentration of an iodine contrast agent in brain tissue, and the lesion is assessed through these modalities. Each modality carries different characteristic information, and the amount of information about the ischemic stroke lesion differs between them. Although CT imaging is widely applicable, can be used for every system and part of the body, and can detect most lesions, it has certain limitations in accurately displaying lesion regions and their boundaries, which appear unclear.
Disclosure of Invention
In view of the above, it is necessary to provide a method and a system for multi-modal stroke lesion segmentation based on small sample learning. Specifically, the invention provides a small-sample-learning-based method for segmenting ischemic stroke lesions in multi-modal CT images, which addresses the scarcity of original data samples through data augmentation by image deformation, scaling, generative adversarial generation and the like; then, through registration and multi-modal fusion, the resulting image carries more detailed information, facilitating further analysis and processing and exposing potential targets; finally, the image is segmented by a segmentation network constructed based on a Transformer, improving the accuracy of model prediction.
In a first aspect, the present invention provides a method for segmenting ischemic stroke lesions in multi-modal CT images based on small sample learning, the method comprising:
S1: acquiring an original brain training sample, wherein the original brain training sample comprises multi-modal CT medical images and annotated images, including CT, CBF, CBV, MTT, TMax and an ischemic stroke lesion label;
S2: preprocessing the multi-modal medical images, performing image augmentation through image deformation, image scaling, generative adversarial generation and the like, and expanding the small-sample image dataset;
S3: registering the augmented multi-modal images, with the CT image as the reference image, and performing pixel-level fusion on the registered multi-modal images;
S4: passing the fused multi-modal image data into a segmentation network constructed based on a Transformer for image segmentation.
Wherein the image augmentation in S2 includes: image deformation, Gaussian filtering for denoising, image scaling, and generative adversarial network augmentation.
Samples containing lesions and samples without lesions are taken from the multi-modal CT medical images and input together into a generative adversarial network to obtain artificial samples resembling real data; the artificial samples are added to the real data to obtain a mixed dataset.
Step S3 includes: taking the CT image as the reference modality image and placing a number of control points in it; using a similarity criterion, finding by regression the best matching positions of the control points of the reference modality image in the other modality images; obtaining deformed to-be-registered images aligned with the reference modality image; stacking and merging the deformed images with the reference image along the channel dimension; and feeding the merged multi-modal image into the Transformer-based segmentation network for training.
In the Transformer deep learning network model of S4, tokenized image patches are fed into a Transformer-based U-shaped encoder-decoder structure with skip connections for local-global semantic feature learning.
The basic unit of the Transformer deep learning network model is the Swin Transformer block. In the encoder, medical images of the different modalities are divided into non-overlapping 4 × 4 patches, changing the feature dimension to 48, and a linear embedding layer is applied to the projected features. The transformed patch tokens pass through several Swin Transformer blocks and patch merging layers to produce hierarchical feature representations and extract contextual features, and the spatial information lost in down-sampling is compensated through skip connections and multi-scale feature fusion with the encoder.
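As an illustration of the patch partition and linear embedding described above, the following is a minimal NumPy sketch (not the patented implementation: the 224 × 224 input size, the 5-channel fused input and the random stand-in projection weights are assumptions for the example; only the 4 × 4 patch size and the 48-dimensional embedding follow the description):

```python
import numpy as np

def patch_embed(image, patch=4, embed_dim=48, rng=None):
    """Split an (H, W, C) image into non-overlapping patch×patch patches and
    project each flattened patch to embed_dim (weights are random stand-ins
    for learned parameters)."""
    rng = rng or np.random.default_rng(0)
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0
    # (H/4, 4, W/4, 4, C) -> (num_patches, 4*4*C)
    patches = image.reshape(H // patch, patch, W // patch, patch, C)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    W_embed = rng.standard_normal((patch * patch * C, embed_dim)) * 0.02
    return patches @ W_embed  # (num_patches, embed_dim)

# e.g. a fused 5-channel (CT+CBF+CBV+MTT+TMax) slice of assumed size 224×224
tokens = patch_embed(np.zeros((224, 224, 5)), patch=4, embed_dim=48)
print(tokens.shape)  # (3136, 48): (224/4)² tokens of dimension 48
```

Each 4 × 4 × C patch is flattened and linearly projected, so the token count is (H/4) × (W/4) regardless of how many modality channels are fused.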
The invention also provides a system using the above method for segmenting ischemic stroke lesions in multi-modal CT images based on small sample learning, which comprises:
a sample acquisition module: for acquiring an original training sample, wherein the original training sample comprises multi-modal CT medical images and annotation information;
an information augmentation module: for enhancing the characteristic information of the multi-modal CT medical images, performing image augmentation through image deformation, image scaling, generative adversarial generation and the like, to alleviate the scarcity of original data samples;
an image registration and synthesis module: for merging multi-image information through image registration; the fusion facilitates further analysis and understanding of the images and can expose potential targets;
a training module: for training a lesion detection model on the original training samples and the synthesized training samples, adjusting the weights of each modality's information-gathering capability during training until a preset termination condition is reached, thereby completing the training of the lesion segmentation and detection model.
The embodiment of the invention has the following beneficial effects: with the image segmentation method provided by the invention, expanding the small-sample image data with a generative adversarial network alleviates the problem of insufficient data samples, and multi-modal image registration and fusion provide more lesion image information, improving the accuracy of image segmentation.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic overview of the process of the present invention;
FIG. 3 is a flow chart of a generation countermeasure network for data augmentation;
FIG. 4 is a diagram of the Transformer-based U-shaped semantic segmentation network used for segmentation.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, the method performs generative adversarial image augmentation on the multi-modal images, extracts the different feature information of the multiple modalities through multi-modal fusion, and finally feeds the fused image into a semantic segmentation network constructed with Transformers to segment the image and obtain the prediction result.
First, a data expansion operation is performed on the multi-modal images. Image data augmentation includes image deformation, Gaussian filtering for denoising, image scaling, generative adversarial augmentation and the like. For deformation and scaling, the operations are implemented with library functions such as resize and rotate.
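A minimal sketch of such deformation/scaling augmentation, using NumPy only (the nearest-neighbour `nn_resize` is a simplified stand-in for a library `resize` function, and the 0.8–1.2 zoom range and 64 × 64 sample size are illustrative assumptions):

```python
import numpy as np

def nn_resize(img, out_h, out_w):
    """Nearest-neighbour resize, a minimal stand-in for a library resize()."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def augment(img, rng):
    """Return one randomly rotated and rescaled copy of img."""
    out = np.rot90(img, k=rng.integers(0, 4))   # random 90° rotation
    scale = rng.uniform(0.8, 1.2)               # random zoom factor
    h, w = out.shape[:2]
    out = nn_resize(out, int(h * scale), int(w * scale))
    return nn_resize(out, h, w)                 # back to the original size

rng = np.random.default_rng(0)
sample = np.arange(64 * 64, dtype=float).reshape(64, 64)
augmented = [augment(sample, rng) for _ in range(4)]
print([a.shape for a in augmented])  # every copy keeps the 64×64 shape
```

In practice library implementations (e.g. OpenCV or torchvision resize/rotate with interpolation) would replace these stand-ins.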
the specific operations for generating the image augmentation of the countermeasure network as shown in fig. 3 are as follows: taking out a sample containing a focus and a sample without the focus from the multi-modal CT medical image, and inputting the samples into a generated countermeasure together to obtain an artificial sample similar to real data; and adding the artificial sample into the real data to obtain a mixed data set. The embodiment of the invention mainly comprises the following implementation steps:
step 1: acquiring a medical image data set;
step 2: the data is concentrated to distinguish a data sample containing focus information and a data sample without the focus information;
and 3, step 3: taking a data sample containing focus information and a data sample not containing the focus information as a pair of inputs, adding the inputs into a generated countermeasure network, and training to obtain an artificial sample;
and 4, step 4: and adding the obtained artificial sample into the data and mixing the artificial sample into the data set.
A GAN is an unsupervised learning method that learns by letting two neural networks play a game against each other. A GAN consists of a generator network and a discriminator network. The generator takes random samples from a latent space as input, and its output imitates the real samples in the training set as closely as possible. The discriminator's input is either a real sample or the generator's output, and its goal is to distinguish the generator's output from real samples as well as possible. The generator's goal is to fool the discriminator as much as possible.
CycleGAN is a GAN-based network model. CycleGAN is essentially two mirror-symmetric GANs forming a ring network. The two GANs share two generators and each has its own discriminator, i.e. there are two discriminators and two generators in total. CycleGAN performs well on style-transfer tasks. In this embodiment, a network model suitable for expanding medical image sample data is constructed by improving CycleGAN.
As shown in fig. 3, a lesion-free ischemic stroke sample is first input into the generator; after the generator produces a lesion sample, the generated lesion sample and a real lesion sample are input into the discriminator for discrimination, and the discriminator is trained to better distinguish real data; the generated sample is then input into a mirror generator, which outputs a reconstructed lesion-free sample.
Compared with a traditional GAN, this generative adversarial network adds the following losses: the generator's loss is computed from fake images being judged as 1 (real) by the discriminator; the discriminator's loss is the loss of fake images being judged as 0 plus the loss of real images being judged as 1.
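These two loss terms can be sketched with binary cross-entropy on hypothetical discriminator scores (the scores below are made-up numbers for illustration; real losses would be computed from network outputs during training):

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on discriminator outputs in (0, 1)."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# Hypothetical discriminator scores for a small batch of images.
d_fake = np.array([0.3, 0.4, 0.2])   # D's scores on generated (fake) lesions
d_real = np.array([0.9, 0.8, 0.95])  # D's scores on real lesion images

# Generator loss: fake images should be judged as 1 (real) by D.
g_loss = bce(d_fake, np.ones_like(d_fake))
# Discriminator loss: fakes judged as 0 plus reals judged as 1.
d_loss = bce(d_fake, np.zeros_like(d_fake)) + bce(d_real, np.ones_like(d_real))
print(g_loss > 0 and d_loss > 0)  # True
```

Minimizing d_loss sharpens the discriminator while minimizing g_loss pushes the generator to fool it, which is the adversarial game described above.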
After data augmentation yields the augmented dataset, an image fusion operation is performed. In this embodiment, using the CT image as the registration reference map, image registration and image fusion are performed to fuse the feature information of the multiple modalities, obtaining information-rich samples from the small-sample dataset.
The multi-modal fusion procedure comprises the following steps:
Step 1: taking the CT image among the multi-modal images as the reference modality image and the other modality images as images to be registered, and locating control points in the reference modality image;
Step 2: spatially transforming the other modality images and finding, with a similarity regression model, the best matching position of each control point of the reference modality image in the image to be registered;
Step 3: obtaining, through sampling, the deformed to-be-registered images aligned with the reference modality image, stacked and merged with the reference image along the channel dimension;
Step 4: performing lesion segmentation on the merged multi-modal image. After fusion, the fused multi-modal image data is passed into the network structure of fig. 4.
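The channel-dimension stacking used in the fusion steps above can be sketched as follows (a minimal NumPy illustration; the 128 × 128 grid and zero-filled arrays are placeholders, and real inputs would be the registered modality maps):

```python
import numpy as np

def fuse_modalities(ct, cbf, cbv, mtt, tmax):
    """Pixel-level fusion by stacking registered modality maps on a channel
    axis. All inputs must already be registered to the CT reference grid."""
    for m in (cbf, cbv, mtt, tmax):
        assert m.shape == ct.shape, "register all modalities to the CT image first"
    return np.stack([ct, cbf, cbv, mtt, tmax], axis=-1)

h = w = 128
mods = [np.zeros((h, w)) for _ in range(5)]  # placeholder modality maps
fused = fuse_modalities(*mods)
print(fused.shape)  # (128, 128, 5): one channel per modality
```

The fused array is what the segmentation network receives, so every pixel carries the information of all five modalities at once.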
As shown in fig. 4, by studying the role of semantic segmentation in medical images, its current state of development, and the strengths and weaknesses of existing techniques, the invention further improves the original network structure into an improved semantic segmentation method by adding a generative adversarial network to the original U-shaped Transformer network. Compared with prior networks, this network better balances global and local information, is easier to optimize, and the objects in the final segmentation have clearer shapes and boundaries and more accurate classification.
the invention is inspired by swin-unet, as shown in fig. 4, and is a U-type network consisting of an encoder, a decoder and residual connection. The basic unit of the network is a transform block.
The Transformer block here is the Swin Transformer block, which is built on shifted windows. Each Swin Transformer block consists of a LayerNorm layer, a multi-head self-attention module, a residual connection, and a two-layer MLP with GELU nonlinearity. Two consecutive blocks use a window-based multi-head self-attention module and a shifted-window-based multi-head self-attention module, respectively.
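A simplified single-window sketch of such a block (single-head attention, no shifted windows and no relative position bias, with random stand-in weights; only the LayerNorm → attention → residual → LayerNorm → GELU MLP → residual ordering follows the description):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # scaled dot-product
    return scores @ v

def transformer_block(x, params):
    """x: (tokens_in_window, dim). LN -> self-attention -> residual, then
    LN -> 2-layer GELU MLP -> residual, as in a Swin block (single-head,
    no relative position bias, for brevity)."""
    Wq, Wk, Wv, W1, W2 = params
    x = x + self_attention(layer_norm(x), Wq, Wk, Wv)
    h = layer_norm(x) @ W1
    h = 0.5 * h * (1 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))  # GELU
    return x + h @ W2

rng = np.random.default_rng(0)
dim = 48
params = [rng.standard_normal(s) * 0.02
          for s in [(dim, dim)] * 3 + [(dim, 4 * dim), (4 * dim, dim)]]
tokens = rng.standard_normal((49, dim))   # one 7×7 window of tokens
out = transformer_block(tokens, params)
print(out.shape)  # (49, 48): token count and feature dimension unchanged
```

A real Swin block applies this within each window of the feature map and alternates regular and shifted window partitions to mix information across windows.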
As in the encoder of FIG. 4, the tokenized input passes through two consecutive Swin Transformer blocks for representation learning, with the feature dimension and resolution unchanged. Then a patch merging layer reduces the number of tokens and increases the feature dimension to 2× the original. This process is repeated three times in the encoder.
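The patch merging step can be sketched as follows (random stand-in projection weights; the 56 × 56 token grid matches an assumed 224 × 224 input after 4 × 4 patch embedding):

```python
import numpy as np

def patch_merging(x, H, W, rng=None):
    """Swin-style downsampling: group each 2×2 neighbourhood of tokens,
    concatenate to 4*dim, then linearly project to 2*dim (projection
    weights are random stand-ins for learned ones)."""
    rng = rng or np.random.default_rng(0)
    dim = x.shape[-1]
    x = x.reshape(H, W, dim)
    merged = np.concatenate([x[0::2, 0::2], x[1::2, 0::2],
                             x[0::2, 1::2], x[1::2, 1::2]], axis=-1)  # (H/2, W/2, 4*dim)
    W_reduce = rng.standard_normal((4 * dim, 2 * dim)) * 0.02
    return (merged @ W_reduce).reshape(-1, 2 * dim)

tokens = np.zeros((56 * 56, 48))   # e.g. output of the 4×4 patch embedding
down = patch_merging(tokens, 56, 56)
print(down.shape)  # (784, 96): 4× fewer tokens, 2× the feature dimension
```

This is how the encoder trades spatial resolution for feature dimension at each of its three downsampling stages.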
As in the decoder of fig. 4, which mirrors the encoder, the extracted deep features are up-sampled by a patch expanding layer, which reshapes feature maps of adjacent dimensions into a higher-resolution feature map and halves the feature dimension.
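The patch expanding layer can be sketched as the inverse of patch merging (again with random stand-in weights and an assumed 28 × 28 grid of 96-dimensional decoder tokens):

```python
import numpy as np

def patch_expand(x, H, W, rng=None):
    """Inverse of patch merging: project dim -> 2*dim, then rearrange the
    doubled channels into a 2×2 spatial neighbourhood, doubling resolution
    and halving the feature dimension (random stand-in weights)."""
    rng = rng or np.random.default_rng(0)
    dim = x.shape[-1]
    W_up = rng.standard_normal((dim, 2 * dim)) * 0.02
    x = (x @ W_up).reshape(H, W, 2, 2, dim // 2)
    x = x.transpose(0, 2, 1, 3, 4).reshape(2 * H, 2 * W, dim // 2)
    return x.reshape(-1, dim // 2)

tokens = np.zeros((28 * 28, 96))   # e.g. a mid-level decoder feature map
up = patch_expand(tokens, 28, 28)
print(up.shape)  # (3136, 48): 4× more tokens, half the feature dimension
```

Stacking such layers symmetrically with the encoder's merging layers restores the original token resolution at the decoder output.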
The skip connections in fig. 4 concatenate the encoder's multi-scale features with the up-sampled features; shallow and deep features are concatenated together to reduce the information loss incurred during down-sampling.
The network structure mainly uses shifted-window Swin Transformers as the encoder to extract features, and designs a symmetric decoder of Transformer blocks with patch expanding layers for the up-sampling operation, recovering the resolution of the feature maps. This yields better segmentation accuracy.
The overall steps of the invention are as above: data augmentation yields more medical images from the small-sample medical image set; the augmented multi-modal medical images are fused to obtain multi-modal image data; and the fused image data is passed into the segmentation network to obtain more accurate segmentation information.

Claims (6)

1. A method for segmenting ischemic stroke lesions in multi-modal CT images based on small sample learning, characterized by comprising the following steps:
S1: acquiring an original brain training sample, wherein the original brain training sample comprises multi-modal CT medical images and annotated images, the multi-modal CT medical images comprising CT, CBF, CBV, MTT, TMax and an ischemic stroke lesion label;
S2: preprocessing the multi-modal medical images, performing image augmentation through image deformation, image scaling, generative adversarial generation and the like, and expanding the small-sample image dataset;
S3: registering the augmented multi-modal images, with the CT image as the reference image, and performing pixel-level fusion on the registered multi-modal images;
S4: passing the fused multi-modal image data into a segmentation network constructed based on a Transformer for image segmentation.
2. The method for segmenting ischemic stroke lesions in multi-modal CT images based on small sample learning according to claim 1, characterized in that the image augmentation in S2 comprises: image deformation, image scaling, generative adversarial generation and the like;
the generative adversarial network method comprises: taking samples containing lesions and samples without lesions from the multi-modal CT medical images and inputting them together into a generative adversarial network to obtain artificial samples resembling real data; and adding the artificial samples to the real data to obtain a mixed dataset.
3. The method for segmenting ischemic stroke lesions in multi-modal CT images based on small sample learning according to claim 1, characterized in that S3 comprises: taking the CT image as the reference modality image and placing a number of control points in it; using a similarity criterion, finding by regression the best matching positions of the control points of the reference modality image in the other modality images; obtaining deformed registered images aligned with the reference modality image; stacking and merging them with the reference image along the channel dimension; and feeding the fused multi-modal image into a segmentation network constructed based on a Transformer for training.
4. The method for segmenting ischemic stroke lesions in multi-modal CT images based on small sample learning according to claim 1, characterized in that, in the Transformer deep learning network model of S4, tokenized image patches are fed into a Transformer-based U-shaped encoder-decoder structure with skip connections for local-global semantic feature learning.
5. The method for segmenting ischemic stroke lesions in multi-modal CT images based on small sample learning according to claim 4, characterized in that the basic unit of the Transformer deep learning network model is the Swin Transformer block; in the encoder, medical images of the different modalities are divided into non-overlapping 4 × 4 patches, changing the feature dimension to 48; a linear embedding layer is applied to the projected features; the transformed patch tokens pass through several Swin Transformer blocks and patch merging layers to produce hierarchical feature representations and extract contextual features; and the spatial information lost in down-sampling is compensated through skip connections and multi-scale feature fusion with the encoder.
6. A system using the method for segmenting ischemic stroke lesions in multi-modal CT images based on small sample learning of any one of claims 1-5, comprising:
a sample acquisition module: for acquiring an original training sample, wherein the original training sample comprises multi-modal CT medical images and annotation information;
an information augmentation module: for enhancing the characteristic information of the multi-modal CT medical images, performing image augmentation through image deformation, image scaling, generative adversarial generation and the like, to make up for the scarcity of original data samples;
an image registration and synthesis module: for merging multi-image information through image registration; the fusion facilitates further analysis and understanding of the images and can expose potential targets;
a training module: for training a lesion detection model on the original training samples and the synthesized training samples, adjusting the weights of each modality's information-gathering capability during training until a preset termination condition is reached, thereby completing the training of the lesion segmentation and detection model.
CN202210403148.4A 2022-04-18 2022-04-18 Multi-modal stroke lesion segmentation method and system based on small sample learning Pending CN114820491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210403148.4A CN114820491A (en) 2022-04-18 2022-04-18 Multi-modal stroke lesion segmentation method and system based on small sample learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210403148.4A CN114820491A (en) 2022-04-18 2022-04-18 Multi-modal stroke lesion segmentation method and system based on small sample learning

Publications (1)

Publication Number Publication Date
CN114820491A true CN114820491A (en) 2022-07-29

Family

ID=82536876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210403148.4A Pending CN114820491A (en) 2022-04-18 2022-04-18 Multi-modal stroke lesion segmentation method and system based on small sample learning

Country Status (1)

Country Link
CN (1) CN114820491A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115018750B (en) * 2022-08-08 2022-11-08 湖南大学 Medium-wave infrared hyperspectral and multispectral image fusion method, system and medium
CN115018750A (en) * 2022-08-08 2022-09-06 湖南大学 Medium-wave infrared hyperspectral and multispectral image fusion method, system and medium
WO2024041058A1 (en) * 2022-08-25 2024-02-29 推想医疗科技股份有限公司 Follow-up case data processing method and apparatus, device, and storage medium
CN115345886A (en) * 2022-10-20 2022-11-15 天津大学 Brain glioma segmentation method based on multi-modal fusion
CN116012388A (en) * 2023-03-28 2023-04-25 中南大学 Three-dimensional medical image segmentation method and imaging method for acute ischemic cerebral apoplexy
CN116543385B (en) * 2023-07-05 2023-09-05 江西农业大学 Intelligent detection method and device for morphology of rice leaf cells
CN116543385A (en) * 2023-07-05 2023-08-04 江西农业大学 Intelligent detection method and device for morphology of rice leaf cells
CN116894973A (en) * 2023-07-06 2023-10-17 北京长木谷医疗科技股份有限公司 Integrated learning-based intelligent self-labeling method and device for hip joint lesions
CN116894973B (en) * 2023-07-06 2024-05-03 北京长木谷医疗科技股份有限公司 Integrated learning-based intelligent self-labeling method and device for hip joint lesions
CN117238458A (en) * 2023-09-14 2023-12-15 广东省第二人民医院(广东省卫生应急医院) Critical care cross-mechanism collaboration platform system based on cloud computing
CN117238458B (en) * 2023-09-14 2024-04-05 广东省第二人民医院(广东省卫生应急医院) Critical care cross-mechanism collaboration platform system based on cloud computing
CN117457222A (en) * 2023-12-22 2024-01-26 北京邮电大学 Alzheimer's disease brain atrophy model construction method, prediction method and device
CN117457222B (en) * 2023-12-22 2024-03-19 北京邮电大学 Alzheimer's disease brain atrophy model construction method, prediction method and device
CN117649448A (en) * 2024-01-29 2024-03-05 云南省交通规划设计研究院股份有限公司 Intelligent recognition and segmentation method for leakage water of tunnel working face

Similar Documents

Publication Publication Date Title
CN114820491A (en) Multi-modal stroke lesion segmentation method and system based on small sample learning
CN111127482B (en) CT image lung and trachea segmentation method and system based on deep learning
CN111429474B (en) Mammary gland DCE-MRI image focus segmentation model establishment and segmentation method based on mixed convolution
CN107492071A (en) Medical image processing method and equipment
CN110310281A (en) Lung neoplasm detection and segmentation method in virtual medicine based on Mask-RCNN deep learning
Lee et al. Real-time depth estimation using recurrent CNN with sparse depth cues for SLAM system
CN109685768A (en) Automatic lung neoplasm detection method and system based on lung CT sequences
CN114092439A (en) Multi-organ instance segmentation method and system
CN116681679A (en) Medical image small target segmentation method based on double-branch feature fusion attention
CN111080657A (en) CT image organ segmentation method based on convolutional neural network multi-dimensional fusion
CN110992352A (en) Automatic infant head circumference CT image measuring method based on convolutional neural network
CN109727197A (en) Medical image super-resolution reconstruction method
CN110580681B (en) High-resolution cardiac motion pattern analysis device and method
CN117422788B (en) Method for generating DWI image based on CT brain stem image
CN115578406A (en) CBCT jaw bone region segmentation method and system based on context fusion mechanism
CN117422715B (en) Global information-based breast ultrasonic tumor lesion area detection method
CN116993703A (en) Breast CEM image focus recognition system and equipment based on deep learning
US20230281806A1 (en) Microbubble counting method for patent foramen ovale (pfo) based on deep learning
Zhou et al. Fully automatic dual-guidewire segmentation for coronary bifurcation lesion
CN115526898A (en) Medical image segmentation method
CN116109603A (en) Method for constructing prostate cancer lesion detection model based on contrast image feature extraction
CN116091793A (en) Light field significance detection method based on optical flow fusion
CN115147404A (en) Intracranial aneurysm segmentation method based on dual-feature fusion of MRA images
CN112967295B (en) Image processing method and system based on residual network and attention mechanism
CN114998582A (en) Coronary artery blood vessel segmentation method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination