CN116469099A - Microscopic hyperspectral image judging method and framework based on self-supervised spectral regression - Google Patents

Microscopic hyperspectral image judging method and framework based on self-supervised spectral regression

Info

Publication number
CN116469099A
CN116469099A
Authority
CN
China
Prior art keywords: image; hyperspectral image; self; encoder; spectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310444636.4A
Other languages
Chinese (zh)
Inventor
王妍
谢星然
李庆利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University
Priority to CN202310444636.4A
Publication of CN116469099A
Legal status: Pending (current)

Links

Classifications

    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/698 Matching; Classification
    • G06N 3/0455 Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/0895 Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G06V 10/26 Segmentation of patterns in the image field
    • G06V 10/40 Extraction of image or video features
    • G06V 10/58 Extraction of image or video features relating to hyperspectral data
    • G06V 10/764 Recognition using classification, e.g. of video objects
    • G06V 10/765 Classification using rules for partitioning the feature space
    • G06V 10/766 Recognition using regression, e.g. by projecting features on hyperplanes
    • G06V 10/82 Recognition using neural networks
    • Y02A 40/10 Adaptation technologies in agriculture (climate-change adaptation tagging)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a microscopic hyperspectral image judging method and framework based on self-supervised spectral regression, comprising the following steps: acquiring a hyperspectral image and processing it to obtain a processed spectral image; extracting image features of the processed spectral image with a depth separable encoder to obtain an extracted image; pre-training the depth separable encoder to obtain a feature extraction backbone network; encoding the hyperspectral image with the feature extraction backbone network and processing the result to obtain a data category; and processing the segmentation masks in the extracted image to obtain a final data area. The invention fuses the prior characteristics of hyperspectral images into a deep learning algorithm and, combining the characteristics of the downstream tasks, designs two recognition algorithms accordingly; their performance indices surpass those of conventional deep learning models based on natural images, and the method shows strong generalization capability and high application potential in the medical field.

Description

Microscopic hyperspectral image judging method and framework based on self-supervised spectral regression
Technical Field
The invention belongs to the fields of computer vision and computer-aided diagnosis, and particularly relates to a microscopic hyperspectral image judging method and framework based on self-supervised spectral regression.
Background
Microscopic hyperspectral images contain abundant spectral information and can therefore reflect biochemical phenomena that cannot be observed in RGB pathological images, giving them broad application prospects in computational pathology. However, owing to the lack of sufficiently fine annotation data and the large dimensionality of hyperspectral image data, conventional fully supervised deep learning algorithms are extremely prone to over-fitting. Meanwhile, the light-splitting device in a hyperspectral imaging system (such as an acousto-optic tunable filter) reduces the intensity of monochromatic light, so that some wave bands are polluted by system noise. In application scenarios requiring accurate positioning, the ambiguity caused by image blurring may seriously degrade the performance of a deep learning algorithm.
In view of the above problems, a common solution is to use self-supervised pre-training and to design the network structure around the characteristics of hyperspectral data. Existing hyperspectral image recognition algorithms usually fine-tune the input layer of a mainstream RGB deep learning model to meet the requirements of hyperspectral tasks, or directly use a 3D convolutional neural network. Although a hyperspectral image can be regarded as a special spatial-spectral cube, pathological diagnosis requires preserving the specificity of the spectral information as much as possible while still giving priority to spatial information. Directly changing the number of input channels of the model causes all single-band images to be fused at the shallow layers, so the abundant spectral information of the data cannot be fully utilized; directly using a 3D convolutional neural network does not conform to the physical characteristics of the data and markedly increases the computational cost of the model.
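The trade-off described above can be made concrete with a toy example: a standard convolution sums the per-band filter responses into one output channel at the very first layer, whereas a depthwise (band-separable) convolution keeps one filtered channel per band, so spectral identity survives into deeper layers. A minimal 1-D sketch in plain Python (all band and kernel values are made up for illustration):

```python
# Toy 1-D signals: 3 spectral bands, 5 pixels each (values are made up).
bands = [
    [1.0, 2.0, 3.0, 2.0, 1.0],   # band 1
    [0.0, 1.0, 0.0, 1.0, 0.0],   # band 2
    [2.0, 2.0, 2.0, 2.0, 2.0],   # band 3
]
kernels = [[0.5, 0.5], [1.0, -1.0], [0.25, 0.25]]   # one filter per band

def conv1d_valid(x, k):
    # 'valid' 1-D correlation of signal x with kernel k
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n)) for i in range(len(x) - n + 1)]

# Standard convolution: per-band responses are summed into ONE output channel,
# so the bands are already fused at the first layer.
fused = [sum(v) for v in zip(*(conv1d_valid(b, k) for b, k in zip(bands, kernels)))]

# Depthwise (band-separable) convolution: each band keeps its own output
# channel, preserving spectral identity for deeper layers to exploit.
depthwise = [conv1d_valid(b, k) for b, k in zip(bands, kernels)]
```

After the standard convolution, `fused` no longer tells us which band contributed what; `depthwise` retains a per-band response.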
Disclosure of Invention
The invention aims to provide a microscopic hyperspectral image judging method and framework based on self-supervised spectral regression, so as to solve the problems existing in the prior art.
In order to achieve the above object, the present invention provides a method for determining a microscopic hyperspectral image based on self-supervised spectral regression, comprising:
acquiring a hyperspectral image, and processing the hyperspectral image to obtain a processed spectral image;
extracting image features of the processed spectral image based on a depth separable encoder to obtain an extracted image;
pre-training the depth separable encoder to obtain a feature extraction backbone network;
encoding the hyperspectral image based on the feature extraction backbone network and processing the result to obtain a data category;
and processing the segmentation mask in the extracted image to obtain a final data area.
Preferably, the process of obtaining a processed spectral image includes:
and acquiring a hyperspectral image, randomly extracting wave bands in the hyperspectral image, and adding a shielding mask to the rest wave bands to obtain the processed spectral image.
Preferably, the expression for randomly extracting the wave bands in the hyperspectral image is:
Y_b = Conv_{1×1}(Cat(H_3, H_5, H_7), Y_sum);
where Y_b denotes the extracted b-th band serving as the recovery target, Conv_{1×1} denotes a 1×1 convolution, Y_sum is the preliminary restoration result after linear weighted summation, and H_k = Conv_{k×k}(Y_sum) denotes a restoration target fine-tuned with a k×k convolution.
Preferably, the process of obtaining the extracted image includes:
extracting image features using a depth separable encoder, and restoring the extracted band by a multi-scale decoder to obtain the extracted image.
Preferably, the process of obtaining the feature extraction backbone network includes:
and calculating reconstruction loss for the depth separable encoder, and updating network parameters to obtain a feature extraction backbone network.
Preferably, the process of obtaining the data category includes:
and constructing a classification task, encoding the hyperspectral image based on the feature extraction backbone network, fusing the band features of the hyperspectral image through a self-attention mechanism, and finally obtaining the data category by using multi-layer perceptron mapping.
Preferably, the expression for obtaining the data category using multi-layer perceptron mapping is:
P = MLP(AvgPool_B(MSA(Z)));
where P is the data class probability, Z is the spectral feature, MLP denotes the multi-layer perceptron, AvgPool_B denotes average pooling over the band (channel) dimension, and MSA denotes a multi-head self-attention mechanism.
Preferably, the process of obtaining the final data area includes:
sampling the extracted image to obtain segmentation masks of all wave bands;
screening the segmentation masks of all the wave bands based on a wave band matching mechanism to obtain clear data area wave bands and matching scores;
and recombining the single-band segmentation masks in the segmentation masks of all the bands by taking the matching score as a weighting coefficient to obtain the final data area.
In order to achieve the above object, the present invention further provides a microscopic hyperspectral image judgment framework based on self-supervised spectral regression, including: the device comprises a data acquisition module, a pre-training module, a depth separable encoder, a classification module and a segmentation module;
the data acquisition module, the pre-training module and the depth separable encoder are sequentially connected, and the classification module and the segmentation module are respectively connected with the depth separable encoder;
the data acquisition module is used for acquiring a spectrum characteristic image;
the pre-training module is used for training the depth separable encoder based on the self-supervision spectrum regression task;
the depth separable encoder is used for extracting image features and simultaneously serves as the feature extraction network; the classification module and the segmentation module are trained using a fully supervised learning method combined with manual annotation;
the classification module is used for processing the image characteristics through multi-layer perceptron mapping to obtain data types;
the segmentation module is used for processing the segmentation mask in the extracted image to obtain a final data area.
The invention has the technical effects that:
the prior characteristic of the hyperspectral image is fused into the deep learning algorithm, and two recognition algorithms are specifically designed by combining the characteristics of a downstream task, so that the hyperspectral image deep learning method is superior to the traditional deep learning model based on natural images in performance index, has stronger generalization capability, and has higher application potential in the medical field.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a schematic view of a microscopic hyperspectral pathology image diagnostic framework in an embodiment of the present invention;
FIG. 2 is a flowchart of a microscopic hyperspectral pathology image diagnosis framework in an embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Example 1
As shown in fig. 1, in this embodiment, a method for determining a microscopic hyperspectral image based on self-supervised spectral regression is provided, including:
step 1: in the pre-training stage, a band is randomly extracted from the input hyperspectral image and a mask is added to the remaining bands. Image features are extracted using a depth separable encoder and the extracted bands are restored by a multi-scale decoder.
Step 2: calculate the reconstruction loss and update the network parameters to complete encoder pre-training, then apply the pre-trained encoder as a feature extraction backbone network to downstream tasks;
Step 3: in the classification task, the above feature extraction network first encodes the hyperspectral pathological image, the features of all wave bands are then fused through a self-attention mechanism, and finally a multi-layer perceptron mapping yields the disease category;
Step 4: in the segmentation task, the spectral features are up-sampled to obtain segmentation masks for all wave bands, and the bands containing clear lesion areas are screened through a band matching mechanism; the matching scores obtained in this process serve as weighting coefficients to recombine the single-band segmentation masks and predict the final lesion area.
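The band extraction and masking of Step 1 can be sketched as follows. This is a toy pure-Python rendering of one plausible reading of the scheme; the cube layout, the mask ratio, and fully hiding the target band are illustrative assumptions, not details fixed by the text:

```python
import random

def make_regression_sample(cube, mask_ratio=0.5, seed=None):
    # Pick one random band as the recovery target, hide it entirely, and
    # occlude a fraction of pixels in every remaining band. The exact masking
    # policy here is an assumption for illustration.
    rng = random.Random(seed)
    target_idx = rng.randrange(len(cube))
    target = list(cube[target_idx])
    masked = []
    for i, band in enumerate(cube):
        if i == target_idx:
            masked.append([0.0] * len(band))      # target band fully hidden
        else:
            masked.append([0.0 if rng.random() < mask_ratio else v for v in band])
    return target_idx, target, masked

# toy cube: 3 bands x 4 pixels
cube = [[float(b * 10 + p) for p in range(4)] for b in range(3)]
idx, target, masked = make_regression_sample(cube, mask_ratio=0.5, seed=42)
```

The network would then be trained to regress `target` from `masked`, which is the self-supervised spectral regression signal.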
The depth separable encoder described in Step 1 may optionally use ResNet-18, ResNet-50, or Swin-T/16 as its backbone network. When input to the network, the hyperspectral image is treated as a set of single-band gray-scale images so that the bands can be processed in parallel. The multi-scale decoder consists of three convolution layers with different kernel sizes, and the extracted band is restored as follows:
Y_b = Conv_{1×1}(Cat(H_3, H_5, H_7), Y_sum)
where Y_b denotes the extracted b-th band serving as the recovery target, Conv_{1×1} denotes a 1×1 convolution, and Y_sum is the preliminary restoration result after linear weighted summation. H_k = Conv_{k×k}(Y_sum) denotes a restoration target fine-tuned with a k×k convolution. The outputs of all convolution layers are padded to the same resolution.
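A toy 1-D rendering of the restoration formula may help: since a 1×1 convolution over concatenated channels reduces to a per-pixel weighted channel sum, the sketch below implements Y_b from H_3, H_5, H_7, and Y_sum directly. The identity kernels and output weights are hand-picked so the toy is checkable; they are illustrative assumptions, not the patent's learned parameters:

```python
def conv_same(x, k):
    # zero-padded 'same' 1-D correlation; kernel length assumed odd
    pad = len(k) // 2
    xp = [0.0] * pad + x + [0.0] * pad
    return [sum(xp[i + j] * k[j] for j in range(len(k))) for i in range(len(x))]

def multiscale_restore(y_sum, kernels, w_out):
    # H_k = Conv_kxk(Y_sum) for each kernel, then Cat(H_3, H_5, H_7, Y_sum)
    # followed by a 1x1 convolution, i.e. a per-pixel weighted channel sum
    feats = [conv_same(y_sum, k) for k in kernels] + [y_sum]
    return [sum(w * f[i] for w, f in zip(w_out, feats)) for i in range(len(y_sum))]

def identity_kernel(n):
    # kernel that reproduces its input, used only to make the toy verifiable
    return [0.0] * (n // 2) + [1.0] + [0.0] * (n // 2)

y_sum = [1.0, 2.0, 3.0, 4.0]                       # toy preliminary restoration
kernels = [identity_kernel(3), identity_kernel(5), identity_kernel(7)]
y_b = multiscale_restore(y_sum, kernels, w_out=[0.25, 0.25, 0.25, 0.25])
```

With identity kernels and equal output weights the restored band simply equals Y_sum; in the real decoder the three branches would contribute detail at different receptive-field scales.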
The pathological classification process described in Step 3 is implemented as:
P = MLP(AvgPool_B(MSA(Z)))
where P is the disease probability, Z is the spectral feature, MLP denotes the multi-layer perceptron, AvgPool_B denotes average pooling over the band (channel) dimension, and MSA denotes a multi-head self-attention mechanism.
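The classification expression can be sketched in plain Python. The sketch simplifies MSA to a single attention head with identity Q/K/V projections and the MLP to one linear layer; all weights and feature values are illustrative assumptions:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Z):
    # single-head self-attention with identity Q/K/V projections (a deliberate
    # simplification of the multi-head MSA in the formula)
    d = len(Z[0])
    out = []
    for q in Z:
        w = softmax([sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in Z])
        out.append([sum(wi * v[j] for wi, v in zip(w, Z)) for j in range(d)])
    return out

def avgpool_bands(Z):
    # AvgPool_B: average over the band dimension, leaving one feature vector
    return [sum(z[j] for z in Z) / len(Z) for j in range(len(Z[0]))]

def mlp(x, W, b):
    # one linear layer standing in for the multi-layer perceptron
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

Z = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]           # toy per-band features
P = mlp(avgpool_bands(self_attention(Z)), W=[[1.0, -1.0]], b=[0.0])
```

The attention step lets every band attend to every other band before pooling, which is how the design fuses spectral information late rather than at the input layer.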
In the single-band segmentation mask generation process described in Step 4, the spectral features are first complemented with the unknown information of the encoding stage by a feature fusion module, and then up-sampled by the decoder to obtain the segmentation mask m_i of each band. The segmentation mask of each band can be matched pixel by pixel against the pathologist's annotation, and the K best-matching bands are selected to form a set N; the matching loss is then calculated to train the deep learning network.
Here 1[i ∈ N] denotes the indicator function, i.e. the loss is calculated only for the bands in set N; p_i denotes the matching confidence of the i-th band; and L_dice(m_i, M) computes the segmentation loss between each band's segmentation mask m_i and the manual annotation M. In the model inference stage, the trained band matching module automatically selects the most suitable bands and generates weighting coefficients, and finally the single-band masks are recombined to produce the segmentation prediction of the lesion area.
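The band matching and recombination can be sketched as follows. This toy version scores each band mask by 1 minus its Dice loss against the annotation, keeps the K best bands, and fuses their masks with normalized scores as weights; it omits the learned confidence p_i and the training loop, so it is an illustration rather than the patent's exact module:

```python
def dice_loss(pred, target, eps=1e-6):
    # soft Dice loss between a predicted mask and the annotation (flattened)
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def match_and_fuse(masks, annotation, k):
    # score every band mask against the annotation, keep the K best matches,
    # and recombine their masks with normalized match scores as weights
    scores = [1.0 - dice_loss(m, annotation) for m in masks]
    top = sorted(range(len(masks)), key=lambda i: scores[i], reverse=True)[:k]
    total = sum(scores[i] for i in top)
    weights = {i: scores[i] / total for i in top}
    fused = [sum(weights[i] * masks[i][p] for i in top)
             for p in range(len(annotation))]
    return top, fused

masks = [[1.0, 1.0, 0.0, 0.0],     # band 0: matches the annotation exactly
         [1.0, 0.0, 0.0, 0.0],     # band 1: partial match
         [0.0, 0.0, 1.0, 1.0]]     # band 2: a noise-dominated band
annotation = [1.0, 1.0, 0.0, 0.0]
top, fused = match_and_fuse(masks, annotation, k=2)
```

The noise-dominated band receives a near-zero score and is excluded, which mirrors the stated goal of filtering bands contaminated by system noise.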
Example 2
As shown in fig. 2, in this embodiment, a microscopic hyperspectral image determination framework based on self-supervised spectral regression is provided, which includes:
two training phases and three main modules. In the pre-training phase, a self-supervised spectral regression task trains a depth separable encoder to extract image features; in the transfer learning phase, the depth separable encoder serves as the feature extraction network, and a fully supervised learning method combined with manual annotation trains the pathological classification and lesion region segmentation modules. The classification module fuses the spectral features extracted by the encoder using a self-attention mechanism and obtains the disease category through multi-layer perceptron mapping; the segmentation module first up-samples the spectral features to obtain segmentation masks for all wave bands and filters out the bands severely interfered by system noise through a band matching mechanism; the matching scores obtained in this process serve as weighting coefficients to recombine the single-band segmentation masks and predict the final lesion area.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A microscopic hyperspectral image judging method based on self-supervised spectral regression, characterized by comprising the following steps:
acquiring a hyperspectral image, and processing the hyperspectral image to obtain a processed spectral image;
extracting image features of the processed spectral image based on a depth separable encoder to obtain an extracted image;
pre-training the depth separable encoder to obtain a feature extraction backbone network;
encoding the hyperspectral image based on the feature extraction backbone network and processing the result to obtain a data category;
and processing the segmentation mask in the extracted image to obtain a final data area.
2. The method for determining a microscopic hyperspectral image based on self-supervised spectral regression as claimed in claim 1, wherein the process of obtaining the processed spectral image includes:
and acquiring a hyperspectral image, randomly extracting wave bands in the hyperspectral image, and adding a shielding mask to the rest wave bands to obtain the processed spectral image.
3. The method for determining a microscopic hyperspectral image based on self-supervised spectral regression as claimed in claim 2, wherein the expression for randomly extracting the wave bands in the hyperspectral image is:
Y_b = Conv_{1×1}(Cat(H_3, H_5, H_7), Y_sum);
where Y_b denotes the extracted b-th band serving as the recovery target, Conv_{1×1} denotes a 1×1 convolution, Y_sum is the preliminary restoration result after linear weighted summation, and H_k = Conv_{k×k}(Y_sum) denotes a restoration target fine-tuned with a k×k convolution.
4. The method for determining a microscopic hyperspectral image based on self-supervised spectral regression as claimed in claim 1, wherein the process of obtaining the extracted image includes:
extracting image features using a depth separable encoder, and restoring the extracted band by a multi-scale decoder to obtain the extracted image.
5. The method for determining a microscopic hyperspectral image based on self-supervised spectral regression as claimed in claim 1, wherein the process of obtaining the feature extraction backbone network comprises:
and calculating reconstruction loss for the depth separable encoder, and updating network parameters to obtain a feature extraction backbone network.
6. The method for determining a microscopic hyperspectral image based on self-supervised spectral regression as claimed in claim 1, wherein the process of obtaining the data class includes:
and constructing a classification task, encoding the hyperspectral image based on the feature extraction backbone network, fusing the band features of the hyperspectral image through a self-attention mechanism, and finally obtaining the data category by using multi-layer perceptron mapping.
7. The method for determining a microscopic hyperspectral image based on self-supervised spectral regression as recited in claim 1, wherein the expression for obtaining the data class using multi-layer perceptron mapping is:
P = MLP(AvgPool_B(MSA(Z)));
where P is the data class probability, Z is the spectral feature, MLP denotes the multi-layer perceptron, AvgPool_B denotes average pooling over the band (channel) dimension, and MSA denotes a multi-head self-attention mechanism.
8. The method of determining a microscopic hyperspectral image based on self-supervised spectral regression as recited in claim 1, wherein the process of obtaining the final data area includes:
sampling the extracted image to obtain segmentation masks of all wave bands;
screening the segmentation masks of all the wave bands based on a wave band matching mechanism to obtain clear data area wave bands and matching scores;
and recombining the single-band segmentation masks in the segmentation masks of all the bands by taking the matching score as a weighting coefficient to obtain the final data area.
9. A microscopic hyperspectral image judgment framework based on self-supervised spectral regression, comprising: the device comprises a data acquisition module, a pre-training module, a depth separable encoder, a classification module and a segmentation module;
the data acquisition module, the pre-training module and the depth separable encoder are sequentially connected, and the classification module and the segmentation module are respectively connected with the depth separable encoder;
the data acquisition module is used for acquiring a spectrum characteristic image;
the pre-training module is used for training the depth separable encoder based on the self-supervision spectrum regression task;
the depth separable encoder is used for extracting image features and simultaneously serves as the feature extraction network; the classification module and the segmentation module are trained using a fully supervised learning method combined with manual annotation;
the classification module is used for processing the image characteristics through multi-layer perceptron mapping to obtain data types;
the segmentation module is used for processing the segmentation mask in the extracted image to obtain a final data area.
CN202310444636.4A 2023-04-24 2023-04-24 Microscopic hyperspectral image judging method and frame based on self-supervision spectral regression Pending CN116469099A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310444636.4A CN116469099A (en) 2023-04-24 2023-04-24 Microscopic hyperspectral image judging method and frame based on self-supervision spectral regression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310444636.4A CN116469099A (en) 2023-04-24 2023-04-24 Microscopic hyperspectral image judging method and frame based on self-supervision spectral regression

Publications (1)

Publication Number Publication Date
CN116469099A true CN116469099A (en) 2023-07-21

Family

ID=87175067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310444636.4A Pending CN116469099A (en) 2023-04-24 2023-04-24 Microscopic hyperspectral image judging method and frame based on self-supervision spectral regression

Country Status (1)

Country Link
CN (1) CN116469099A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117783088A (en) * 2024-02-23 2024-03-29 广州贝拓科学技术有限公司 Control model training method, device and equipment of laser micro-Raman spectrometer
CN117783088B (en) * 2024-02-23 2024-05-14 广州贝拓科学技术有限公司 Control model training method, device and equipment of laser micro-Raman spectrometer

Similar Documents

Publication Publication Date Title
Jin et al. DUNet: A deformable network for retinal vessel segmentation
Adegun et al. Deep learning techniques for skin lesion analysis and melanoma cancer detection: a survey of state-of-the-art
CN109886273B (en) CMR image segmentation and classification system
Singh et al. FCA-Net: Adversarial learning for skin lesion segmentation based on multi-scale features and factorized channel attention
Lu et al. Detection of surface and subsurface defects of apples using structured-illumination reflectance imaging with machine learning algorithms
CN113034505B (en) Glandular cell image segmentation method and glandular cell image segmentation device based on edge perception network
CN112651978A (en) Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium
US20220301301A1 (en) System and method of feature detection in satellite images using neural networks
CN112001928B (en) Retina blood vessel segmentation method and system
CN112669248B (en) Hyperspectral and panchromatic image fusion method based on CNN and Laplacian pyramid
CN114283158A (en) Retinal blood vessel image segmentation method and device and computer equipment
Xu et al. Dual-channel asymmetric convolutional neural network for an efficient retinal blood vessel segmentation in eye fundus images
Ross-Howe et al. The effects of image pre-and post-processing, wavelet decomposition, and local binary patterns on U-nets for skin lesion segmentation
CN116469099A (en) Microscopic hyperspectral image judging method and frame based on self-supervision spectral regression
CN113705675A (en) Multi-focus image fusion method based on multi-scale feature interaction network
CN115631107A (en) Edge-guided single image noise removal
CN117576483B (en) Multisource data fusion ground object classification method based on multiscale convolution self-encoder
Dihin et al. Wavelet-Attention Swin for Automatic Diabetic Retinopathy Classification
Liu et al. MRL-Net: multi-scale representation learning network for COVID-19 lung CT image segmentation
CN114155165A (en) Image defogging method based on semi-supervision
Kadhim et al. A novel deep learning framework for water body segmentation from satellite images
Iqbal et al. LDMRes-Net: Enabling real-time disease monitoring through efficient image segmentation
Bazi et al. Vision transformers for segmentation of disc and cup in retinal fundus images
Variyar et al. Learning and Adaptation from Minimum Samples with Heterogeneous Quality: An investigation of image segmentation networks on natural dataset
Confalonieri et al. An End-to-End Framework for the Classification of Hyperspectral Images in the Wood Domain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination