CN114612664A - Cell nucleus segmentation method based on bilateral segmentation network

Cell nucleus segmentation method based on bilateral segmentation network

Info

Publication number
CN114612664A
Authority
CN
China
Prior art keywords
segmentation
network
cell
picture
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210247886.4A
Other languages
Chinese (zh)
Inventor
黄金杰
范融冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN202210247886.4A
Publication of CN114612664A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cell nucleus segmentation method based on a bilateral segmentation network, which addresses the problem of accurately segmenting overlapping cell nuclei in pathological images, and relates to the technical field of image segmentation. The segmentation method comprises the following steps: first, the image is preprocessed with normalization and histogram equalization; the gray-level histogram is computed, and histogram equalization enhances the contrast of the image. ResNet34 and a mixed attention mechanism are added to the U-Net network to enhance foreground features, suppress background features, and improve the accuracy of the segmentation network. Two branches are added to the decoder, used respectively to predict cell nuclei and cell boundaries, and a feature fusion module is added between the two branches, improving the accuracy of cervical cell segmentation. The method is applied to the accurate segmentation of cell nuclei in pathological images.

Description

Cell nucleus segmentation method based on bilateral segmentation network
Technical Field
The invention relates to the technical field of medical image processing, in particular to a cell nucleus segmentation method based on a bilateral segmentation network.
Background
Cervical cytology screening has been widely used in recent years. The prevailing cervical cancer screening technique is manual inspection of liquid-based cervical exfoliated-cell smears, in which a pathologist looks for diseased cells in the smear through a microscope. However, this approach requires an experienced pathologist to observe diseased cell nuclei under the microscope before a diagnosis can be made, consuming considerable manpower and material resources. Attention lapses and fatigue increase the cell screening error rate and reduce diagnostic efficiency, and the field also faces a shortage of trained personnel. Computer-aided automatic cell screening therefore brings great convenience to this field.
Owing to the low contrast of cell nucleus images, the large variation in the spatial distribution of nuclei, adhesion between cells, and the complex background of nucleus images, accurate segmentation of cell nuclei has become one of the difficulties of computer-aided diagnosis. Traditional image segmentation algorithms perform poorly on nucleus images with varying colors, complex backgrounds, differing nucleus sizes, and fuzzy edges, and therefore have obvious limitations. This invention improves on a deep-learning image segmentation algorithm to achieve accurate segmentation of cell nucleus images. It proposes a cell nucleus segmentation method using a dual-branch U-Net network based on a feature fusion module. The U-Net decoder is divided into two branches, one predicting cell nuclei and the other predicting cell boundaries; the features produced by the two branches are then fused by a feature fusion module, which preserves as much of the important information from the preceding downsampling stages as possible and captures the context of the coarse feature map to enhance foreground features while suppressing background features, improving the accuracy of the segmentation network.
Disclosure of Invention
The invention aims to solve the problem of nucleus segmentation accuracy and provides a cell nucleus segmentation method based on a bilateral segmentation network.
The above object of the invention is mainly achieved by the following technical solution:
S1, preprocessing the cell pictures and the label pictures;
S2, dividing the preprocessed cell pictures and the corresponding label pictures into a training set, a validation set, and a test set;
S3, building the segmentation network structure;
Fig. 1 is a structural diagram of the proposed segmentation network. The network consists of an encoding part and a decoding part; the encoding part is built from the ResNet34 residual modules of Fig. 2, which increase the network depth while preventing vanishing gradients and help extract image features better. The deeper the convolution, the more abstract the extracted features, which is more advantageous for improving segmentation accuracy. The decoder is split into two branches, one to predict the nuclei and the other to predict the cell boundaries.
The structure of the Mixed Attention block in Fig. 1 is shown in Fig. 3. It consists of a spatial attention part and a channel attention part; the upper half of Fig. 3 is the spatial attention and the lower half is the channel attention. The input feature map U has C channels, height H, and width W. In the spatial attention branch, a 1 × 1 convolution first reduces the feature map from C × H × W to 1 × H × W, which is then activated by a sigmoid to obtain the spatial attention map; this map is applied directly to the original feature map, so that the recalibrated feature map gives more weight to relevant spatial information and ignores irrelevant spatial information. In the channel attention branch, the feature map is first reduced from C × H × W to C × 1 × 1 by global average pooling, and two 1 × 1 convolutions then produce a C-dimensional vector. This vector is activated by a sigmoid and multiplied with the original feature map to obtain the channel-calibrated feature map, assigning large weights to important channels and neglecting unimportant ones. The spatially calibrated feature map and the channel-calibrated feature map are added to obtain the output of the mixed attention module, which enhances foreground features, suppresses background features, and improves the accuracy of the segmentation network.
The feature fusion module (DA) in Fig. 1 takes the nucleus and boundary features as input, as shown in Fig. 4; a 3 × 3 convolution smooths them and removes gridding artifacts, and two parallel convolution layers then select and integrate complementary information features to further refine the details in the next iteration.
S4, inputting the training set obtained in S2 into the segmentation network in batches and training;
S5, calculating the loss between the prediction result and the real label with a mixed loss function, and back-propagating to update the weights;
A binary cross entropy (BCE) loss function and a Dice loss function are combined to form the mixed loss function.
The bilateral U-Net structure has two task-specific decoders, so the overall loss of the network consists of two parts; the calculation is given in equation (1), where L_n denotes the loss between the predicted nuclei and the nucleus labels, and L_c denotes the loss between the predicted cell boundaries and the boundary labels.
L = L_n + L_c (1)
The loss functions for the outputs of the cell nucleus branch and the cell boundary branch are each formed by combining a BCE loss function and a Dice loss function, and are calculated according to equations (2) and (3), where α and β are weighting coefficients. Because cell nuclei and cell boundaries occupy different areas of the image, α and β take different values. Specifically, when α or β equals 1, the corresponding branch loss reduces to the BCE loss function; when it equals 0, the branch loss reduces to the Dice loss function. The weighting coefficients α and β adjust the ratio of the Dice loss to the BCE loss in each branch during model training, to counteract the imbalance between positive and negative samples in the training data.
L_n = α·L_BCE + (1 − α)·L_Dice (2)
L_c = β·L_BCE + (1 − β)·L_Dice (3)
S6, at the end of each training epoch, evaluating the model obtained in S3 on the validation set, and saving the model with the best evaluation result;
S7, inputting the test set obtained in S2 into the model with the best evaluation result from S6 for segmentation to obtain predicted pictures;
S8, evaluating the segmentation quality using the Dice similarity coefficient as the evaluation criterion.
Effects of the invention
The invention provides a cell nucleus segmentation method based on a bilateral segmentation network. The encoder of the original U-Net is replaced by ResNet34 so that picture feature information is extracted more accurately. The U-Net decoder is divided into two branches that predict the cell nuclei and the cell boundaries, and a feature fusion module is added between the two branches to prevent boundary features from being lost during convolution. A mixed attention module, composed of a spatial attention module and a channel attention module, is added to the segmentation network; it captures context information from the coarse feature map, learns to focus on target structures of different shapes, sizes, and positions, highlights salient features useful for the task, enhances foreground features while suppressing background features, and improves the accuracy of the segmentation network. When training the network model, a binary cross entropy (BCE) loss function and a Dice loss function are combined into a mixed loss function to better optimize the network parameters.
Drawings
FIG. 1 is a structural diagram of the segmentation network;
FIG. 2 is a structural diagram of ResNet34;
FIG. 3 is a block diagram of a hybrid attention module;
FIG. 4 is a block diagram of a feature fusion module;
FIG. 5 is a flow chart of the cell segmentation experiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
This embodiment provides a cell nucleus segmentation method using the bilateral segmentation network shown in Fig. 1, comprising the following steps:
S1, preprocessing the cell pictures and the label pictures;
S2, dividing the preprocessed cell pictures and the corresponding label pictures into a training set, a validation set, and a test set;
S3, building the segmentation network structure;
S4, inputting the training set obtained in S2 into the segmentation network in batches and training;
S5, calculating the loss between the prediction result and the real label with a mixed loss function, and back-propagating to update the weights;
S6, at the end of each training epoch, evaluating the model obtained in S3 on the validation set, and saving the model with the best evaluation result;
S7, inputting the test set obtained in S2 into the model with the best evaluation result from S6 for segmentation to obtain predicted pictures;
S8, evaluating the segmentation quality using the Dice similarity coefficient as the evaluation criterion.
In the embodiment of the invention, a large number of cervical cell pictures are selected as the data set, the cell pictures and the label pictures are preprocessed, and the processed pictures are divided into a training set, a validation set, and a test set in the proportions 70%, 10%, and 20%. The training set is fed into the constructed segmentation network in batches for training. The loss between the prediction and the real label is calculated with the mixed loss function, optimized by stochastic gradient descent, and back-propagated to update the weights. The model is evaluated on the validation set and the model parameters with the best evaluation result are saved. The test set is input into the trained segmentation network to obtain predicted pictures, and the segmentation quality is evaluated with the Dice similarity coefficient as the evaluation criterion.
The following examples illustrate the invention in detail:
The embodiment of the invention uses cervical cell pictures; segmentation with the algorithm of the invention proceeds as follows.
S1, preprocessing the cell pictures and the label pictures;
First, 10000 cell pictures and their corresponding label pictures are selected at random, each of size 512 × 512. The images are denoised with a median filtering algorithm and then preprocessed with histogram equalization.
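A minimal Python sketch of this preprocessing step, assuming OpenCV; the median kernel size (3) and the choice to equalize only the luminance channel are assumptions, since they are not specified here:

```python
import cv2
import numpy as np

def preprocess(image_path: str) -> np.ndarray:
    """Median-filter denoising followed by histogram equalization."""
    img = cv2.imread(image_path)                      # BGR, uint8
    img = cv2.medianBlur(img, 3)                      # median filtering for denoising
    # Equalize only the luminance channel to avoid color shifts (assumption).
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```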
S2, dividing the preprocessed cell pictures and the corresponding label pictures into a training set, a validation set, and a test set;
The preprocessed pictures are divided into a training set, a validation set, and a test set in the proportions 70%, 10%, and 20%.
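A simple random split along these proportions might look as follows; the shuffling seed and the pairing of images with their labels are assumptions:

```python
import random

def split_dataset(pairs, seed=0):
    """Shuffle (image, label) pairs and split them 70% / 10% / 20%."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train = int(0.7 * n)
    n_val = int(0.1 * n)
    train = pairs[:n_train]
    val = pairs[n_train:n_train + n_val]
    test = pairs[n_train + n_val:]
    return train, val, test
```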
S3, building a segmentation network structure;
Fig. 1 is a structural diagram of the proposed segmentation network. The network consists of an encoding part and a decoding part; the encoding part is built from the ResNet34 residual modules of Fig. 2, which increase the network depth while preventing vanishing gradients and help extract image features better. The deeper the convolution, the more abstract the extracted features, which is more advantageous for improving segmentation accuracy. The decoder is split into two branches, one for predicting nuclei and the other for predicting cell boundaries.
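The following PyTorch sketch shows the overall skeleton under stated assumptions: a shared ResNet34 encoder and two decoder branches. Skip connections, the mixed attention blocks, and the feature fusion module are omitted here (they are sketched separately below), and the decoder widths are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class BilateralUNet(nn.Module):
    """ResNet34 encoder with two task-specific decoder branches."""

    def __init__(self):
        super().__init__()
        backbone = resnet34(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.encoder = nn.Sequential(backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4)
        self.dec_nucleus = self._decoder()
        self.dec_boundary = self._decoder()

    @staticmethod
    def _decoder():
        # Five 2x upsampling stages from 512 channels back to a 1-channel map,
        # restoring a 512 x 512 input from the 16 x 16 encoder output.
        chans = [512, 256, 128, 64, 32, 16]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.ConvTranspose2d(c_in, c_out, 2, stride=2),
                       nn.Conv2d(c_out, c_out, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(chans[-1], 1, 1))
        return nn.Sequential(*layers)

    def forward(self, x):
        features = self.encoder(self.stem(x))
        nucleus = torch.sigmoid(self.dec_nucleus(features))     # nucleus branch
        boundary = torch.sigmoid(self.dec_boundary(features))   # boundary branch
        return nucleus, boundary
```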
The structure of the Mixed Attention block in Fig. 1 is shown in Fig. 3. It consists of a spatial attention part and a channel attention part; the upper half of Fig. 3 is the spatial attention and the lower half is the channel attention. The input feature map U has C channels, height H, and width W. In the spatial attention branch, a 1 × 1 convolution first reduces the feature map from C × H × W to 1 × H × W, which is then activated by a sigmoid to obtain the spatial attention map; this map is applied directly to the original feature map, so that the recalibrated feature map gives more weight to relevant spatial information and ignores irrelevant spatial information. In the channel attention branch, the feature map is first reduced from C × H × W to C × 1 × 1 by global average pooling, and two 1 × 1 convolutions then produce a C-dimensional vector. This vector is activated by a sigmoid and multiplied with the original feature map to obtain the channel-calibrated feature map, assigning large weights to important channels and neglecting unimportant ones. The spatially calibrated feature map and the channel-calibrated feature map are added to obtain the output of the mixed attention module, which enhances foreground features, suppresses background features, and improves the accuracy of the segmentation network.
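A sketch of the mixed attention block as described above, in PyTorch; the channel-reduction ratio of the two 1 × 1 convolutions and the ReLU between them are assumptions:

```python
import torch
import torch.nn as nn

class MixedAttention(nn.Module):
    """Spatial branch: 1x1 conv + sigmoid gives a 1 x H x W map applied to U.
    Channel branch: global average pooling + two 1x1 convs + sigmoid gives
    C channel weights applied to U. The two recalibrated maps are summed."""

    def __init__(self, channels: int, r: int = 2):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)   # C x H x W -> 1 x H x W
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                           # C x H x W -> C x 1 x 1
            nn.Conv2d(channels, channels // r, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, kernel_size=1),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        s = torch.sigmoid(self.spatial(u))   # spatial attention map
        c = torch.sigmoid(self.channel(u))   # channel attention weights
        return u * s + u * c                 # sum of the two calibrated feature maps
```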
The feature fusion module (DA) in Fig. 1 takes the nucleus and boundary features as input, as shown in Fig. 4; a 3 × 3 convolution smooths them and removes gridding artifacts, and two parallel convolution layers then select and integrate complementary information features to further refine the details in the next iteration.
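A possible reading of the feature fusion module in PyTorch; the kernel sizes of the two parallel convolution layers and the way their outputs are merged are assumptions, as Fig. 4 is not reproduced here:

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """3x3 conv to smooth the concatenated nucleus/boundary features and
    suppress gridding artifacts, then two parallel convolutions whose
    outputs are merged to select complementary information."""

    def __init__(self, channels: int):
        super().__init__()
        self.smooth = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.branch_a = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch_b = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, nucleus_feat, boundary_feat):
        x = self.smooth(torch.cat([nucleus_feat, boundary_feat], dim=1))
        return torch.relu(self.branch_a(x) + self.branch_b(x))
```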
S4, inputting the training set obtained in S2 into the segmentation network in batches and training;
As shown in Fig. 5, the prepared training-set pictures are fed into the segmentation network in batches. The features output by each layer of the encoding part pass through the mixed attention module and are fused with the decoding-part features of the same resolution; the decoding part extracts the cell nucleus and cell boundary features separately, after which fusion and upsampling are performed until the feature maps reach the size of the original picture.
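One training pass over the batched training set might be sketched as follows; `loss_fn` stands for the mixed loss described in step S5 below, and the data-loader format (image, nucleus mask, boundary mask triples already on the target device with the model) is an assumption:

```python
def train_one_epoch(model, loader, optimizer, loss_fn):
    """One pass over the training set in batches (steps S4/S5)."""
    model.train()
    for images, nucleus_gt, boundary_gt in loader:
        pred_nucleus, pred_boundary = model(images)
        loss = loss_fn(pred_nucleus, nucleus_gt, pred_boundary, boundary_gt)
        optimizer.zero_grad()
        loss.backward()      # back propagation
        optimizer.step()     # update the weights
```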
S5, calculating the loss between the prediction result and the real label with a mixed loss function, and back-propagating to update the weights;
A binary cross entropy (BCE) loss function and a Dice loss function are combined to form the mixed loss function.
The bilateral U-Net structure has two task-specific decoders, so the overall loss of the network consists of two parts; the calculation is given in equation (1), where L_n denotes the loss between the predicted nuclei and the nucleus labels, and L_c denotes the loss between the predicted cell boundaries and the boundary labels.
L = L_n + L_c (1)
The loss functions for the outputs of the cell nucleus branch and the cell boundary branch are each formed by combining a BCE loss function and a Dice loss function, and are calculated according to equations (2) and (3), where α and β are weighting coefficients. Because cell nuclei and cell boundaries occupy different areas of the image, α and β take different values. Specifically, when α or β equals 1, the corresponding branch loss reduces to the BCE loss function; when it equals 0, the branch loss reduces to the Dice loss function. The weighting coefficients α and β adjust the ratio of the Dice loss to the BCE loss in each branch during model training, to counteract the imbalance between positive and negative samples in the training data.
L_n = α·L_BCE + (1 − α)·L_Dice (2)
L_c = β·L_BCE + (1 − β)·L_Dice (3)
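A PyTorch sketch of equations (1)-(3); the default values of α and β are placeholders, since only their roles, not their values, are stated here:

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss (1 - Dice) over probability maps."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def mixed_loss(pred_n, gt_n, pred_c, gt_c, alpha=0.5, beta=0.5):
    """Total loss L = L_n + L_c of equations (1)-(3).

    gt_n / gt_c are float masks in {0, 1}; alpha and beta weight the
    BCE term against the Dice term in each branch.
    """
    l_n = alpha * F.binary_cross_entropy(pred_n, gt_n) \
          + (1 - alpha) * dice_loss(pred_n, gt_n)
    l_c = beta * F.binary_cross_entropy(pred_c, gt_c) \
          + (1 - beta) * dice_loss(pred_c, gt_c)
    return l_n + l_c
```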
S6, at the end of each training epoch, evaluating the model obtained in S3 on the validation set, and saving the model with the best evaluation result;
At the end of each training epoch, the model is scored on the validation set; the score is compared with that of the previous evaluation, and the model with the highest score is saved.
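A sketch of this validation-and-checkpoint step; `dice_coefficient` is the metric sketched after step S8, and the binarization threshold of 0.5 is an assumption:

```python
import torch

def validate_and_checkpoint(model, val_loader, best_score, path="best_model.pt"):
    """Score the model on the validation set and keep the best weights."""
    model.eval()
    scores = []
    with torch.no_grad():
        for images, nucleus_gt, _ in val_loader:
            pred_nucleus, _ = model(images)
            scores.append(dice_coefficient(pred_nucleus > 0.5, nucleus_gt > 0.5))
    mean_score = sum(scores) / len(scores)
    if mean_score > best_score:              # save only the best-scoring model
        torch.save(model.state_dict(), path)
        best_score = mean_score
    return best_score
```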
S7, inputting the test set obtained in S2 into the model with the best evaluation result from S6 for segmentation to obtain predicted pictures;
S8, evaluating the segmentation quality using the Dice similarity coefficient as the evaluation criterion.
The trained network model is tested on the test set, and the Dice similarity coefficient is used as the evaluation criterion for the final prediction. The Dice similarity coefficient is a set-similarity measure of the degree of overlap between two binary samples; it lies in the range [0, 1], with 1 indicating complete overlap. Its calculation formula is as follows:
Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|) (4)
where |X| represents the number of elements in set X, |Y| represents the number of elements in set Y, and |X ∩ Y| represents the number of elements common to sets X and Y. The Dice coefficient formula used to evaluate image segmentation is:
Dice = 2·Σ_i(ŷ_i·y_i) / (Σ_i ŷ_i + Σ_i y_i) (5)
where ŷ_i represents the corresponding pixel value in the model's prediction and y_i represents the corresponding pixel value in the real label; the error between the model's predicted output and every pixel of the real label is computed directly, measuring the overall consistency between the prediction and the real-label data.
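A direct implementation of equation (5) on binarized masks might look like this; the smoothing term eps, which avoids division by zero for empty masks, is an addition:

```python
import torch

def dice_coefficient(pred: torch.Tensor, label: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice similarity coefficient 2|X ∩ Y| / (|X| + |Y|), in [0, 1]."""
    pred = pred.bool()
    label = label.bool()
    inter = (pred & label).sum().item()      # |X ∩ Y|
    return (2 * inter + eps) / (pred.sum().item() + label.sum().item() + eps)
```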

Claims (3)

1. A nucleus segmentation method based on a bilateral segmentation network is characterized by comprising the following steps:
S1, preprocessing the cell pictures and the label pictures;
S2, dividing the preprocessed cell pictures and the corresponding label pictures into a training set, a validation set, and a test set;
S3, building the segmentation network structure;
S4, inputting the training set obtained in S2 into the segmentation network in batches and training;
S5, calculating the loss between the prediction result and the real label with a binary cross entropy loss function, and back-propagating to update the weights;
S6, at the end of each training epoch, evaluating the model obtained in S3 on the validation set, and saving the model with the best evaluation result;
S7, inputting the test set obtained in S2 into the model with the best evaluation result from S6 for segmentation to obtain predicted pictures;
S8, evaluating the segmentation quality using the Dice similarity coefficient as the evaluation criterion.
2. The cell nucleus segmentation method based on a bilateral segmentation network as claimed in claim 1, wherein in the bilateral segmentation network of step S3 a feature fusion module is added between the two branches of the U-Net decoder to capture context information of the coarse feature map, learn to focus on target structures of different shapes, sizes, and positions, and highlight salient features useful for the specific task, thereby enhancing foreground features while suppressing background features;
In the original U-Net network, the encoding part is a downsampling module consisting of 3 × 3 convolution (ReLU) layers and a 2 × 2 max pooling layer, applied four times; ResNet34 residual convolution layers replace the encoder of the original U-Net network to extract image features, increasing the depth of the network while alleviating the vanishing gradient problem;
In the original U-Net network, the decoding part consists of up-conv 2 × 2 and 3 × 3 convolution (ReLU) layers; the decoder is divided into two branches, one extracting cell nuclei and the other extracting cell boundaries, so that spatial information at different scales is extracted. A feature fusion module added between the two branches combines the spatial information of different scales and passes it to deeper layers of the network, significantly improving the segmentation network's performance on complex images suffering from noise, disturbance, and a lack of clear boundaries.
3. The cell nucleus segmentation method based on a bilateral segmentation network as claimed in claim 1, wherein the binary cross entropy loss function in step S5 is calculated as follows:
L_BCE = −[y·log ŷ + (1 − y)·log(1 − ŷ)]
where y is the real label and ŷ is the predicted result. The binary cross entropy loss function is suitable for binary classification problems.
CN202210247886.4A 2022-03-14 2022-03-14 Cell nucleus segmentation method based on bilateral segmentation network Pending CN114612664A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210247886.4A CN114612664A (en) 2022-03-14 2022-03-14 Cell nucleus segmentation method based on bilateral segmentation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210247886.4A CN114612664A (en) 2022-03-14 2022-03-14 Cell nucleus segmentation method based on bilateral segmentation network

Publications (1)

Publication Number Publication Date
CN114612664A (en) 2022-06-10

Family

ID=81863353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210247886.4A Pending CN114612664A (en) 2022-03-14 2022-03-14 Cell nucleus segmentation method based on bilateral segmentation network

Country Status (1)

Country Link
CN (1) CN114612664A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115132375A (en) * 2022-06-17 2022-09-30 广州智睿医疗科技有限公司 Thyroid disease pathological analysis module
CN117197156A (en) * 2022-10-21 2023-12-08 南华大学 Lesion segmentation method and system based on double decoders UNet and Transformer
CN117197156B (en) * 2022-10-21 2024-04-02 南华大学 Lesion segmentation method and system based on double decoders UNet and Transformer
CN116342600A (en) * 2023-05-29 2023-06-27 中日友好医院(中日友好临床医学研究所) Segmentation method of cell nuclei in thymoma histopathological image
CN116342600B (en) * 2023-05-29 2023-08-18 中日友好医院(中日友好临床医学研究所) Segmentation method of cell nuclei in thymoma histopathological image
CN117291941A (en) * 2023-10-16 2023-12-26 齐鲁工业大学(山东省科学院) Cell nucleus segmentation method based on boundary and central point feature assistance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination