CN117078705B - CT image segmentation method based on Bhattacharyya coefficient active contour attention - Google Patents


Info

Publication number
CN117078705B
CN117078705B (application CN202311344404.8A)
Authority
CN
China
Prior art keywords
feature map
convolution
attention
block
active contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311344404.8A
Other languages
Chinese (zh)
Other versions
CN117078705A (en)
Inventor
陈达
郭学丽
舒明雷
刘丽
李安坤
韩孝兴
李焕春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Shandong Institute of Artificial Intelligence
Original Assignee
Qilu University of Technology
Shandong Institute of Artificial Intelligence
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology, Shandong Institute of Artificial Intelligence filed Critical Qilu University of Technology
Priority to CN202311344404.8A priority Critical patent/CN117078705B/en
Publication of CN117078705A publication Critical patent/CN117078705A/en
Application granted granted Critical
Publication of CN117078705B publication Critical patent/CN117078705B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/0455: Auto-encoder networks; Encoder-decoder networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/048: Activation functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776: Validation; Performance evaluation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10072: Tomographic images
    • G06T2207/10081: Computed x-ray tomography [CT]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of medical image segmentation, in particular to a CT image segmentation method based on Bhattacharyya coefficient active contour attention, which comprises the following steps: the data set consists of 3D CT images of kidneys and kidney tumors; the nibabel library is invoked to process the volume data in the data set into 2D slices in PNG format, 10 slices are selected from each volume to obtain a data set D', and data enhancement is applied to the training set in D' to obtain a data set D; the network structure comprises an encoding part and a decoding part; the loss is computed with the Dice loss function; an SGD optimizer adjusts the weights and biases in the network through back propagation; the optimal weights and biases are stored in a newly created file; the images in the test set are then read and segmented, and the results are saved as JPG files. The invention can focus on the target structure and obtains better segmentation results for images with an uneven gray-value distribution.

Description

CT image segmentation method based on Bhattacharyya coefficient active contour attention
Technical Field
The invention relates to the technical field of medical image segmentation, in particular to a CT image segmentation method based on Bhattacharyya coefficient active contour attention.
Background
CT image segmentation is an important task in the field of medical imaging. It aims to segment target structures out of images so that doctors can diagnose diseases and formulate treatment plans. CT image segmentation methods can be divided into conventional segmentation methods and deep learning segmentation methods. Conventional methods include classical image processing techniques such as graph-based segmentation, thresholding and region growing, whose results can be degraded by image quality, background noise and other factors. Deep learning methods include U-Net and its variants, the DeepLab series and others; by exploiting large-scale data and strong computing power, they have achieved good performance in medical image segmentation. However, in CT images the different densities of tissues affect the acquired intensities, so uneven gray levels occur, which greatly increases the difficulty of segmentation.
Therefore, to solve the above problems, a CT image segmentation method based on Bhattacharyya coefficient active contour attention is proposed.
Disclosure of Invention
Aiming at the shortcomings of the prior art, the invention provides a CT image segmentation method based on Bhattacharyya coefficient active contour attention, which can focus on the target structure and obtain better segmentation results for images with an uneven gray-value distribution.
The technical solution adopted to solve the above technical problem is as follows:
A CT image segmentation method based on Bhattacharyya coefficient active contour attention comprises the following steps:
S1, the data set consists of 3D CT images of kidneys and kidney tumors from MICCAI KiTS19; the training set comprises 210 3D CT volumes and the test set comprises 90 3D CT volumes;
S2, the nibabel library is invoked to process the volume data in the data set into 2D slices in PNG format, 10 slices are selected from each volume to obtain a data set D', and data enhancement is applied to the training set in D' to obtain a data set D;
S3, the network structure comprises an encoding part and a decoding part; the encoding part uses a pretrained VGG16 network to extract features from the input image, and the decoding part uses the Bhattacharyya coefficient active contour attention structure and attention depthwise-separable convolution blocks;
S4, the loss function Loss is calculated with the Dice loss function;
S5, an SGD optimizer adjusts the weights and biases in the network through back propagation (a sketch of steps S4 and S5 is given after step S7);
S6, a variable is set to store the best evaluation metric during training on data set D, and the corresponding optimal weights and biases are saved to a .pth file;
S7, the optimal weights and biases stored in the .pth file are loaded into the network, the images in the test set are read and segmented, and the results are saved as JPG files.
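A minimal PyTorch-style sketch of steps S4 and S5 (the Dice loss and the SGD update) is given below. The class and variable names, the smoothing constant and the learning-rate settings are illustrative assumptions and are not taken from the patent.

```python
import torch
import torch.nn as nn

class DiceLoss(nn.Module):
    """Dice loss for binary segmentation: 1 - 2*|P∩G| / (|P| + |G|)."""
    def __init__(self, smooth: float = 1e-5):
        super().__init__()
        self.smooth = smooth

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        pred = pred.contiguous().view(pred.size(0), -1)
        target = target.contiguous().view(target.size(0), -1)
        inter = (pred * target).sum(dim=1)
        union = pred.sum(dim=1) + target.sum(dim=1)
        dice = (2.0 * inter + self.smooth) / (union + self.smooth)
        return 1.0 - dice.mean()

def train_step(net, images, masks, optimizer, criterion=DiceLoss()):
    """One illustrative training step: SGD adjusts weights and biases by back-propagation."""
    optimizer.zero_grad()
    preds = net(images)            # network output in [0, 1]
    loss = criterion(preds, masks)
    loss.backward()                # back propagation
    optimizer.step()               # SGD weight/bias update
    return loss.item()

# Assumed optimizer settings (not specified in the patent):
# optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
```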
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the first five stages of the pretrained VGG16 network are used, where the fifth stage does not include the max-pooling (Maxpooling) operation, and the five stages yield feature maps A1, A2, A3, A4 and A5, respectively.
On the basis of the above CT image segmentation method based on Bhattacharyya coefficient active contour attention, the decoding part comprises a first Bhattacharyya coefficient active contour attention module, a first attention depthwise-separable convolution block, a second Bhattacharyya coefficient active contour attention module, a second attention depthwise-separable convolution block, a third Bhattacharyya coefficient active contour attention module, a third attention depthwise-separable convolution block, a fourth Bhattacharyya coefficient active contour attention module, a fourth attention depthwise-separable convolution block and an output module.
On the basis of the above method, the first Bhattacharyya coefficient active contour attention module comprises a first initial contour block, a first image processing block, a first Bhattacharyya coefficient active contour block and a first attention block;
(1) The first initial contour block is implemented as follows: feature map A5 is upsampled to obtain feature map M1; M1 passes through a 1×1 convolution to obtain feature map B1; feature map A4 passes through a 1×1 convolution to obtain feature map C1; B1 and C1 are added to obtain feature map D1; D1 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N1; N1 is distance-transformed to obtain feature map E1, i.e. the initial contour (a code sketch of the initial contour and image processing blocks is given after step (4) below);
The distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, with a manually set parameter λ satisfying 0 < λ < 1;
(2) The first image processing block is implemented as follows: feature map M1 passes through a 1×1 convolution to obtain feature map F1; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G1; F1 and G1 are added to obtain feature map H1; H1 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I1;
(3) The first Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I1, obtaining feature map J1: J1 = BCV(I1, E1, μ, ν), where μ and ν are positive, manually set parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E1 is the feature map E1;
The energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E1) = μ ∫_Ω δ(E1)|∇E1| dx + ν ∫_Ω H(E1) dx + ∫_Q √(P_in(E1, q)·P_out(E1, q)) dq,
where the term weighted by μ measures the length of the contour and the term weighted by ν measures the area inside the contour, δ denotes the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E1. For a given color q, the probability distribution functions P_in(E1, q) and P_out(E1, q) are estimated with Gaussian-kernel-based histograms.
(4) The first attention block is implemented as follows: feature map A4 and feature map J1 are multiplied to obtain feature map K1.
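For concreteness, a PyTorch-style sketch of the first initial contour block and first image processing block described in steps (1) and (2) is given below. Channel counts, the use of bilinear interpolation and the exact form of the distance transform are assumptions: the patent only names the components of the distance transform (Euclidean distance, convolution, exp and λ), so an exponential-decay placeholder is used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InitialContourBlock(nn.Module):
    """Sketch of step (1): upsample, two 1x1 convs, add, ReLU -> 1x1 conv -> Sigmoid,
    then a distance transform that turns N1 into the initial contour E1."""
    def __init__(self, high_ch, skip_ch, mid_ch, lam=0.5):
        super().__init__()
        self.conv_up = nn.Conv2d(high_ch, mid_ch, kernel_size=1)    # B1 from upsampled M1
        self.conv_skip = nn.Conv2d(skip_ch, mid_ch, kernel_size=1)  # C1 from A4
        self.conv_out = nn.Conv2d(mid_ch, 1, kernel_size=1)
        self.lam = lam  # manually set, 0 < lambda < 1

    def forward(self, a5, a4):
        m1 = F.interpolate(a5, size=a4.shape[2:], mode="bilinear", align_corners=False)
        d1 = self.conv_up(m1) + self.conv_skip(a4)
        n1 = torch.sigmoid(self.conv_out(F.relu(d1)))
        # Assumed distance transform: exponential decay, standing in for exp(-lambda * d(., 0)).
        e1 = torch.exp(-self.lam * (1.0 - n1))
        return m1, e1

class ImageProcessingBlock(nn.Module):
    """Sketch of step (2): 1x1 conv on M1, resized + sigmoid input image, add, 1x1 conv, sigmoid."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv_f = nn.Conv2d(in_ch, 1, kernel_size=1)
        self.conv_i = nn.Conv2d(1, 1, kernel_size=1)

    def forward(self, m1, image):
        # `image` is assumed single-channel here for simplicity.
        f1 = self.conv_f(m1)
        g1 = torch.sigmoid(F.interpolate(image, size=f1.shape[2:], mode="bilinear",
                                         align_corners=False))   # Resize + Sigmoid
        h1 = f1 + g1
        return torch.sigmoid(self.conv_i(h1))                    # I1
```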
On the basis of the above method, the first attention depthwise-separable convolution block is implemented as follows: feature map M1 and feature map K1 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L1; L1 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L1 to obtain feature map P1.
On the basis of the above method, the second Bhattacharyya coefficient active contour attention module comprises a second initial contour block, a second image processing block, a second Bhattacharyya coefficient active contour block and a second attention block;
(1) The second initial contour block is implemented as follows: feature map P1 is upsampled to obtain feature map M2; M2 passes through a 1×1 convolution to obtain feature map B2; feature map A3 passes through a 1×1 convolution to obtain feature map C2; B2 and C2 are added to obtain feature map D2; D2 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N2; N2 is distance-transformed to obtain feature map E2, i.e. the initial contour;
The distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, with a manually set parameter λ satisfying 0 < λ < 1;
(2) The second image processing block is implemented as follows: feature map M2 passes through a 1×1 convolution to obtain feature map F2; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G2; F2 and G2 are added to obtain feature map H2; H2 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I2;
(3) The second Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I2, obtaining feature map J2: J2 = BCV(I2, E2, μ, ν), where μ and ν are positive, manually set parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E2 is the feature map E2;
The energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E2) = μ ∫_Ω δ(E2)|∇E2| dx + ν ∫_Ω H(E2) dx + ∫_Q √(P_in(E2, q)·P_out(E2, q)) dq,
where the term weighted by μ measures the length of the contour and the term weighted by ν measures the area inside the contour, δ denotes the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E2. For a given color q, the probability distribution functions P_in(E2, q) and P_out(E2, q) are estimated with Gaussian-kernel-based histograms.
(4) The second attention block is implemented as follows: feature map A3 and feature map J2 are multiplied to obtain feature map K2.
On the basis of the above method, the second attention depthwise-separable convolution block is implemented as follows: feature map M2 and feature map K2 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L2; L2 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L2 to obtain feature map P2.
On the basis of the above method, the third Bhattacharyya coefficient active contour attention module comprises a third initial contour block, a third image processing block, a third Bhattacharyya coefficient active contour block and a third attention block;
(1) The third initial contour block is implemented as follows: feature map P2 is upsampled to obtain feature map M3; M3 passes through a 1×1 convolution to obtain feature map B3; feature map A2 passes through a 1×1 convolution to obtain feature map C3; B3 and C3 are added to obtain feature map D3; D3 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N3; N3 is distance-transformed to obtain feature map E3, i.e. the initial contour;
The distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, with a manually set parameter λ satisfying 0 < λ < 1;
(2) The third image processing block is implemented as follows: feature map M3 passes through a 1×1 convolution to obtain feature map F3; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G3; F3 and G3 are added to obtain feature map H3; H3 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I3;
(3) The third Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I3, obtaining feature map J3: J3 = BCV(I3, E3, μ, ν), where μ and ν are positive, manually set parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E3 is the feature map E3;
The energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E3) = μ ∫_Ω δ(E3)|∇E3| dx + ν ∫_Ω H(E3) dx + ∫_Q √(P_in(E3, q)·P_out(E3, q)) dq,
where the term weighted by μ measures the length of the contour and the term weighted by ν measures the area inside the contour, δ denotes the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E3. For a given color q, the probability distribution functions P_in(E3, q) and P_out(E3, q) are estimated with Gaussian-kernel-based histograms.
(4) The third attention block is implemented as follows: feature map A2 and feature map J3 are multiplied to obtain feature map K3.
On the basis of the above method, the third attention depthwise-separable convolution block is implemented as follows: feature map M3 and feature map K3 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L3; L3 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L3 to obtain feature map P3.
On the basis of the above method, the fourth Bhattacharyya coefficient active contour attention module comprises a fourth initial contour block, a fourth image processing block, a fourth Bhattacharyya coefficient active contour block and a fourth attention block;
(1) The fourth initial contour block is implemented as follows: feature map P3 is upsampled to obtain feature map M4; M4 passes through a 1×1 convolution to obtain feature map B4; feature map A1 passes through a 1×1 convolution to obtain feature map C4; B4 and C4 are added to obtain feature map D4; D4 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N4; N4 is distance-transformed to obtain feature map E4, i.e. the initial contour;
The distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, with a manually set parameter λ satisfying 0 < λ < 1;
(2) The fourth image processing block is implemented as follows: feature map M4 passes through a 1×1 convolution to obtain feature map F4; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G4; F4 and G4 are added to obtain feature map H4; H4 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I4;
(3) The fourth Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I4, obtaining feature map J4: J4 = BCV(I4, E4, μ, ν), where μ and ν are positive, manually set parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E4 is the feature map E4;
The energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E4) = μ ∫_Ω δ(E4)|∇E4| dx + ν ∫_Ω H(E4) dx + ∫_Q √(P_in(E4, q)·P_out(E4, q)) dq,
where the term weighted by μ measures the length of the contour and the term weighted by ν measures the area inside the contour, δ denotes the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E4. For a given color q, the probability distribution functions P_in(E4, q) and P_out(E4, q) are estimated with Gaussian-kernel-based histograms.
(4) The fourth attention block is implemented as follows: feature map A1 and feature map J4 are multiplied to obtain feature map K4.
On the basis of the above method, the fourth attention depthwise-separable convolution block is implemented as follows: feature map M4 and feature map K4 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L4; L4 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L4 to obtain feature map P4.
On the basis of the above method, the output module is implemented as follows: feature map P4 passes through a 1×1 convolution, BatchNorm (which accelerates the convergence of the deep neural network) and a ReLU activation function to obtain the segmented image.
The effects provided in the summary of the invention are merely effects of embodiments, not all effects of the invention, and the above technical solution has the following advantages or beneficial effects:
the invention provides a Papanicolaou coefficient active contour attention structure, which helps our segmentation method to better adapt to gray scale differences between different areas and concentrate attention on a target structure by quantifying the similarity between the gray scale value distribution of a segmented area and an image; in the decoding stage, an attention depth separable convolution module is provided, and the module has lower computational complexity and can adaptively adjust the weight of each channel in the feature map so as to capture the relation between features more pertinently and learn the features with unimportant important feature inhibition better.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
FIG. 1 is a flow chart of training the network with the training set according to the present invention.
FIG. 2 shows the segmentation results of the network on the test set according to the present invention.
Detailed Description
The present invention will be further described with reference to the drawings and the detailed description below, in order to make the objects, technical solutions and advantages of the present invention more apparent. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
A CT image segmentation method based on Bhattacharyya coefficient active contour attention comprises the following steps:
S1, the data set consists of 3D CT images of kidneys and kidney tumors from MICCAI KiTS19; the training set comprises 210 3D CT volumes and the test set comprises 90 3D CT volumes;
S2, the nibabel library is invoked to process the volume data in the data set into 2D slices in PNG format, 10 slices are selected from each volume to obtain a data set D', and data enhancement is applied to the training set in D' to obtain a data set D (a sketch of this step is given after step S7);
S3, the network structure comprises an encoding part and a decoding part; the encoding part uses a pretrained VGG16 network to extract features from the input image, and the decoding part uses the Bhattacharyya coefficient active contour attention structure and attention depthwise-separable convolution blocks;
S4, the loss function Loss is calculated with the Dice loss function;
S5, an SGD optimizer adjusts the weights and biases in the network through back propagation;
S6, a variable is set to store the best evaluation metric during training on data set D, and the corresponding optimal weights and biases are saved to a newly created .pth file;
S7, the optimal weights and biases are loaded into the network, the images in the test set are read and segmented, and the results are saved as JPG files.
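A minimal sketch of step S2 is shown below: nibabel reads each 3D volume and 10 axial slices are written out as 2D PNG files. The paths, the evenly-spaced slice-selection rule and the intensity normalisation are assumptions; the patent only states that 10 slices are taken from each volume.

```python
import os
import numpy as np
import nibabel as nib
from PIL import Image

def volume_to_png_slices(nii_path: str, out_dir: str, n_slices: int = 10) -> None:
    """Read a 3D CT volume with nibabel and save n_slices axial slices as PNG files."""
    os.makedirs(out_dir, exist_ok=True)
    vol = nib.load(nii_path).get_fdata()                  # (H, W, D) voxel array
    # Assumed slice selection: n_slices indices spread evenly along the axial axis.
    idx = np.linspace(0, vol.shape[-1] - 1, n_slices).astype(int)
    for k, z in enumerate(idx):
        sl = vol[:, :, z]
        # Assumed intensity normalisation to 8-bit range for PNG output.
        sl = (sl - sl.min()) / (sl.max() - sl.min() + 1e-8)
        Image.fromarray((sl * 255).astype(np.uint8)).save(
            os.path.join(out_dir, f"slice_{k:02d}.png"))
```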
In this embodiment, the encoding part uses the pretrained VGG16 network to extract features from the input image. The invention directly uses the pretrained VGG16 network and its weight file; the pretrained VGG16 network comprises six stages in total, of which only the first five are used, the fifth stage omitting the Maxpooling operation, and the five stages yield feature maps A1, A2, A3, A4 and A5, respectively.
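The encoder can be sketched with torchvision's pretrained VGG16 as below. The exact split of `vgg16.features` into five stages (and dropping the final max-pooling layer) is an assumption consistent with the description, not code taken from the patent.

```python
import torch.nn as nn
from torchvision.models import vgg16

class VGG16Encoder(nn.Module):
    """Five VGG16 stages; the fifth stage omits its max-pooling, yielding A1..A5."""
    def __init__(self):
        super().__init__()
        feats = vgg16(pretrained=True).features  # on newer torchvision: vgg16(weights="IMAGENET1K_V1")
        self.stage1 = feats[:4]     # conv1_x (+ReLU)                  -> A1
        self.stage2 = feats[4:9]    # pool1, conv2_x                   -> A2
        self.stage3 = feats[9:16]   # pool2, conv3_x                   -> A3
        self.stage4 = feats[16:23]  # pool3, conv4_x                   -> A4
        self.stage5 = feats[23:30]  # pool4, conv5_x (no final pool5)  -> A5

    def forward(self, x):
        a1 = self.stage1(x)
        a2 = self.stage2(a1)
        a3 = self.stage3(a2)
        a4 = self.stage4(a3)
        a5 = self.stage5(a4)
        return a1, a2, a3, a4, a5
```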
In this embodiment, the decoding part uses the Bhattacharyya coefficient active contour attention structure and attention depthwise-separable convolution blocks. The decoding part comprises a first Bhattacharyya coefficient active contour attention module, a first attention depthwise-separable convolution block, a second Bhattacharyya coefficient active contour attention module, a second attention depthwise-separable convolution block, a third Bhattacharyya coefficient active contour attention module, a third attention depthwise-separable convolution block, a fourth Bhattacharyya coefficient active contour attention module, a fourth attention depthwise-separable convolution block and an output module.
In this embodiment, the first Bhattacharyya coefficient active contour attention module comprises a first initial contour block, a first image processing block, a first Bhattacharyya coefficient active contour block and a first attention block;
(1) The first initial contour block is implemented as follows: feature map A5 is upsampled to obtain feature map M1; M1 passes through a 1×1 convolution to obtain feature map B1; feature map A4 passes through a 1×1 convolution to obtain feature map C1; B1 and C1 are added to obtain feature map D1; D1 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N1; N1 is distance-transformed to obtain feature map E1, i.e. the initial contour;
The distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, with a manually set parameter λ satisfying 0 < λ < 1;
(2) The first image processing block is implemented as follows: feature map M1 passes through a 1×1 convolution to obtain feature map F1; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G1; F1 and G1 are added to obtain feature map H1; H1 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I1;
(3) The first Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I1, obtaining feature map J1: J1 = BCV(I1, E1, μ, ν), where μ and ν are positive, manually set parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E1 is the feature map E1;
The energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E1) = μ ∫_Ω δ(E1)|∇E1| dx + ν ∫_Ω H(E1) dx + ∫_Q √(P_in(E1, q)·P_out(E1, q)) dq,
where the term weighted by μ measures the length of the contour and the term weighted by ν measures the area inside the contour, δ denotes the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E1. For a given color q, the probability distribution functions P_in(E1, q) and P_out(E1, q) are estimated with Gaussian-kernel-based histograms (a sketch of this Bhattacharyya data term is given after step (4) below).
(4) The first attention block is implemented as follows: feature map A4 and feature map J1 are multiplied to obtain feature map K1.
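The data term of the energy above can be illustrated with the following sketch, which estimates the intensity distributions inside and outside a contour with Gaussian-kernel-smoothed histograms and evaluates their Bhattacharyya coefficient. The bin count, kernel bandwidth and the use of a single intensity channel (the patent uses the RGB color space Q) are simplifying assumptions, and the full level-set iteration of the BCV algorithm is not shown.

```python
import numpy as np

def kernel_histogram(values: np.ndarray, bins: int = 64, sigma: float = 2.0) -> np.ndarray:
    """Gaussian-kernel-smoothed histogram, normalised to a probability distribution."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    k = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kernel = np.exp(-0.5 * (k / sigma) ** 2)
    kernel /= kernel.sum()
    smooth = np.convolve(hist.astype(float), kernel, mode="same")
    return smooth / (smooth.sum() + 1e-12)

def bhattacharyya_coefficient(image: np.ndarray, contour_mask: np.ndarray) -> float:
    """B = sum_q sqrt(P_in(q) * P_out(q)); a small B means the distributions inside and
    outside the contour are well separated."""
    p_in = kernel_histogram(image[contour_mask > 0])
    p_out = kernel_histogram(image[contour_mask == 0])
    return float(np.sum(np.sqrt(p_in * p_out)))
```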
In this embodiment, the first attention depthwise-separable convolution block is implemented as follows: feature map M1 and feature map K1 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L1; L1 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L1 to obtain feature map P1, as sketched below.
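A PyTorch-style sketch of this attention depthwise-separable convolution block follows. Channel counts and the reduction ratio are assumptions, and the "residual convolution" is interpreted here as a 1×1 shortcut added back to the axial-convolution branch, which the patent does not specify further.

```python
import torch
import torch.nn as nn

class AttentionDepthSeparableBlock(nn.Module):
    """Concat -> point-wise conv -> BN -> 1x7 and 7x1 axial depthwise convs -> residual
    -> point-wise conv -> GELU (gives L), then SE-style channel attention (gives P)."""
    def __init__(self, in_ch, out_ch, reduction=4):
        super().__init__()
        self.pw1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.dw_1x7 = nn.Conv2d(out_ch, out_ch, kernel_size=(1, 7), padding=(0, 3), groups=out_ch)
        self.dw_7x1 = nn.Conv2d(out_ch, out_ch, kernel_size=(7, 1), padding=(3, 0), groups=out_ch)
        self.res = nn.Conv2d(out_ch, out_ch, kernel_size=1)   # assumed residual (shortcut) conv
        self.pw2 = nn.Conv2d(out_ch, out_ch, kernel_size=1)
        self.act = nn.GELU()
        self.se = nn.Sequential(                               # channel-attention branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, m, k):
        x = torch.cat([m, k], dim=1)                 # concatenate M_i and K_i along channels
        x = self.bn(self.pw1(x))
        y = self.dw_7x1(self.dw_1x7(x))
        y = self.act(self.pw2(y + self.res(x)))      # L_i
        return y * self.se(y)                        # P_i = L_i * channel weights
```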
In this embodiment, the second Bhattacharyya coefficient active contour attention module comprises a second initial contour block, a second image processing block, a second Bhattacharyya coefficient active contour block and a second attention block;
(1) The second initial contour block is implemented as follows: feature map P1 is upsampled to obtain feature map M2; M2 passes through a 1×1 convolution to obtain feature map B2; feature map A3 passes through a 1×1 convolution to obtain feature map C2; B2 and C2 are added to obtain feature map D2; D2 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N2; N2 is distance-transformed to obtain feature map E2, i.e. the initial contour;
The distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, with a manually set parameter λ satisfying 0 < λ < 1;
(2) The second image processing block is implemented as follows: feature map M2 passes through a 1×1 convolution to obtain feature map F2; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G2; F2 and G2 are added to obtain feature map H2; H2 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I2;
(3) The second Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I2, obtaining feature map J2: J2 = BCV(I2, E2, μ, ν), where μ and ν are positive, manually set parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E2 is the feature map E2;
The energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E2) = μ ∫_Ω δ(E2)|∇E2| dx + ν ∫_Ω H(E2) dx + ∫_Q √(P_in(E2, q)·P_out(E2, q)) dq,
where the term weighted by μ measures the length of the contour and the term weighted by ν measures the area inside the contour, δ denotes the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E2. For a given color q, the probability distribution functions P_in(E2, q) and P_out(E2, q) are estimated with Gaussian-kernel-based histograms.
(4) The second attention block is implemented as follows: feature map A3 and feature map J2 are multiplied to obtain feature map K2.
In this embodiment, the second attention depthwise-separable convolution block is implemented as follows: feature map M2 and feature map K2 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L2; L2 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L2 to obtain feature map P2.
In this embodiment, the third Bhattacharyya coefficient active contour attention module comprises a third initial contour block, a third image processing block, a third Bhattacharyya coefficient active contour block and a third attention block;
(1) The third initial contour block is implemented as follows: feature map P2 is upsampled to obtain feature map M3; M3 passes through a 1×1 convolution to obtain feature map B3; feature map A2 passes through a 1×1 convolution to obtain feature map C3; B3 and C3 are added to obtain feature map D3; D3 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N3; N3 is distance-transformed to obtain feature map E3, i.e. the initial contour;
The distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, with a manually set parameter λ satisfying 0 < λ < 1;
(2) The third image processing block is implemented as follows: feature map M3 passes through a 1×1 convolution to obtain feature map F3; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G3; F3 and G3 are added to obtain feature map H3; H3 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I3;
(3) The third Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I3, obtaining feature map J3: J3 = BCV(I3, E3, μ, ν), where μ and ν are positive, manually set parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E3 is the feature map E3;
The energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E3) = μ ∫_Ω δ(E3)|∇E3| dx + ν ∫_Ω H(E3) dx + ∫_Q √(P_in(E3, q)·P_out(E3, q)) dq,
where the term weighted by μ measures the length of the contour and the term weighted by ν measures the area inside the contour, δ denotes the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E3. For a given color q, the probability distribution functions P_in(E3, q) and P_out(E3, q) are estimated with Gaussian-kernel-based histograms.
(4) The third attention block is implemented as follows: feature map A2 and feature map J3 are multiplied to obtain feature map K3.
In this embodiment, the third attention depthwise-separable convolution block is implemented as follows: feature map M3 and feature map K3 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L3; L3 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L3 to obtain feature map P3.
In this embodiment, the fourth Bhattacharyya coefficient active contour attention module comprises a fourth initial contour block, a fourth image processing block, a fourth Bhattacharyya coefficient active contour block and a fourth attention block.
(1) The fourth initial contour block is implemented as follows: feature map P3 is upsampled to obtain feature map M4; M4 passes through a 1×1 convolution to obtain feature map B4; feature map A1 passes through a 1×1 convolution to obtain feature map C4; B4 and C4 are added to obtain feature map D4; D4 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N4; N4 is distance-transformed to obtain feature map E4, i.e. the initial contour;
The distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, with a manually set parameter λ satisfying 0 < λ < 1;
(2) The fourth image processing block is implemented as follows: feature map M4 passes through a 1×1 convolution to obtain feature map F4; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G4; F4 and G4 are added to obtain feature map H4; H4 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I4;
(3) The fourth Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I4, obtaining feature map J4: J4 = BCV(I4, E4, μ, ν), where μ and ν are positive, manually set parameters, BCV denotes the Bhattacharyya coefficient active contour algorithm, and E4 is the feature map E4;
The energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E4) = μ ∫_Ω δ(E4)|∇E4| dx + ν ∫_Ω H(E4) dx + ∫_Q √(P_in(E4, q)·P_out(E4, q)) dq,
where the term weighted by μ measures the length of the contour and the term weighted by ν measures the area inside the contour, δ denotes the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E4. For a given color q, the probability distribution functions P_in(E4, q) and P_out(E4, q) are estimated with Gaussian-kernel-based histograms.
(4) The fourth attention block is implemented as follows: feature map A1 and feature map J4 are multiplied to obtain feature map K4.
In this embodiment, the fourth attention depthwise-separable convolution block is implemented as follows: feature map M4 and feature map K4 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L4; L4 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L4 to obtain feature map P4.
In this embodiment, the output module is implemented as follows: feature map P4 passes through a 1×1 convolution, BatchNorm (which accelerates the convergence of the deep neural network) and a ReLU activation function to obtain the segmented image.
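Steps S6 and S7 (saving the best weights and biases to a .pth file and then segmenting the test images) can be sketched as follows. The evaluation metric, file names, thresholding and the shape of the network output are assumptions.

```python
import os
import numpy as np
import torch
from PIL import Image

def save_if_best(net, metric: float, best: float, path: str = "best_weights.pth") -> float:
    """Store the weights and biases whenever the validation metric improves."""
    if metric > best:
        torch.save(net.state_dict(), path)
        return metric
    return best

@torch.no_grad()
def segment_test_set(net, test_images, out_dir: str = "results", weights: str = "best_weights.pth"):
    """Load the optimal weights and biases, segment each test image, and save the mask as JPG."""
    net.load_state_dict(torch.load(weights, map_location="cpu"))
    net.eval()
    os.makedirs(out_dir, exist_ok=True)
    for name, img in test_images:               # iterable of (filename, CHW float tensor)
        pred = net(img.unsqueeze(0))[0, 0]      # assumed single-channel probability map
        mask = (pred > 0.5).float().numpy() * 255
        Image.fromarray(mask.astype(np.uint8)).save(os.path.join(out_dir, f"{name}.jpg"))
```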
The invention achieves 97.97%, 95.56%, 94.04% and 95.88% on the accuracy, precision, Jaccard similarity coefficient and Dice similarity coefficient evaluation metrics, respectively. Compared with advanced models such as U-Net, UNet++ and TransUNet, the invention can focus on the target structure and obtains better segmentation results for images with an uneven gray-value distribution.
While the embodiments of the present invention have been described above with reference to the drawings, this is not intended to limit the scope of the invention; it is apparent that those skilled in the art can make various modifications or variations on the basis of the technical solutions of the present invention without inventive effort.

Claims (7)

1. A CT image segmentation method based on Bhattacharyya coefficient active contour attention, characterized by comprising the following steps:
S1, the data set consists of 3D CT images of kidneys and kidney tumors from MICCAI KiTS19; the training set comprises 210 3D CT volumes and the test set comprises 90 3D CT volumes;
S2, the nibabel library is invoked to process the volume data in the data set into 2D slices in PNG format, 10 slices are selected from each volume to obtain a data set D', and data enhancement is applied to the training set in D' to obtain a data set D;
S3, the network structure comprises an encoding part and a decoding part; the encoding part uses a pretrained VGG16 network to extract features from the input image, using the first five stages of the pretrained VGG16 network, where the fifth stage does not include the max-pooling (Maxpooling) operation, and the five stages yield feature maps A1, A2, A3, A4 and A5, respectively; the decoding part uses the Bhattacharyya coefficient active contour attention structure and attention depthwise-separable convolution blocks;
the decoding part comprises a first Bhattacharyya coefficient active contour attention module, a first attention depthwise-separable convolution block, a second Bhattacharyya coefficient active contour attention module, a second attention depthwise-separable convolution block, a third Bhattacharyya coefficient active contour attention module, a third attention depthwise-separable convolution block, a fourth Bhattacharyya coefficient active contour attention module, a fourth attention depthwise-separable convolution block and an output module;
the first Bhattacharyya coefficient active contour attention module comprises a first initial contour block, a first image processing block, a first Bhattacharyya coefficient active contour block and a first attention block;
(1) The first initial contour block is implemented as follows: feature map A5 is upsampled to obtain feature map M1; M1 passes through a 1×1 convolution to obtain feature map B1; feature map A4 passes through a 1×1 convolution to obtain feature map C1; B1 and C1 are added to obtain feature map D1; D1 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N1; N1 is distance-transformed to obtain feature map E1, i.e. the initial contour;
the distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, where λ is a manually set parameter satisfying 0 < λ < 1;
(2) The first image processing block is implemented as follows: feature map M1 passes through a 1×1 convolution to obtain feature map F1; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G1; F1 and G1 are added to obtain feature map H1; H1 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I1;
(3) The first Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I1, obtaining feature map J1: J1 = BCV(I1, E1, μ, ν), where μ and ν are positive, manually set parameters, BCV is the Bhattacharyya coefficient active contour algorithm, and E1 is the feature map E1;
the energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E1) = μ ∫_Ω δ(E1)|∇E1| dx + ν ∫_Ω H(E1) dx + ∫_Q √(P_in(E1, q)·P_out(E1, q)) dq,
where the term weighted by μ represents the length of the contour and the term weighted by ν represents the area within the contour, δ represents the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out are respectively the regions inside and outside the initial contour E1; for a given color q, the probability distribution functions P_in(E1, q) and P_out(E1, q) are estimated using Gaussian-kernel-based histograms;
(4) The first attention block is implemented as follows: feature map A4 and feature map J1 are multiplied to obtain feature map K1;
the first attention depthwise-separable convolution block is implemented as follows: feature map M1 and feature map K1 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L1; L1 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L1 to obtain feature map P1;
S4, the loss function Loss is calculated with the Dice loss function;
S5, an SGD optimizer adjusts the weights and biases in the network through back propagation;
S6, a variable is set to store the best evaluation metric during training on data set D, and the corresponding optimal weights and biases are saved to a newly created .pth file;
S7, the optimal weights and biases are loaded into the network, the images in the test set are read and segmented, and the results are saved as JPG files.
2. The CT image segmentation method based on Bhattacharyya coefficient active contour attention according to claim 1, wherein: the second Bhattacharyya coefficient active contour attention module comprises a second initial contour block, a second image processing block, a second Bhattacharyya coefficient active contour block and a second attention block;
(1) The second initial contour block is implemented as follows: feature map P1 is upsampled to obtain feature map M2; M2 passes through a 1×1 convolution to obtain feature map B2; feature map A3 passes through a 1×1 convolution to obtain feature map C2; B2 and C2 are added to obtain feature map D2; D2 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N2; N2 is distance-transformed to obtain feature map E2, i.e. the initial contour;
the distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, where λ is a manually set parameter satisfying 0 < λ < 1;
(2) The second image processing block is implemented as follows: feature map M2 passes through a 1×1 convolution to obtain feature map F2; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G2; F2 and G2 are added to obtain feature map H2; H2 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I2;
(3) The second Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I2, obtaining feature map J2: J2 = BCV(I2, E2, μ, ν), where μ and ν are positive, manually set parameters, BCV is the Bhattacharyya coefficient active contour algorithm, and E2 is the feature map E2;
the energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E2) = μ ∫_Ω δ(E2)|∇E2| dx + ν ∫_Ω H(E2) dx + ∫_Q √(P_in(E2, q)·P_out(E2, q)) dq,
where the term weighted by μ represents the length of the contour and the term weighted by ν represents the area within the contour, δ represents the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out are respectively the regions inside and outside the initial contour E2; for a given color q, the probability distribution functions P_in(E2, q) and P_out(E2, q) are estimated using Gaussian-kernel-based histograms;
(4) The second attention block is implemented as follows: feature map A3 and feature map J2 are multiplied to obtain feature map K2.
3. The CT image segmentation method based on Bhattacharyya coefficient active contour attention according to claim 2, wherein: the second attention depthwise-separable convolution block is implemented as follows: feature map M2 and feature map K2 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L2; L2 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L2 to obtain feature map P2.
4. The CT image segmentation method based on Bhattacharyya coefficient active contour attention according to claim 3, wherein: the third Bhattacharyya coefficient active contour attention module comprises a third initial contour block, a third image processing block, a third Bhattacharyya coefficient active contour block and a third attention block;
(1) The third initial contour block is implemented as follows: feature map P2 is upsampled to obtain feature map M3; M3 passes through a 1×1 convolution to obtain feature map B3; feature map A2 passes through a 1×1 convolution to obtain feature map C3; B3 and C3 are added to obtain feature map D3; D3 passes successively through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N3; N3 is distance-transformed to obtain feature map E3, i.e. the initial contour;
the distance transform is determined using the Euclidean distance d(·, 0), the convolution product ∗ and the exponential function exp with base e, where λ is a manually set parameter satisfying 0 < λ < 1;
(2) The third image processing block is implemented as follows: feature map M3 passes through a 1×1 convolution to obtain feature map F3; the input image passes through a Resize image processing function and a Sigmoid activation function to obtain feature map G3; F3 and G3 are added to obtain feature map H3; H3 passes through a 1×1 convolution and a Sigmoid activation function to obtain feature map I3;
(3) The third Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I3, obtaining feature map J3: J3 = BCV(I3, E3, μ, ν), where μ and ν are positive, manually set parameters, BCV is the Bhattacharyya coefficient active contour algorithm, and E3 is the feature map E3;
the energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E3) = μ ∫_Ω δ(E3)|∇E3| dx + ν ∫_Ω H(E3) dx + ∫_Q √(P_in(E3, q)·P_out(E3, q)) dq,
where the term weighted by μ represents the length of the contour and the term weighted by ν represents the area within the contour, δ represents the Dirac delta function, H is the Heaviside function, Ω is the definition domain, Q is the RGB color space, and in and out are respectively the regions inside and outside the initial contour E3; for a given color q, the probability distribution functions P_in(E3, q) and P_out(E3, q) are estimated using Gaussian-kernel-based histograms;
(4) The third attention block is implemented as follows: feature map A2 and feature map J3 are multiplied to obtain feature map K3.
5. The CT image segmentation method based on Bhattacharyya coefficient active contour attention according to claim 4, wherein: the third attention depthwise-separable convolution block is implemented as follows: feature map M3 and feature map K3 are concatenated, and the concatenation result passes successively through a point-wise convolution, Batch Normalization, a 1×7 axial depthwise convolution, a 7×1 axial depthwise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L3; L3 passes successively through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied with L3 to obtain feature map P3.
6. The CT image segmentation method based on Bhattacharyya coefficient active contour attention as claimed in claim 5, wherein the fourth Bhattacharyya coefficient active contour attention module comprises a fourth initial contour block, a fourth image processing block, a fourth Bhattacharyya coefficient active contour block and a fourth attention block;
(1) The fourth initial contour block is implemented as follows: feature map P₃ is up-sampled to obtain feature map M₄; feature map M₄ is passed through a 1×1 convolution to obtain feature map B₄; feature map A₁ is passed through a 1×1 convolution to obtain feature map C₄; feature maps B₄ and C₄ are added to obtain feature map D₄; feature map D₄ is passed through a ReLU activation function, a 1×1 convolution and a Sigmoid activation function to obtain feature map N₄; feature map N₄ is passed through the distance transform to obtain feature map E₄, i.e. the initial contour;
the distance transform is given by E₄ = N₄ * exp(−λ·d(·, 0)), where d(·, 0) is the Euclidean distance, * is the convolution product, exp is the exponential function with the natural constant e as its base, and λ is a manually set parameter with 0 < λ < 1;
(2) The fourth image processing block is implemented as follows: feature map M₄ is passed through a 1×1 convolution to obtain feature map F₄; the input image is passed through a Resize image processing function and a Sigmoid activation function to obtain feature map G₄; feature maps F₄ and G₄ are added to obtain feature map H₄; feature map H₄ is passed through a 1×1 convolution and a Sigmoid activation function to obtain feature map I₄;
(3) The fourth Bhattacharyya coefficient active contour block uses the Bhattacharyya coefficient active contour algorithm to perform a limited number of segmentation iterations on feature map I₄, obtaining feature map J₄: J₄ = BCV(I₄, E₄, μ, ν), where μ and ν are manually set positive parameters, BCV is the Bhattacharyya coefficient active contour algorithm, and E₄ is the feature map E₄ obtained by the fourth initial contour block;
the energy function of the Bhattacharyya coefficient active contour algorithm is:
E(E₄) = μ ∫_Ω δ(E₄)|∇E₄| dx + ν ∫_Ω H(E₄) dx + ∫_Q √(P_in(E₄, q)·P_out(E₄, q)) dq,
where the term weighted by μ represents the length of the contour, the term weighted by ν represents the area inside the contour, δ is the Dirac function, H is the Heaviside function, Ω is the image domain, Q is the RGB color space, and in and out denote the regions inside and outside the initial contour E₄, respectively; for a given color q, the probability distribution functions P_in(E₄, q) and P_out(E₄, q) are estimated with Gaussian-kernel histograms;
(4) The fourth attention block is implemented as follows: feature map A₁ is multiplied by feature map J₄ to obtain feature map K₄.
7. The CT image segmentation method based on Bhattacharyya coefficient active contour attention as claimed in claim 6, wherein the fourth attention depth-separable convolution block is implemented as follows:
feature map M₄ and feature map K₄ are concatenated, and the concatenation result is passed sequentially through a point-wise convolution, Batch Normalization, a 1×7 axial depth-wise convolution, a 7×1 axial depth-wise convolution, a residual convolution, a point-wise convolution and a GELU activation function to obtain feature map L₄;
feature map L₄ is passed sequentially through global average pooling, a 1×1 convolution, a ReLU activation function, a 1×1 convolution and a Sigmoid activation function, and the result is multiplied by feature map L₄ to obtain feature map P₄.
The output module is implemented as follows: feature map P₄ is passed through a 1×1 convolution, BatchNorm (to accelerate convergence of the deep neural network) and a ReLU activation function to obtain the segmented image d_end.
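A minimal sketch of the output head described above, assuming 64 input channels and a single-channel segmentation map; both values are illustrative.

```python
import torch.nn as nn

# Sketch of the output head (channel counts are assumptions):
output_head = nn.Sequential(
    nn.Conv2d(64, 1, kernel_size=1),  # 1x1 convolution to the segmentation map
    nn.BatchNorm2d(1),                # BatchNorm to speed up convergence
    nn.ReLU(inplace=True),            # ReLU activation -> segmented image d_end
)
```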
CN202311344404.8A 2023-10-18 2023-10-18 CT image segmentation method based on Pasteur coefficient active contour attention Active CN117078705B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311344404.8A CN117078705B (en) 2023-10-18 2023-10-18 CT image segmentation method based on Pasteur coefficient active contour attention

Publications (2)

Publication Number Publication Date
CN117078705A CN117078705A (en) 2023-11-17
CN117078705B true CN117078705B (en) 2024-02-13

Family

ID=88706521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311344404.8A Active CN117078705B (en) 2023-10-18 2023-10-18 CT image segmentation method based on Pasteur coefficient active contour attention

Country Status (1)

Country Link
CN (1) CN117078705B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114723669A (en) * 2022-03-08 2022-07-08 同济大学 Liver tumor two-point five-dimensional deep learning segmentation algorithm based on context information perception
CN115393293A (en) * 2022-08-12 2022-11-25 西南大学 Electron microscope red blood cell segmentation and positioning method based on UNet network and watershed algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10420523B2 (en) * 2016-03-21 2019-09-24 The Board Of Trustees Of The Leland Stanford Junior University Adaptive local window-based methods for characterizing features of interest in digital images and systems for practicing same

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A-PSPNet: a PSPNet image semantic segmentation model incorporating an attention mechanism; Gao Dan; Chen Jianying; Xie Ying; Journal of China Academy of Electronics and Information Technology (Issue 06); full text *
Chan-Vese Attention U-Net: An attention mechanism for robust segmentation; Nicolas Makaroff et al.; arXiv; full text *
Deep Depthwise Separable Convolutional Network for Change Detection in Optical Aerial Images; Ruochen Liu et al.; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing; full text *
Research on kidney tumor segmentation in CT images based on multi-scale convolutional neural networks; Ji Hong; China Master's Theses Full-text Database, Medicine and Health Sciences; full text *

Also Published As

Publication number Publication date
CN117078705A (en) 2023-11-17

Similar Documents

Publication Publication Date Title
CN110889853B (en) Tumor segmentation method based on residual error-attention deep neural network
CN110211140B (en) Abdominal blood vessel segmentation method based on 3D residual U-Net and weighting loss function
CN110889852B (en) Liver segmentation method based on residual error-attention deep neural network
CN107016681B (en) Brain MRI tumor segmentation method based on full convolution network
Guo et al. Integrating guided filter into fuzzy clustering for noisy image segmentation
WO2021203795A1 (en) Pancreas ct automatic segmentation method based on saliency dense connection expansion convolutional network
CN112233026A (en) SAR image denoising method based on multi-scale residual attention network
CN109978848B (en) Method for detecting hard exudation in fundus image based on multi-light-source color constancy model
CN111489364B (en) Medical image segmentation method based on lightweight full convolution neural network
CN109492668B (en) MRI (magnetic resonance imaging) different-phase multimode image characterization method based on multi-channel convolutional neural network
CN115170582A (en) Liver image segmentation method based on multi-scale feature fusion and grid attention mechanism
CN111968138B (en) Medical image segmentation method based on 3D dynamic edge insensitivity loss function
CN111783583B (en) SAR image speckle suppression method based on non-local mean algorithm
CN116664605B (en) Medical image tumor segmentation method based on diffusion model and multi-mode fusion
CN115393584A (en) Establishment method based on multi-task ultrasonic thyroid nodule segmentation and classification model, segmentation and classification method and computer equipment
Zhang et al. A novel denoising method for CT images based on U-net and multi-attention
CN107292855B (en) Image denoising method combining self-adaptive non-local sample and low rank
Tan et al. Automatic prostate segmentation based on fusion between deep network and variational methods
CN110766657A (en) Laser interference image quality evaluation method
CN113378620B (en) Cross-camera pedestrian re-identification method in surveillance video noise environment
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN117078705B (en) CT image segmentation method based on Pasteur coefficient active contour attention
Baldeon-Calisto et al. Resu-net: Residual convolutional neural network for prostate mri segmentation
CN115761358A (en) Method for classifying myocardial fibrosis based on residual capsule network
CN114972937A (en) Feature point detection and descriptor generation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant